tbh the framing here is dubious. Consider: I have noticed a bias in the use of cognitive biases, a higher order bias if you will, as follows: Cognitive biases are more prevalent when to the detriment of black people. I call this metabias “racism”.

Yeah but then someone comes along and says that’s just a special case of the various race- and ethnicity-based prejudices that have existed in various times and places, which are/were in turn caused by more fundamental factors like something-or-other about ingroups and outgroups and pretty soon you’re listening to an abstract lecture about the Green People and the Blue People and all their bad ideas and you nod along agreeably but nothing in your picture of reality changes.

again, you’d think a group interested in the science of cognitive biases would talk more about the cognitive biases involved in, say, sexism, or racism, or homophobia, or (etc), what with those things being thoroughly documented in the literature…

I remember there being a big (and bizarre) argument about this way back on Overcoming Bias ca. 2008

I think the line being pushed at the time was “we only talk about biases in the most general sense and leave specific applications up to you.”  This may have been an attempt to make the site more apolitical than social science academia, at the cost of ignoring some of its work?  This idea may have fallen out of favor since then, I dunno.

(As I’ve mentioned before I think a lot of biases only go away if you think about them in the context of “specific applications,” so I think “leaving the applications up to you” kills a lot of the point of talking about bias.)

LW and neoreaction (proposal/guess):

If you sell yourself with the line “I’ll seriously consider any idea and I don’t like rejecting ideas for ‘political’ reasons,” then the pariahs of political idea-space, the people no one else will listen to, come flocking to you.  You keep them around because explicitly pushing them away would be too “political,” and you’re open-minded, right?

But over time their presence shifts your Overton Window, and they start to feel like a relatively nearby point on the political spectrum, even if you never intended for that to happen.

Since LW tries so hard to stay out of politics, it ends up with the politics of the people everyone less “apolitical” has pushed away.  Politics only becomes okay to talk about (rather than “the mind-killer,” or at best “hard mode”) once not talking about it feels more “political” than talking about it.  Should we do something about all these neoreactionaries?  Well, wouldn’t that be a politically motivated decision?  And so the neoreactionaries stay, and the Overton Window shifts.

ihfsttinuf asked: About Heaper's Hangout: It's a very small forum mostly made up of ex-members of the TV Tropes forums (although we prefer not talking about that place). We basically started out as an outlet for nonsense and friendly in-jokes, but mostly we just talk about media and day-to-day stuff. One long-standing member of ours has linked to your LessWrong posts several times, and reception has generally been positive. I moderate there, by the way. Namaste.

Good to know, thanks!  (I have heard bad things about the TV Tropes forum so it’s good to hear you prefer not to talk about it)

About the ZQ issue, it’s just something that strikes me as really complicated, and I see a lot of conflicting opinions about it on the dash, from people none of whom I’d like to dismiss out of hand.  So any more talking I do about it will be more and more mired in complications and I don’t want to do that when it seems like there’s this peanut gallery watching that considers it a failure every time I don’t say “actually the issue is clear-cut and you guys are obviously 100% right.”

ozymandias271:

reading SSC comments led me to the discovery that there is a SHIT RATIONALISTS SAY VIDEO

I DID NOT KNOW THIS BEFORE

IT IS AMAZING

nostalgebraist

(via bpd-dylan-hall-deactivated20190)

Finally, my headcanon is confirmed


accordion-druid asked: Hey Nostalgebraist, your conversation with hot-gay-rationalist reminded me of a question I have about LessWrong "Bayesianism": Does *anybody* in that movement actually live by Bayesianism as described? Like, keep a file on their computer with thousands of estimated truth probabilities for various statements that they continually update using Bayes' rule? And if so, have they ever shared their file? I'm trying to imagine someone actually doing this and it's too hilarious to be real (continued)

Like, their probability for “Obama is a secret Muslim” starts at 0.00001 and when Obama makes a conciliatory gesture towards Iran they go “Hmm, a secret Muslim would do that with probability 0.99, a non-Muslim Democratic president would do that with probability 0.87, better update Obama-is-a-Muslim to 0.000012.” And don’t I also need probabilities for “Obama is a werewolf” and “Obama likes ice cream” and in fact *every possible declarative English sentence* I can write about Obama? (cont)

Even if I pick a finite subset to focus on, how the hell do I estimate the conditional probabilities? Aren’t I just pulling 2 numbers out of my ass to “update” the accuracy of another number I pulled out of my ass? When an LW-er says “I’m updating my probability for X” are they literally estimating conditionals or is it just a jargony way of saying “oh hey i hadn’t considered that angle”? In which case, does Bayesianism really mean anything more precise than “try to be open to new evidence”?

I don’t know of anyone who’s actually done anything like this.

However, there are LWers who make bets in prediction markets, which (among other things) can be seen as a way of definitively committing to specific probabilities in somewhat the way you describe.  E.g. here are the bets made on predictionbook.com by prominent LW guy gwern.

I think the standard LW answer to the questions in your last paragraph would be “ideally one would literally do the spreadsheet thing, but of course one can’t do that in real life.  But keeping that in mind as an ideal can lead to various rules of thumb for thinking.  ’Try to be open to new evidence’ is one, but not the only one.”  (hot-gay-rationalist has a post here about this kind of thing.)
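For concreteness, the single update described in the ask is trivial to write down; here’s a minimal sketch using the asker’s made-up numbers (all of which are illustrative, as the ask itself emphasizes):

```python
# One Bayes-rule update, using the asker's illustrative numbers.
# H = "Obama is a secret Muslim", E = "makes a conciliatory gesture toward Iran"
prior = 0.00001         # P(H)
p_e_given_h = 0.99      # P(E | H), pulled out of thin air, as the ask notes
p_e_given_not_h = 0.87  # P(E | not-H), likewise

# Bayes' rule: P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|not-H) P(not-H)]
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(posterior)  # ~0.0000114, close to the asker's rounded 0.000012
```

The arithmetic is the easy part; the asker’s real complaint is where the three input numbers come from, and no amount of code answers that.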

It’s kind of similar to how some of the early utilitarians hoped that giant unwieldy computations about utility could eventually be boiled down to some simple rules, so that you’d be able to “derive” the “right” simple practical morality by writing some 1000-page treatise on utility or whatever, and then go around living by that simple practical morality in real life, rather than thinking about utility all the time.

(Also this is probably the right place to say that I have some work to do and should stay off tumblr for the rest of the day)

Apparently someone once bet gwern $100 that “Cosma Shalizi believes that P=NP”?

People are strange

su3su2u1-deactivated20160226 asked: I wrote a post about computability/time travel/HPMOR (it's titled something like chapter 14). If you get a chance, let me know if you spot anything obviously wrong there. I'm pretty sure I'm right, but I'd hate to mislead people.

ghostdunk:

nostalgebraist:

su3su2u1:

hot-gay-rationalist:

nostalgebraist:

I didn’t see anything wrong in the post (which, for others’ reference, is here).

I guess you could be wrong in your criticism of Harry’s statement about “Turing computability,” but only because I have no idea what he means by that statement.

Like, it seems obvious that physicists talk about GR possibly allowing closed timelike curves and this doesn’t make the equations of GR any less capable of being approximated on a Turing machine, so the idea “a computer couldn’t simulate this” seems clearly wrong?  (There might be more than one self-consistent possibility for what happens on the CTC, but at worst that would be indeterminacy that means you’d need more initial data to uniquely solve the equations, which isn’t really a computability issue?)

On the other hand, maybe he’s assuming that the simulation is running in polynomial time (in some reasonable sense) in the external-to-simulation universe, and that by doing too much time travel you could slow it down a lot by forcing it to do NP computations in order to find a self-consistent state (the famous “you can do NP-complete computations with a time machine” thing)?  But why would that be a problem?  Even if you did something that made it take a million times longer (in the external universe) to compute more of the internal universe, you wouldn’t be able to notice that from the internal universe.  (EY is a huge fan of Permutation City so this should be a familiar point to him.)

I can’t think of any possible meaning for Harry’s statement that makes it true, but I’m so unsure about what it’s supposed to mean that I don’t feel like I can confidently declare it false.

[snip]

[snip]

[snip]

Or to put it another way: “well, P = NP if you make time irrelevant.” Which makes it meaningless. Because then P = PSPACE = EXP = DECIDABLE. Why stop at NP?

The whole point of the P = NP problem isn’t about how to break the rules to solve problems quickly. It’s that our understanding of complexity theory is so poor that we can’t even show whether there are problems where you can check a solution quickly but cannot solve them quickly, OR if being able to check a solution quickly implies you can solve them quickly.
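The “check a solution quickly but (maybe) cannot solve quickly” gap can be made concrete with a toy SAT sketch; the formula and helper names here are invented for illustration, with a polynomial-time verifier and a brute-force solver that tries all 2^n assignments:

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is (variable_index, negated?).
# Example: (x0 or not x1) and (x1 or x2)
formula = [[(0, False), (1, True)], [(1, False), (2, False)]]

def check(assignment, formula):
    """Polynomial-time verifier: does this assignment satisfy every clause?"""
    return all(
        any(assignment[var] != neg for var, neg in clause)
        for clause in formula
    )

def solve(formula, n_vars):
    """Exponential-time solver: brute force over all 2^n assignments."""
    for bits in product([False, True], repeat=n_vars):
        if check(bits, formula):
            return bits
    return None

print(solve(formula, 3))
```

`check` runs in time linear in the formula size; nobody knows whether every problem with such a verifier also has a polynomial-time `solve`, which is exactly the open question.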

I don’t think it’s true that introducing CTCs makes PSPACE = EXP = DECIDABLE.  Aaronson says that P_CTC = PSPACE in his chapter on time travel.  (P_CTC is the class of things you can do in polynomial time, where the relevant extent of time is the length of the CTC.)

CTCs don’t quite make time irrelevant, since the length of the CTC itself is still a variable.  You still have to have enough time inside the CTC to do whatever magic trick is guaranteed to be consistent iff the answer came out at the end.  In the case of NP, say, we can use the “magic trick” of just “checking answers,” which can be done in polynomial time.  In some harder cases (I think?) the length of the CTC has to get non-polynomially longer with the size of the problem, unlike with NP.
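The NP “magic trick” can be caricatured classically: rig the loop dynamics so that a history is a fixed point of one trip around the loop iff it passes the polynomial-time answer check. A toy sketch, with everything (the `step` function, the example problem) invented for illustration:

```python
def step(x, n, verify):
    """One trip around the time loop: keep x if it passes the check,
    otherwise advance to the next candidate. The self-consistent
    histories are exactly the x with verify(x) == True."""
    return x if verify(x) else (x + 1) % (2 ** n)

n = 4
verify = lambda x: x * x == 49  # the polynomial-time "answer check"

# Classically we have to enumerate candidates, which is why this buys
# nothing without an actual time machine.
fixed = [x for x in range(2 ** n) if step(x, n, verify) == x]
print(fixed)  # [7]: the only self-consistent history encodes the answer
```

The check inside the loop is cheap; the time machine is what lets nature “find” the fixed point for free.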

I’m not entirely sure why this is true (I’m very sleep-deprived and can’t tell if Aaronson doesn’t really argue for PSPACE being an upper bound for P_CTC, or whether he does and I’m just not noticing it).  Here’s the relevant chapter of the lecture notes, which I think is either identical or close to identical to the book chapter based on it.

su3su2u1-deactivated20160226 asked: I wrote a post about computability/time travel/HPMOR (it's titled something like chapter 14). If you get a chance, let me know if you spot anything obviously wrong there. I'm pretty sure I'm right, but I'd hate to mislead people.

hot-gay-rationalist:

nostalgebraist:

hot-gay-rationalist:

nostalgebraist:

[snip]

[snip]

[snip]

No, the emphasis there was on the word straightforwardly. The most straightforward way of computing a causal universe is, well, in order. Compute thing, then compute consequences of thing, then compute consequences of that thing, and so on.

But that doesn’t work for CTCs, which need to be computed more like “compute all universes that start from these initial conditions, then discard all except the ones that are self-consistent.” You can’t do cause-effect chains when CTCs are involved. CTCs can only be computed by brute force, which is not the “standard” way of computing a thing, style of thing.

Okay, I see what you mean now, but this distinction seems kind of trivial to me.  There is no “standard way of computing a thing”; there are just various algorithms that, say, approximate various differential equations to one order of accuracy or another.

If GR includes solutions with CTCs and we can approximate them to arbitrary accuracy numerically, then it’s Turing computable.  It might not look quite like an ordinary method for solving PDEs (or whatever), but who cares?  We want to solve the problem posed to us, that’s all.  I don’t know where this cultural rule about “the most straightforward way of computing a causal universe” is coming from.  I don’t live in a world where people have to “compute” various “causal universes” all the time and have a set of standards built up around this; I live in a world described by various equations where people sometimes have to find approximate solutions to those equations using computers, and do it however fits the task.
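As a toy instance of that recipe, here’s a sketch of a made-up discrete universe with a time loop, simulated by enumerating candidate histories and keeping the self-consistent ones (every detail of the universe is invented for illustration):

```python
from itertools import product

# A toy discrete universe: T time steps, one bit of state. The ordinary
# dynamics flip the bit each step; a "wormhole" additionally requires the
# state at t=0 to equal the state sent back from t=2. Instead of stepping
# forward causally, enumerate every candidate history and filter.
T = 5

def consistent(history):
    dynamics_ok = all(history[t + 1] == 1 - history[t] for t in range(T - 1))
    wormhole_ok = history[0] == history[2]  # the time-loop constraint
    return dynamics_ok and wormhole_ok

histories = [h for h in product([0, 1], repeat=T) if consistent(h)]
print(histories)  # two self-consistent histories: (0,1,0,1,0) and (1,0,1,0,1)
```

Note that more than one self-consistent history survives here, which is the indeterminacy point from earlier in the thread: the filtering procedure is perfectly Turing-computable, it just may not have a unique answer.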

(Technically it might be the case that sometimes solutions with CTCs exist but the Cauchy problem for that spacetime is not well-posed, i.e. you can’t “predict” what the system is going to do from initial data?  I don’t know much from Googling around, but it seems like we’re not sure about these kinds of things yet.  I think if the Cauchy problem were ill-posed, that might mean that there are too many possible CTCs, and the physics would need to be completed with some extra information about how nature picks a unique one?  But that’s a problem with the equations, not with your computer’s ability to integrate them.)