
slatestarscratchpad:

nostalgebraist:

As I said in an earlier post, I think the rationalists have a tendency to get stuck in rabbit holes, because the fact that people don't believe in something won't shut off the community's spiral around that thing.

It’s kind of a weird side effect of “I take all ideas seriously,” where people file away certain ideas as worthy of taking seriously even if they don’t believe them, while not treating every idea in the same way (because that would be impossible).

In practice, there are certain ideas that really do cause you to take people less seriously.  This is what I am trying to get across (perhaps badly) with the flat earth examples – there are some things that you not only disbelieve but do not consider worth time worrying over, and in fact you judge people for believing them.  You can’t not do this, unless you have some sort of strange view where someone reliably coming to incorrect conclusions contains no evidence about the quality of their thinking.

Since you can’t really take every idea seriously, certain ideas become “protected,” so that no matter how little credence you give to them, you pretend they don’t convey any evidence about their believers.  Friendly AI was my example of such an idea in an earlier post: the practice among the rationalists is to treat it as a protected position even if they think it’s utterly absurd, while they (or most of them) would never give the same credence to various religious or political positions which have many more intelligent adherents than FAI.

As I said in the earlier post, it’s easy for me to imagine a world in which some blogger with the right Grey Tribe credentials had gotten convinced that there really was something in phrenology after all, and scores of mindless contrarians had latched onto this, and it had migrated into the rationalist world, and “phrenology” had become a protected position.  Although many people would nonetheless think there was nothing at all in phrenology, they would assure you that they did so in an informed way, tempered by the arguments of prominent internet phrenologists.  And arguments about phrenology would pop up again and again, often just for the sake of argument, since few would actually believe in it – it would just be one of those topics, like radical Bayesianism, or FAI, or Mencius Moldbug.

(A fondness for these things can be tolerated, although of course you don’t believe in them; a fondness for those without the right Grey Tribe credentials just indicates mindless conformity.  Of course.)

(And here is where everyone says “you’re saying things I like are as silly as phrenology!”  No, I’m trying to point out that it is possible to spiral around things in a way that has nothing to do with their truth value; the fact that you find phrenology silly is what makes it a good example of this.  Can you really not imagine that a few worlds over there is a rationalist community in which phrenology is a protected position even though almost everyone thinks it’s wrong?  Where tolerance of phrenologist views is used to signal one’s intellectual openness?  Where you aren’t a phrenologist, but some of your best friends are, and that’s OK, and fuck anyone who says otherwise?  But if this is OK, why isn’t it true here and now?)

I think what you’re calling “treat as a protected position” is what I call “don’t immediately dismiss a position without thinking about it, then spend the rest of your time mocking people who believe in it.”

I wrote in Cowpox Of Doubt that:

The more we talk about homeopathy, and moon hoaxes, and creationism – the more people who have never felt any temptation towards these beliefs go through the motions of “debunk”-ing them a hundred times to one another for fun – the more we are driving home the message that these are a representative sample of the kinds of problems we face.

Phrenology is obviously being used the same way here as homeopathy in my Cowpox example: as a stand-in for "correct beliefs are always obvious, therefore if I hear anything I disagree with it can be dismissed without thought and its proponents mocked soundly."

If you demand we dismiss a certain class of beliefs as equivalent to phrenology, and therefore too stupid to deserve serious analysis, then you're going to have to back that up with a good heuristic for determining that a belief is in this class which is quicker than studying it and debating it honestly.

The only heuristic I’ve ever learned that even approaches this level is “unpopular beliefs are stupid and should be dismissed without thought”. This is a good heuristic for most people, but if you force everyone to adopt it, then progress halts, since all popular beliefs have to start as unpopular beliefs.

My own experience has always been that all attempts to come up with better heuristics end in flaming disaster. For example, we know from studies (one of which I replicated myself with the LW survey data) that the ratio of the lengths of your fingers is correlated with how feminist you are. How in the world are we supposed to know that it’s OBVIOUSLY idiotic to try to infer your personality characteristics by bumps on your head, but it’s correct to try to infer your political beliefs by the length of your fingers?

The only reason phrenology is supposed to be so obviously idiotic that it’s a reductio ad absurdum for anything, is that it was once popular and is now discredited. If you want to use that as a good analogy for neoreaction, fine. But Friendly AI doesn’t follow that pattern. “Radical Bayesianism,” as if that’s a thing, doesn’t follow that pattern. And sometimes that pattern is wrong - cognitive psychology was considered discredited during the Behaviorist years, but then everyone admitted that actually Behaviorism had been wrong and the cognitive perspective was right all along.

If very many smart people whom I otherwise trusted started saying phrenology was correct, and they were able to point to studies and experts who supported them, then I would take it seriously as something worth thinking about (even if there were other studies and experts who didn’t). I’m not sure how else you expect people to think, unless you want everyone to stick to their first prejudice and never change.

Again, I think talking about “protected positions” is a serious misinterpretation of the issue. The reason that rationalists tolerate discussion of Friendly AI, and not discussion of phrenology, is that nobody tries to argue for phrenology, and if they did they wouldn’t be able to make a good case for it. If they made a great case for it and had a bunch of studies supporting it and some people whom we really trust like SarahC and gwern said they’d looked into it and found it extremely convincing, we’d talk about that too. The fact that this hasn’t happened probably says a lot more about phrenology than it does about the norms of the rationalist community.

Also, it’s not like we’re patronizingly “tolerating” Friendly AI. I think Eliezer’s mostly correct about that. So do probably at least half of other rationalists. How are we supposed to reject a position we actually believe?

It seems like you’re pushing back against “if it seems obviously wrong, it’s wrong,” where I’m trying to push back against “even if it’s wrong, that doesn’t mean we should stop talking about it endlessly.”  These are two completely distinct issues.

In the latter phrase, I wrote “wrong” rather than “obviously wrong.”  This is not because I think it’s possible to have perfect personal access to whether an idea is right or wrong, but because what I’m saying is not about quick heuristics, it’s about considered positions.  My question is, if you are pretty damn sure something is wrong (to the standard “about as sure as you are of anything in the relevant category” – cf. my earlier statement that if you have to reject any one ideology, fascism is a good choice), and people around you keep talking about it, what should you do?  What if they keep arguing about it but the arguments don’t seem to change anyone’s mind?

What if a large proportion of a given community considers the idea respectable, almost no one outside that community does, you think it’s almost certainly wrong, a lot of people in the community actually agree with you, but none of this will stop the endless debate cycle about this one idea?  At what point do you start to worry that the community is making some kind of error?

I’m finding that it’s very difficult to make analogies about this issue that will be interpreted the way I want them to be.  One desirable quality for such an analogy is that it selects some idea that most people reading it will have a considered objection to.  But that means choosing an idea that is widely disapproved of, like homeopathy, and those ideas now scan as “obviously wrong,” so when I mention them, people assume I’m talking about quick heuristics and not considered positions.

It’s possible there is a bigger problem here – I am implying that there is a category between “obviously wrong” and “respectable (deserving of what I called ‘protection’ earlier),” and maybe you don’t think there is.  I do, though, because the space of possible ideas is vast, and the space of possible ideas for which a good argument has been made is much smaller.  There are ideas that are just boringly wrong, that is, ideas which don’t ping the “but that’s absurd!” sensor in one’s mind, but which nonetheless the evidence is against.  (Most discredited academic ideas are like this – someone thought they were worth looking into, someone looked into them, the end.  The exception is the ones many people have heard about, like phrenology or the plum pudding model, which now ping us as “obviously wrong” for that purely historical reason.)

My frustration here is with feeling unable to say “no, you are just boringly wrong, in addition to being wrong according to the quick heuristic.”  Friendly AI, the digit ratio thing, and a lot of Moldbug’s ideas sound like crackpot stuff to many people; they might also actually be wrong, for boring empirical reasons.  But ideas that have become protected do not go away no matter how much evidence stacks up against them; people solemnly nod their heads in response to the latest bit of evidence, and then go on respecting the position in principle.

This happens in other communities with other protected ideas – including those that might strike you or me as “obviously wrong” – which is part of why I think the “protected position” idea has validity.  When I was younger I spent a lot of time around alternative medicine people because my father is into alternative medicine, and a lot of these people (including my father) are not stupid.  I was even prescribed homeopathic remedies, which my father encouraged me to keep an open mind about.  These people don’t just ignore negative evidence or physical plausibility; what they do is point to the vast uncertainty of medical knowledge, to the distorting effects of vested interests, to the occasional study that contradicts the trend, to complicated treatises by their favorite mavericks which you will never have time to read, to statistical nuances, and so forth.  And yet the feeling remains: they are always performing these gestures in favor of the positions they happen to like.  The gestures always go in the direction of “keep [homeopathy / biofeedback / vitamin megadoses / this month’s preferred obscure supplement] respectable,” and never in the direction of any of the other myriad ideas in idea-space.  It feels very similar to talking to LWers about FAI or the like: you can make all the arguments you like, but the fixation on seeing the position as worthy-of-interest is never going to go away.

(via slatestarscratchpad)
