why we can’t have nice things
I think the reason I have this fascination with Less Wrong / Overcoming Bias is that while some of the ideas they espouse strike me as ridiculous, I really like some of their goals, and I’m curious whether you can get the good parts without the bad parts – and worried that the answer might be “no”.
I really, really liked Overcoming Bias when I first discovered it as a sophomore in college. At the time I was dealing with some people who actively and explicitly embraced “irrationality” as a way of ad-hoc justifying their awful, harmful behavior and beliefs, and the idea of a group of people who spent a lot of time thinking about what might be wrong with their beliefs seemed like a huge breath of fresh air. “Rationalism” is a really misleading word for this kind of behavior (which is part of the larger pattern of poor communication that seems to plague these sorts of people). “Self-improvement” or “productive, healthy self-criticism” are probably closer to the mark. And I’m really all for these things, both from a general moral standpoint, and from the emotional standpoint of having had extensive experience with a number of people who could have benefited from a bit of productive, healthy self-criticism.
The downside is that on Overcoming Bias and Less Wrong, the attempt to train people to reflect and self-criticize was mixed in with a number of other, much more questionable things. In the OB era there was Robin Hanson’s all-encompassing obsession with “signaling” and his general dickishness. That ceased to be a problem when Hanson and Eliezer Yudkowsky parted ways and Yudkowsky formed Less Wrong, but Less Wrong was full of Yudkowsky’s own fixations: Bayesianism (a defensible but by no means obvious position in philosophy of math/science that I doubt most LWers could defend beyond saying “it’s obvious when you think about it”), the idea that the singularity is near (totally incompatible with all of my admittedly amateur knowledge of neuroscience and AI, and something I have never seen a good argument for), the “Friendly AI” theory (ditto), etc.
What’s so frustrating about this is that none of it has anything to do with the basic idea of getting people to reflect on their beliefs and think about their cognitive biases. If anything, it seems like a demographic coincidence – this particular type of self-improvement is popular among white male tech workers in the Bay Area, and so are transhumanism and all that other stuff, so they all happened to collide. From what I’ve heard about Less Wrong meetup groups (outside the Bay Area, where Yudkowsky and MIRI are), they basically sound like friendly self-improvement societies for nerds, which seems like a perfectly good thing.
What I’m wondering, though, is whether those groups would even exist if Less Wrong were less bizarre. Maybe it just isn’t possible, for some reason, to have a group that provides community and psychological support in the way religion does without having beliefs that are sufficiently removed from people’s immediate experience? Maybe if you made “rationalism” cleaner and less tainted by San Francisco (this would probably involve giving it a better name than “rationalism”), it would be too boring for people to pay attention to, and thus these (probably unequivocally good) meetup groups wouldn’t exist? Is it possible that you just can’t produce religion-like groups (which have many benefits) without having religion-like beliefs? That if Less Wrong were just about carefully reflecting on your beliefs rather than about God robots creating time loops by resurrecting you in the future and subjecting you to eternal damnation, it would be a great little blog that no one would read, completely pure but completely ineffectual?
