
why we can’t have nice things

I think the reason I have this fascination with Less Wrong / Overcoming Bias is that while some of the ideas that they espouse strike me as ridiculous, I really like some of their goals, and I’m curious whether you can get the good parts without the bad parts – and worried that the answer might be “no.”

I really, really liked Overcoming Bias when I first discovered it as a sophomore in college.  At the time I was dealing with some people who actively and explicitly embraced “irrationality” as a way of ad-hoc justifying their awful, harmful behavior and beliefs, and the idea of a group of people who spent a lot of time thinking about what might be wrong with their beliefs seemed like a huge breath of fresh air.  “Rationalism” is a really misleading word for this kind of behavior (which is part of the larger pattern of poor communication that seems to plague these sorts of people).  “Self-improvement” or “productive, healthy self-criticism” are probably closer to the mark.  And I’m really all for these things, both from a general moral standpoint, and from the emotional standpoint of having had extensive experience with a number of people who could have benefited from a bit of productive, healthy self-criticism.

The downside is that on Overcoming Bias and Less Wrong, the attempt to train people to reflect and self-criticize was mixed in with a number of other, much more questionable things.  In the OB era there was Robin Hanson’s all-encompassing obsession with “signaling” and his general dickishness.  That ceased to be a problem when Hanson and Eliezer Yudkowsky split ways and Yudkowsky formed Less Wrong, but Less Wrong was full of Yudkowsky’s own fixations: Bayesianism (a defensible but by no means obvious position in philosophy of math/science that I doubt most LWers could defend beyond saying “it’s obvious when you think about it”), the idea that the singularity is near (totally incompatible with all of my admittedly amateur knowledge of neuroscience and AI, and something I have never seen a good argument for), the “Friendly AI” theory (ditto), etc.

What’s so frustrating about this is that none of it has anything to do with the basic idea of getting people to reflect on their beliefs and think about their cognitive biases.  If anything, it seems like a demographic coincidence – this particular type of self-improvement is popular among white male tech workers in the Bay Area, and so is transhumanism and all that other stuff, so they all happened to collide.  From what I’ve heard about Less Wrong meetup groups (outside the Bay Area where Yudkowsky and MIRI are), they basically sound like friendly self-improvement societies for nerds, which seems like a perfectly good thing.

What I’m wondering, though, is whether those groups would even exist if Less Wrong were less bizarre.  Maybe it just isn’t possible, for some reason, to have a group that provides community and psychological support in the way religion does without having beliefs that are sufficiently removed from people’s immediate experience?  Maybe if you made “rationalism” cleaner and less tainted by San Francisco (this would probably involve giving it a better name than “rationalism”), it would be too boring for people to pay attention to, and thus these (probably unequivocally good) meetup groups wouldn’t exist?  Is it possible that you just can’t produce religion-like groups (which have many benefits) without having religion-like beliefs?  That if Less Wrong were just about carefully reflecting on your beliefs rather than about God robots creating time loops by resurrecting you in the future and subjecting you to eternal damnation, it would be a great little blog that no one would read, completely pure but completely ineffectual?

It was startling to me when I noticed Eliezer Yudkowsky had his own Dark Enlightenment trading card, but in retrospect the connection between the Dark Enlightenment and Less Wrong (which apparently goes pretty deep) seems somehow totally natural.

They’re clearly the same kind of nerds, though I don’t really know what name to call that kind or how to characterize them.

There’s a certain style of blog out there that includes lots and lots of links to earlier blog posts by the same author, often with links standing in for arguments (instead of “[extravagant claim] [evidence]” you’ll just get “[extravagant claim]”).

This is a pretty clever rhetorical strategy, because it makes it look like the blogger has already demonstrated these claims in earlier posts, yet the posts behind the links will have their own links to posts with their own links and so forth.  It makes it effectively impossible to pin down the writer for not substantiating their claims, because anywhere you might look for substantiation turns out to be an empty box with a note pointing you somewhere else, and the process ends only with your own exhaustion.  (Unless you literally read all of the blogger’s output, it’s impossible to be sure you just haven’t come upon the one post in which all the treasure is actually hidden.)

Is there any equivalent of this in pre-internet writing, or is it a truly new thing?  It’s hard for me to imagine how someone could have written in this way without the internet.  (Scholars who constantly cite themselves are a lot like this, but there’s a strong existing prejudice against doing that, especially when making big claims.  Citing yourself and no one else in a book or paper “looks bad” in a way that linking yourself does not, at this point in internet culture, “look bad.”)

A device too small and weak to satisfy the above definition does not count as a paperclip and so a given mass cannot be subdivided into infinitely many paperclips.

Since this has been Eliezer Yudkowsky Diss Central for, like, several weeks, I feel like I should balance myself out by saying that his Quantum Physics Sequence is a real gem.

(from what I’ve read of it at least)

aloadedspiralgalaxy replied to your post: I’ve finally been trying …

I’ve just started reading it upon the request of a casual hs fan because I think they implied they’d help me trawl the archives if I started it.but I’ve often been told that I’d like it because it is ‘like hs’. maybe b/c weird&on the internet/themes?

I think I’ve heard the same thing, and it makes me wary of judging it too early, because I know how perilous that is with HS.

(On the other hand, HPMOR has a notice that says roughly “it gets good around Ch. 5 and if you don’t like it by Ch. 10, stop reading.”  If someone were to try to write such a notice for HS, I can’t imagine there being even a rough consensus on where to place those two points.)

kadathinthecoldwaste:

nostalgebraist:

I’ve finally been trying to sit down and actually read Harry Potter and the Methods of Rationality, both out of Yudkowsky curiosity and (more importantly) because three or four people have independently recommended it to me over the years.

I may be biased against it because of my opinions of the author, and I’m also not very far at all (Chapter 7), but so far it is really not very good and is making me worry about what it means that multiple people have independently told me “you would really like this.”

I have a bit of a hypothesis about this. Let me start with the caveat that I haven’t actually read HPMOR, and this is based solely on my reading of the HP series and fan reaction to both.

So, there’s this thing in the Harry Potter series, this thing that doesn’t make a lick of sense: the near-complete absence of anything like science in the wizarding world. Sure, there’s the Department of Mysteries, but that’s like envisioning a modern America in which Los Alamos is the only working laboratory. Wizards, even the most intelligent and inquisitive, display a bovine incuriosity when it comes to this force that makes their entire society possible. It would be one thing if the society in question showed some sort of visible and dogmatic aversion to science, but instead it somehow just doesn’t come up, even among muggle-born wizards, at least some of whom have presumably heard of the scientific method.

Now, my ranty presentation aside, this isn’t something that I lose much sleep over. I enjoy the Harry Potter series, but I don’t love it, and I don’t take it terribly seriously, in large part because of its various world-building quirks and failures (see also: its lack of explanation for the absence of any form of post-secondary education). A lot of people really love the Harry Potter series for various reasons, though, and some of those people are also seriously invested in Science. These seem to be the majority of the HPMOR fans in my social circle. For a lot of them HPMOR seems not just interesting on its own merits, but also to serve as a form of fix fic for this dissonance between the story they love and its nonsensical portrayal of science. It need not be great literature (and again, I can’t say for certain whether it is or not) because it scratches a pretty nasty itch.

I can see that, and I think I like those aspects of HPMOR too – there’s just a lot of other stuff going on in the story that makes it hard for me to see as some sort of substitute for HP with better handling of science.  In particular, the character of Harry is pretty much unrecognizable (and incredibly irritating, though I get the sense that’s deliberate, and may change over time).

It’s not so much “Harry Potter without a major worldbuilding flaw” as it is “a Harry Potter AU where (among other things) Harry is a totally different person, which changes everything, and one of the many consequences is that Harry notices and points out a major worldbuilding flaw.”

Which I guess could scratch the same itch – it’s just way more different than it would need to be to do so.  (You tell the waiter that your soup is too watery, and he replaces it with … a hamburger.  Is the hamburger too watery?  Well, no, but … )

(via dagny-hashtaggart)

there’s a _certain type_ of person that likes hpmor, and they are often of the type that thinks everything they like, other people will love unconditionally

That seems like a good potential explanation to keep in mind. Thanks.

I’ve finally been trying to sit down and actually read Harry Potter and the Methods of Rationality, both out of Yudkowsky curiosity and (more importantly) because three or four people have independently recommended it to me over the years.

I may be biased against it because of my opinions of the author, and I’m also not very far at all (Chapter 7), but so far it is really not very good and is making me worry about what it means that multiple people have independently told me “you would really like this.”

i. wait. what? wow. “donate as much money as you can to me so i can write my harry potter fanfic, which is crucial to the future of mankind”??! still, as unpleasant and shady as that sounds, it seems pretty garden-variety crank stuff in a lot of ways.

Yeah.  I think where it gets interesting is that this guy and his followers are actually very sincerely, and in some ways successfully, devoted to self-reflection and avoiding groupthink and stuff like that.  The celebration of an arrogant leader and the “donate all your money” thing make them sound a lot like a cult, but half their writing is actually pretty good stuff about a topic you could describe as “how to be less like people in cults.”  So if you’re interested in cults, they’re a really bizarre border case.

If you want to learn more, I recommend reading this and all of the links on the upper right, and the pages linked as sources if you want corroboration (since RationalWiki itself has a pretty obvious negative slant on this stuff).