
another twist of the LW hilarity kaleidoscope

I suggested that there was something to be gained from staying in school, reading great works of literature and philosophy, and arguing about ideas with people who have different views. After all, this had been the education of Peter Thiel. In “The Diversity Myth,” he and Sacks wrote, “The antidote to the multiculture is civilization.” I didn’t disagree. Wasn’t the world of libertarian entrepreneurs one more self-enclosed cell of identity politics?

Around the table, the response was swift and negative. Yudkowsky reported that he was having a “visceral reaction” to what I’d said about great books.

(George Packer, “No Death, No Taxes: The libertarian futurism of a Silicon Valley billionaire”)

I have something of the same visceral reaction myself – but in Yudkowsky’s case I actually do think he might do well to crack open some Dostoyevsky once in a while.  After all, he professes an interest in saving the world, and in attracting the most capable people in the world to his cause.  Surely he would best be able to do so if he had a command of the cultural references that traditionally signify intellectual and moral weight?

Instead, when he deploys cultural references, they tend to be a little more like …

  • this meditation upon the Problem of Catgirl Harems
  • this post, in which he refers vaguely to “Japanese fiction,” by which he appears to mean “anime”
  • his famous Harry Potter fanfic
  • his many fanfic recs – he reads a huge amount of fanfic, and his taste skews exclusively towards poorly written stories about cartoonish superheroes slinging magic powers at each other for hundreds of monotonous pages (this is a typically unreadable example)
  • his downright chilling comparison of people who don’t share his values to non-player characters in a video game: “And remember:  To be a PC, you’ve got to involve yourself in the Plot of the Story.  Which from the standpoint of a hundred million years from now, is much more likely to involve the creation of Artificial Intelligence or the next great advance in human rationality (e.g. Science) than… than all that other stuff.  Sometimes I don’t really understand why so few people try to get involved in the Plot.”

This is all significant for two reasons.  First, it indicates that his worldview has been shaped by bad fiction.  He tries to pass off these references as jokes or amusing diversions from the substance of his writing, but if you read enough you’ll eventually notice that he makes these “jokes” constantly and persistently, and often in lieu of substance, and you’ll realize the simplest explanation is that he really means them.

Second, it indicates that he’s either unable or unwilling to realize how poorly he’s poised to reach the vast majority of humanity.  He must know that most people in existence would find these persistent references incomprehensible, repulsive or both.  (What the fuck is a “catgirl”?)  So he must simply think that only the people who get the references are worth reaching.  That sounds absurd, and it is.  But it’s an understandable attitude for someone who never went to high school or college, started using the internet early on, and indeed essentially socialized himself through the internet.  Such a person is likely to have never encountered many smart people who are not stereotypical nerds, because the majority of heavy internet users are stereotypical nerds.  Such a person could actually come to the absurd conclusion that people who aren’t stereotypical nerds aren’t worth reaching, and close off their supposedly world-changing gospel from anyone who doesn’t fit that description.

If they don’t know what My Little Pony is, how crucial could they be to the future of humanity, really?

I wonder why Less Wrong isn’t more worried about climate change, another case where there’s a small but nonzero probability of truly extreme catastrophic effects (a probability which will never be eliminated by better data and theory, because it’s a basic consequence of the way feedback works).

According to Eliezer Yudkowsky, the expected disutility (negative utility) of learning about the concept known as Roko’s Basilisk is enormous.

Roko’s Basilisk: The Argument Which Must Not Be Named

blurds asked: I understand this (to the extent that I am able to, of course.) The thing is I'm sitting here and hoping that he IS wrong about this particular tribe, and that there's a way for you to be in community with folks like you without having to subscribe to a lot of batshit opinions - that there's a disconnect between content and form, and that there are people like this woman who are trying to transform this discourse for the better without diluting it!

(Sorry, I didn’t respond to this at the time because I was half-asleep and then I forgot about it)

I understand what you’re saying and I have the same hope — but there is an important piece of subtext here that I think is necessary for understanding why I get so weirdly touchy and defensive of “that particular tribe”

I had a whole long post written up about this, but it sucked in many ways so I deleted all of it.  The gist is: people (surprisingly many people) used to treat me like I was an android from a sci-fi story, and assume bizarre things about me as a consequence (“Rob doesn’t have emotions,” “Rob does not understand social nuances,” “Rob has no sense of aesthetics beyond brute functionality,” “Rob is asexual or aromantic or both,” etc. — not all of these were negative, although “Rob has encyclopedic knowledge” got weird pretty fast).  All of this was pretty obviously a consequence of the way I talked and had nothing to do with my actual views, as I could tell because it went away when I changed the way I talked without changing my views.  I’ve become very wary of rhetoric like “doesn’t see shades of grey” or “overly literal” or “overly earnest” because in my personal experience these terms, while frequently accurate (who can see all of the nuances in the real world, really?), only seem to get used when the subtext is “you are talking in a style that makes me not see you as a Real Person.”  That sounds pretty extreme, but I’ve found it to be true more often than not in my own life

And what’s really scary about that subtext is that “sounding like a robot” is still my default state — although it’s something that I’ve been able to paper over pretty successfully, I will lapse back into it if I’m tired or in a severe emotional state.  That makes this kind of subtext especially threatening, because it appears to say, to me, “if you ever enter an unusually vulnerable state, I will, as a consequence, stop seeing you as a Real Person,” which is of course not a helpful thing for anyone in a vulnerable state

So like yes, I agree that the Less Wrong people are wrong about some stuff, but I am not convinced that the way people express this, or the vehemence with which they express it, is not largely stylistically driven — because it sounds exactly, down to specific phrases, like the kind of entirely stylistically driven criticism I used to get, and which I will presumably get again if I ever forget to put on the human suit before leaving the house

I understand this is not the most nefarious stereotype in the entire world, but as someone who is not actually a robot, I reserve the right to care to an “irrational” (ha ha) extent about things that have come up a lot in my life even if they are not that big a deal in the grand scheme of things

Don’t know what to say about this, but it should be linked given what my blog has become lately – more aporia-inducing (for me) weirdness about the “rationalist community”

Following Big Yud on Facebook has been an … experience

Friendly AI Theory is like Aspect Inversion Theory, except about real life

On view at Mr. Arnold’s event, which was funded by a Kickstarter campaign that raised $8,330, was a distinctly millennial brand of nonbelief. (NYT)

I read this first thing in the morning and mistook “millennial” for “millenarian,” which would have been 1) more interesting and 2) also true

Given what I’ve been posting about lately I feel pretty much obligated to link this