oddly, the pre-MIRI days were in some ways more sensible

Thinking about this kinda stuff reminded me of the page about Countersphexist Algernons, which is a bit of Yudlore worth remembering

all work and no play makes rob write dumb posts

I’ve been productive enough today to indulge in some self-promotion, so:

Reasons to read Floornight (if you have not already done so):

  • curiosity about what horrors the monster behind noted bad blog “nostalgebraist dot tumblr dot com” could create if given free rein to create a fictional world filled with fake people
  • shamelessly derivative of a bunch of stuff I like and you may also like
  • wacky bickering scientists!!!
  • robot character who speaks in verbose purple prose has been described as “very charming,” “adorable,” and “a likable guy”
  • submarines
  • endorsed not only by several of tumblr dot com’s finest posters, but also by noted Catgirl Volcano Theorist / Abysmal Self-Insert Fanfic-Writer Extraordinaire / Destined Savior of Humanity Eliezer Yudkowsky Himself (is this a pro or a con? you decide!!!)
  • categorized as “rational fiction” by reddit (?????)
  • someone once compared a character in it to BBC Sherlock but they were wrong, that guy is nothing like BBC Sherlock
  • includes no characters who are like BBC Sherlock
  • your readership will help my vulnerable ego recover from that one time when I was a teenager and tried to get everyone to read my sci-fi epic and no one did because it was terrible (except one guy, bless his heart, who read the whole thing and then explained to me why it was terrible, leading to a sudden moment of tragic recognition)

This is why Eliezer Yudkowsky can’t take over the world just by promoting his simulation of Professor Quirrell to be in charge of his brain, as several earnest people have proposed to me. Look, I don’t mean to sound immodest here, but that would in fact be an enormous step down for me.

Somehow in the process of producing that last post, I started “The Ballad of Big Yud” playing in some Chrome netherworld that doesn’t seem to be associated with any of my tabs, and so I can’t get it to stop even after closing tumblr and YouTube

Owned

duder i know u think u are tough on yudkowsky b/c u used to like him a lot more, but u are still way easier on him than anyone else i interact w/ on the internet

No I know, this is kind of my schtick

I like joking around about him as much as the next guy but I also find him and his mistakes legitimately interesting and worth talking about in terms that aren’t just black and white

Follow-up to the previous post: having read all of the new Yudkowsky writing essays I am now actually confused about why he thinks writers should write “intelligent characters”?

It seems to be a mixture of “they’re more interesting” (see previous post) and “they’re more realistic” (stance taken here)

His response to characters who make egregiously bad, nonsensical, or irrational decisions is always negative, but the justification seems to waver between “that’s boring because I already know people behave that way” and “no one really behaves that way.”  Those can’t both be true.

(I guess this is not a new problem – compare it to: “people are prone to the conjunction fallacy and don’t realize it, and Tetlock showed that even domain experts suck at prediction; by the way, you should take my detailed futurological speculations very seriously”)

As usual, the new Yudkowsky writing advice takes swipes at “literary fiction,” and while it’s tempting to just classify this as him indicating that he belongs to one subculture and not another, I think it actually makes internal sense.

Yudkowsky seems to think that fiction should be about philosophy or science and not about psychology.  A lot of what is called “literary fiction” is about trying to depict people very accurately, including all the little oddities and shades of gray that distinguish real people from stylized or simplified characters.  If you try to read this as “making a point,” the point is going to be a pretty banal one, like “people are complicated.”  But it isn’t making a point, it’s just striving for reality, and some people like that.

He’s a bit like a conceptual art fanatic looking at a piece of detailed representational art and saying, “what, like you think I don’t know clothing has all those folds and stuff?”

So, for instance, this piece has a good point to make – conflicts between two sides that both have strong moral cases to make are cool and not written often enough – but insists that making both sides realistically flawed is bad because it “weakens the conflict.”  But for many people, realism is a virtue in itself.  Yudkowsky simply doesn’t recognize this; he wants characters to teach him things or make clever arguments he hasn’t heard before, not to merely be recognizably human, which would mean sometimes simply being stupid or ignorant or incompetent.  (“But I already know what people are like!  I already know clothing has folds!”)

I feel contractually obligated to inform you that there is now more Yudkowsky writing advice

It’s not quite as bad as the thing about “envisioning awesome destinies” but then how could it be, really

queenshulamit-deactivated201602 asked: What is your response to people who say MIRI diverts people's attention from the more immediately pressing concern of climate change?

youzicha:

nostalgebraist:

slatestarscratchpad:

I wonder if these same people ever worry that, let’s say, poverty relief or feminist activism diverts people’s attention from the more immediately pressing concern of climate change. If so, they get +2 consistency points - but then I wouldn’t expect them to talk about MIRI, since in terms of total number of dollars / hours of effort put in it’s about 0.01% of the other two.

(I wonder if saying “MIRI diverts people’s attention from the more immediately pressing concern of climate change” diverts people’s attention from the more immediately pressing concern of poverty relief diverting attention from the more immediately pressing concern of climate change.)

But I actually think the situation is even better than that. I think that something like feminist activism funges strongly against climate change, since it’s using the time of political activists who are good at raising awareness in the general public and getting political stuff done.

Something like MIRI funges very weakly against climate change, because it’s getting meta-mathematics geeks to write proofs and maybe a few people to donate money. At this point I don’t think the climate change movement really needs either of those things. It’s so well-funded that MIRI’s million or so would be a tiny drop in the barrel, and although it’s possible that meta-mathematics geeks could, with some retraining, become climate simulation geeks, I don’t really think the lack of sufficiently good climate simulations is what’s holding global action against climate change back.

In other words, this seems a lot to me like motivated reasoning - “MIRI is weird, therefore MIRI is bad, therefore let me find some reason MIRI is bad, even if I would never consistently apply that reasoning to anything else.”

The size/influence of MIRI is just not relevant at the margin I’m considering.  The question I’m asking is “what should one person who’s good at math choose for their career?”, and that one person will be spending 100% of their work time working for MIRI if they choose to work for MIRI.  At this margin it is, yes, worth worrying about whether MIRI is the right choice or not.  If you make a suboptimal choice, the utilitarian analysis doesn’t care whether your particular suboptimal choice is taken rarely or often.

But the existing number of researchers does influence the expected value of adding one more researcher. E.g. suppose there are currently ten people studying the theory of friendly AI, and a thousand studying the theory of climate models. If I decide to be the 11th FAI researcher there is a pretty good chance that some of the ideas I come up with would have eluded the other 10 people, but it’s unlikely that I will come up with anything important that all the other 1000 climate people missed.

Another way to think of it is that there are more high-value low-hanging fruits when a field is small. E.g. if I join a crystallography group in the 1950s and spend 5 years determining the structure of DNA, that’s really valuable and enables lots of technology and further research. If I join a crystallography group in 2014, and determine the structure of yet another obscure protein to be added to the Protein Database, that’s less valuable. There’s a reason they did DNA first.
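The diminishing-returns intuition in the last two paragraphs can be made concrete with a toy model (the numbers and the model itself are my own illustration, not anything from the posts above): suppose a field contains K discoverable ideas and each researcher independently hits any given idea with probability p. Then the expected number of ideas the (n+1)-th researcher finds that all n predecessors missed is K · p · (1 − p)^n, which shrinks rapidly as the field grows.

```python
def marginal_new_ideas(n_existing, K=100, p=0.05):
    """Expected number of ideas the (n_existing + 1)-th researcher
    discovers first, assuming K discoverable ideas and each researcher
    independently reaching any given idea with probability p.
    (K and p are made-up illustrative values.)"""
    return K * p * (1 - p) ** n_existing

# The 11th researcher in a 10-person field still expects to find
# several ideas everyone else missed...
print(marginal_new_ideas(10))    # ~3 novel ideas

# ...while the 1001st researcher in a 1000-person field expects
# essentially none.
print(marginal_new_ideas(1000))  # effectively zero
```

This is just the “low-hanging fruit” point in formula form; it says nothing about the absolute importance of either field, which is why the reply below can grant it while still disputing the conclusion.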

Sure, and I’m taking diminishing marginal returns into account when I make the comparison.  Theoretical climate science is both big and understaffed – no contradiction there, since it’s a very important and very difficult problem, and a fairly “parallelizable” one (it’s useful to try many different approaches at the same time and see which works best).

Also, I think one has to be careful about one’s assumptions when making this kind of marginal returns analysis.  If you deem a discovery to be valuable because it “enables lots of technology and further research,” then that value only happens on the assumption that the further developments also happen, even though they’ll be “boring,” non-foundational, incremental research.  The discovery of DNA may have been great because it enabled the creation of the field of molecular biology, but it didn’t directly create molecular biology – other people had to do that, choosing to join a maturing (and, eventually, mature) field rather than going off to try to be the Watson and Crick of some new field.

In other words, this strikes me as the kind of logic that would lead an entrepreneur to endlessly found companies and then abandon them, because the “marginal return” of hiring employee #1 at company #(n+1) was greater than the “marginal return” of hiring employee #2 at company #n.  Marginal returns analysis depends on some way of attributing responsibility for an end result to different steps in the process of producing that result, and giving most of the credit to the most “foundational” steps may produce bad advice.  (It could lead to a weird, bad world in which no project ever reaches a mature stage because everyone’s aiming to become the legendary founder of some mature project; a world where every low-hanging fruit is picked and no one picks any other fruit; a world where DNA is discovered but no one bothers to create molecular biology.)

Or: advising an otherwise completely generic person who is “good at math” to aim at becoming a foundational scientist is probably bad advice, because very little of the (useful) science that gets done is foundational science.  We are probably somewhat blind to this because successful founders are more famous than the (equally necessary) incrementalists that followed them, and because unsuccessful founders are forgotten.

(Is the latter part true/important?  I’m not sure.  I’m trying to think of examples of failed and now largely forgotten attempts to create academic fields, and all I’m coming up with is Catastrophe Theory.  Of course, if they’re forgotten, that would explain why I can’t think of many examples … but once I’m invoking that, I’ve created an unfalsifiable theory that can handle any datum.  I dunno.)