queenshulamit-deactivated201602 asked: What is your response to people who say MIRI diverts people's attention from the more immediately pressing concern of climate change?

slatestarscratchpad:

nostalgebraist:

slatestarscratchpad:

I wonder if these same people ever worry that, let’s say, poverty relief or feminist activism diverts people’s attention from the more immediately pressing concern of climate change. If so, they get +2 consistency points - but then I wouldn’t expect them to talk about MIRI, since in terms of total number of dollars / hours of effort put in it’s about 0.01% of the other two.

(I wonder if saying “MIRI diverts people’s attention from the more immediately pressing concern of climate change” diverts people’s attention from the more immediately pressing concern of poverty relief diverting attention from the more immediately pressing concern of climate change.)

But I actually think the situation is even better than that. I think that something like feminist activism funges strongly against climate change, since it’s using the time of political activists who are good at raising awareness in the general public and getting political stuff done.

Something like MIRI funges very weakly against climate change, because it’s getting meta-mathematics geeks to write proofs and maybe a few people to donate money. At this point I don’t think the climate change movement really needs either of those things. It’s so well-funded that MIRI’s million or so would be a tiny drop in the barrel, and although it’s possible that meta-mathematics geeks could, with some retraining, become climate simulation geeks, I don’t really think the lack of sufficiently good climate simulations is what’s holding global action against climate change back.

In other words, this seems a lot to me like motivated reasoning - “MIRI is weird, therefore MIRI is bad, therefore let me find some reason MIRI is bad, even if I would never consistently apply that reasoning to anything else.”

I have made this criticism before, so I want to explain myself a bit.

You’re right about the fact that better climate models won’t in themselves spur global action on climate change.

However, I think it’s important to be aware of the fact that making better climate models is very much a problem that requires math/physics type talent (not really in overlapping areas with what MIRI does, but the kind of thing I’m sure MIRI’s people could work on competently), and that in some ways this is an understaffed problem.  (Don’t particularly want to get into an argument about that claim right now — just stating that it is what I believe, and it is what some people in the field believe.)

The margin at which the critique makes sense is not “who should I give money to?” but “it seems that I am good at math; what should I do with this ability?”  MIRI and its supporters argue that MIRI is not just a fun job to take, but an important one, because it’s about an important understaffed problem.  But even if that’s true (arguable, but that’s a whole distinct argument), making better climate models is also an important understaffed problem.

(I know that sounds weird, because climate change itself is very well known.  But it’s also extremely difficult theoretically, and not the kind of thing whose theoretical side is well-popularized in the way that, say, the theoretical side of fundamental physics is well-popularized.  Young physics students dream of accelerators, not cloud parameterizations.)

Finally, I think the point about “better climate models won’t in themselves spur global action on climate change” applies to some extent to MIRI as well — building the theory in itself does nothing to ensure that people will use it.  But I would argue that improving climate simulations is a very important task even if the world isn’t taking action on them — because even if we just sit back and passively react to climate change rather than trying to stop it, it will be very useful to be able to predict it.  Even if, say, coordination problems prevent everyone from getting their shit together enough to stop a certain area from becoming unlivable, it would be very helpful — for governments and for people — to know with high certainty, and as far in advance as possible, that that area will become unlivable.  

By contrast, I don’t think Friendly AI theory has the same kind of applicability in a world in which it is not used preventatively.  (In fact, I think in Yudkowsky’s view it explicitly wouldn’t be: an Unfriendly seed AI, once built, would quickly grow beyond our comprehension or control, so no theory will be able to give us predictions of the “yeah we ended up in the Bad Timeline but here’s exactly how the badness will unfold, use this to protect your loved ones” type.)

Thanks for the explanation. What you say about trying to predict global warming, even if you can’t stop it, seems right, and makes much more sense than what I thought queenshulamit’s hypothetical person was saying.

But I still disagree with you. How much high-level math talent do you think graduates a year? 20,000 people? 30,000? The NSA gobbles up a few thousand for cryptography, Jane Street gobbles up a few thousand for investment banking, Google gobbles up a few thousand for software engineering, academia gobbles up a few thousand for pure research. In what world is it worth worrying about MIRI getting, like, one person per year for a cause that’s probably better than any of those guys?

The strongest counterargument I could think of is that only a few of those thousands of mathematicians are interested in doing good, and those are the ones passing up jobs with Google to look at things like global warming or AI risk.

But I don’t really think that’s how it works. Except for a couple of very special people on GiveWell, I don’t think people are inspired to do good, and then they look for the most efficient charitable way to pursue that goal. I think people get interested in a specific cause, and then their attachment to that cause inspires them to separate from the packs going to Google and the NSA and accept a lower salary pursuing their dream.

Eliezer’s already said that if he didn’t believe in AI risk, he’d be a physicist. A lot of the people who come to MIRI come to it from Google, and a lot of the people who leave MIRI leave it for Google. Nate’s bio ( http://lesswrong.com/lw/jl3/on_saving_the_world/ ) doesn’t really leave a lot of room for expecting him to become a climatologist. On the other hand, how many climate modellers do you know who seriously considered whether they should work in the field of AI risk instead?

I think that, with the possible exception of you, it’s very likely that there is not a single mathematician in the world who has ever seriously considered, even for the space of a single thought, whether to work for AI risk or climate modelling. There are a lot of mathematicians who have considered whether to work for Google or climate modelling, and a lot who have considered whether to work for Google or MIRI, but just by statistics alone - let alone the different personalities both causes attract - we wouldn’t necessarily expect them to overlap.

Three quick points here — then I’ll have to bow out for the rest of the day because I need to get myself off tumblr and work.  (Usual work void rules apply: yell at me if you see me tumbling)

Point 1: The size/influence of MIRI is just not relevant at the margin I’m considering.  The question I’m asking is “what should one person who’s good at math choose for their career?”, and that one person will be spending 100% of their work time working for MIRI if they choose to work for MIRI.  At this margin it is, yes, worth worrying about whether MIRI is the right choice or not.  If you make a suboptimal choice, the utilitarian analysis doesn’t care whether your particular suboptimal choice is taken rarely or often.

Point 2: I think you are underestimating the transferability of skill between the relevant domains here.  When EY says he “would have been a physicist” he is saying what many thousands of people like me say before getting degrees in physics and then going on to do something else.  “Physics” in the narrow sense is vastly overstaffed, and the skills you learn when you get a physics degree largely overlap with many other, less overstaffed domains.  (I briefly considered going to engineering grad school, and was astonished when the head of my undergrad physics department told me that my lack of an engineering degree would help rather than hurt me in admissions, because a physics degree is treated roughly like a better version of an engineering degree.)  Many, many people who once “wanted to be physicists” now work in engineering, software, data science, etc. (su3su2u1 can probably talk more informatively about this.)

I realize this is kind of ironic given the stance I was taking in the IQ debate, but the longer I’ve been in math-related fields, the more I realize “being good at math” really is a transferable and widely applicable skill.  I learned all of the climate-related stuff I know in grad school because few people teach it to undergrads; many of the professors I work with on climate stuff have physics degrees, in some cases even physics Ph.D.s.  Paul Christiano has a BA in math from MIT and is now a Ph.D. student in computer science and working at MIRI; I’m sure he could learn climate fundamentals in a few years just like I did (and would likely be better at it than I am).

Point 3: I think your response mixes together facts and values in a confusing way.  Sure, many people don’t actually ask themselves “should I do AI risk or climate?”, but maybe they should.  At this point I admit we are getting away from the original question a bit, since we’re no longer talking about what MIRI actually does.  My point is that altruistically motivated people with math skills should think about stuff like climate as an alternative to working for MIRI, and (see point #2) they probably could do either.

That they aren’t aware of both alternatives is a fact about social networks and about the way neither of these projects is especially well publicized.  (To the extent that they’re publicized, it isn’t in the same social networks; among other things I think there’s a literal west coast / east coast split here.)  It doesn’t really bear on the ethical point about which of these is a better thing to do.

(It feels a bit like someone saying that MIRI is worthless because, after all, “very few skilled people ever think about working for MIRI.”  Surely true, but that’s merely a fact about MIRI’s lack of publicity.  The question is, should people think about working there?)

It seems like a general rule that I distrust bloggers (or more generally “writers” or even “people,” but it comes up most often with bloggers, not surprisingly) who present themselves as having some sort of exceptional level of experience, knowledge, or the like without being transparent about exactly how they got this stuff and how it might tie in to the experience or knowledge possessed by the reader.

It’s not that I don’t trust people without credentials.  If you don’t have a degree in physics but you talk about physics, it will quickly become clear whether you know your shit or not.  I can see whether you get physics things right when I happen to know about them, and make some pretty reliable inferences on that basis about the other things you say about physics.

The problem is when someone presents themselves as sort of a general expert, one who just knows things, or tends to.  This is almost always a somewhat deceptive front, and I find myself unable to know whether to trust anything the person says, even about fairly mundane things.  Someone writing about, say, their experience having a particular job will be credible, but someone who seems to conceive of themselves as writing about their experience “being more perceptive and awesome than you” plants seeds of doubt in everything they say, to the point that even good points (even points that would seem obvious if made by someone else) become hard to believe when said by these people.  If someone asks me to treat them like a wise guru, and I (quite reasonably, since this is just some person on the internet) say “no,” it then becomes much harder for them to convince me of anything than it would have been if they had never made the guru request to begin with.

Examples of people who are pretty much unreadable to me because of this (among other things): Moldbug, The Last Psychiatrist.  Examples of people who have a bit of this tendency, but temper it by citing sources and being explicit about their reasoning: Cosma Shalizi, Scott Alexander.

Anonymous asked: Can you please liveblog HPMOR? I know your friend su3su2u1 is doing it, too, but you have your own unique style.

Sorry, but no, for a few reasons.

For one, I’m pretty busy these days, and I don’t need another regular/ongoing project (as opposed to spontaneous tumblr posts I write whenever I feel like it).

Second, I think su3su2u1’s liveblog is good and does what I want out of an HPMOR liveblog and would try to do with one if I wrote it, so it’d feel kind of superfluous to me.

Third, I just really dislike HPMOR.  Even when I was reading it casually and somewhat inattentively, without an eye toward writing something like a liveblog which requires full comprehension, I still found it both frustrating and boring.  It was a slog.  So a liveblog would feel like a chore, and I don’t really want to do that.  (This would probably also make it kind of bad as a liveblog, as it would be full of “remind me why I am wasting my time reading this” type stuff.)

mttheww asked: I'm having fun reading yudkowsky although he comes across a bit pompous sometimes

also he seems given to stating stuff I don’t think he can prove as if it’s the most obvious thing in the world (eg implying that human beings are the only animals who are aware of their own mental processes)

He is a fun writer!  The bad parts of his writing are when the stuff he can’t prove is most of the content, or at least most of its foundation.

(Or HPMOR which is its own brand of terrible.  Though I guess HPMOR emphasizes something that is also a problem in his other writing, which is a fixation on becoming powerful and manipulating people, and this strange idea that fairly obvious self-improvement ideas, referred to as “rationality techniques,” can let you do this.)

mttheww asked: what would you say is the best way for someone who doesn't give a fuck about, say, cryogenics or singularity--or transhumanism in general, really--to read lw and get something useful out of it? or is stuff like that not really that big a part of lw anyway? (I've never read lw before and pretty much all I know about it is what I've gleaned from tumblr)

kadathinthecoldwaste:

nostalgebraist:

Stuff like that is not a huge part of it.

The core material of Less Wrong is “The Sequences,” a giant set of blog posts by Eliezer Yudkowsky that range from fundamental (arguably pretty trivial) philosophical arguments to polemical expositions of quantum mechanics to “fun theory” (attempts to speculate about how to make a transhuman-ish future enjoyable and not boring) to weird aphoristic advice about self-improvement and “becoming a better rationalist” that often ends up sounding either like Yoda or a wise mentor character in a shonen anime.

It’s a massively mixed bag and which parts of it, if any, will be useful or interesting to you depend on your preferences.

A few starting points: “37 ways words can be wrong” (and links therein) is kind of a hub for the fundamental philosophy stuff, which I generally find pretty agreeable and sensible, though you may or may not find it useful.  And the Quantum Physics Sequence is a cool, unconventional way of explaining quantum physics, written from an adamant “many-worlds interpretation” perspective.  It’s a controversial sequence because actual physicists are divided on whether the many-worlds interpretation is correct or not, but if you’re not a physicist I think it can give you a nice alternative perspective to the stuff you’ve probably heard about particles and waves and uncertainty.  (Just don’t take it too seriously.)

Welp, only took me to entry #3 of 37 (and, by association, the Parable of Hemlock) to find something clearly wrong. Socrates failing to die from drinking hemlock is not necessarily a refutation of the statement “Socrates is mortal.” Mortality does not require that one be subject to death from all things, merely that one be subject to death at all. The empirical evidence could as easily be a refutation of the unstated premise (well, stated briefly in the parable, but not addressed in any meaningful way) that all mortal beings that consume hemlock die. Doesn’t really invalidate the thesis, but does mean that it’s pretty damn poorly argued.

After reading the rest of it, I’d say there are a few members of the list that displayed a genuinely novel and interesting way of thinking, a whole lot that were obvious to me, but might not be obvious to everyone else (especially younger readers), several that were all well and good if one wants to stick one’s fingers in one’s ears and singsong “la la la I can’t hear you” whenever the subject of the political and ethical ramifications of speech acts comes up, and a couple that contained basic logical errors. It’s not the sort of thing that’s likely to make me a convert, but it definitely has some merit.

Yeah, that’s pretty much what I think of it.  Much of LW is well described by the old quip “there are true and new things here, but what’s true is not new, and what’s new is not true.”

I think a lot of the “obvious” material is really the most valuable, since as you say it’s not going to be obvious to everyone.  I remember really enjoying Overcoming Bias (the blog these posts were originally written on) in college ca. 2008, simply because I felt like in the college environment I kept talking to people who were clearly knowledgeable about many things and yet didn’t align with me on this basic, “obvious” philosophy stuff.  My anecdotal experience is that pages like these really have changed some people’s lives.

Part of me says it would be nice to have a collection of pages that sum up this basic philosophical stuff I agree with, in plain language (like these ones), but without the errors, Yudkowskian bombast, aversion to ethical/political nuances, and ties to the whole rest of the bizarre LW enterprise.  But then, maybe no one would read that site; LW gets a lot of its traffic precisely from its bizarre facets.  (A lot of people end up reading these essays because of a Harry Potter fanfic, for instance — a Harry Potter fanfic that strikes me as being uniquely, staggeringly awful, and reflective of the very worst sides of the LW author’s personality, which also affect some of the blog posts for the worse.  Which gives me no useful cues whatsoever about how to get people to read my hypothetical “LW without the bad stuff.”)

Quick note on the su3su2u1 / slatestarscratchpad Bayesianism debate of the last few days:

Bayesian philosophy of science has been one of my pet interests for a while, and I’ve had many (fun, non-acrimonious) arguments with several Bayesians on here.  (Note: none of this makes me any kind of expert – I really know pretty little about all this though it all makes me curious.)

And the more I read and talk about this stuff, the more it seems that the biggest gulf between “philosophical Bayesians” and their opponents is over the fact that “philosophical Bayes” requires you to have a prior.  The fact is, Bayesianism is a recipe for thinking when you have a prior.  If you don’t have a prior, or only have something that’s not quite a prior (like “these are my candidate theories, and I have no idea what probability to assign to all the ones I haven’t thought of yet”), Bayesianism tells you “come up with a prior so you can be more like me, the Ideal Account of Reasoning.”

It can be very easy to forget that this is a problem, for several reasons:

1. There really are cases (like the medical statistics problems) where you really do have a prior and knowing how to use Bayes’ theorem really does help you.  But that doesn’t mean you should build your whole philosophy of science on assuming that every question is like a medical statistics problem, even the ones that aren’t.

2.  Because the prior is this potentially huge set of “free parameters,” this thing that can be anything you wish (as long as it’s a probability measure), it’s easy to reconstruct almost any act of sensible reasoning as an act of “Bayesian reasoning” with some prior.  John von Neumann famously said “with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.”  If you are allowed to make up a story about what someone’s prior was, then you can almost always recast their thinking as a “Bayesian update.”  Deborah Mayo has a vivid way of describing this procedure:

Bayesian reconstructions of episodes in the history of science, Mayo says, are on a level with claiming that Leonardo da Vinci painted by numbers since, after all, there's some paint-by-numbers kit which will match any painting you please. (Source)

Or, to put it another way, in some weird sense “all painting is a special case of painting by numbers” – you can take any act of painting and say, “ah, you say you aren’t painting by numbers, but I can reconstruct this as an instance of painting-by-numbers in which you had the paint-by-numbers kit that told you to produce exactly the painting you produced.”

The problem is of course that this gives you no good advice about how to create new paintings.  "All painting is a special case of paint-by-numbers, so to create a good painting I should find a good paint-by-numbers kit and use it" is actively bad advice for fledgling painters.

Or, to translate back to Bayes: “any act of good reasoning can be reconstructed as a Bayesian update with some prior” does not imply “to reason well, find a good prior and then update.”

Ultimately, the core issue for philosophical Bayesianism (IMO) is whether forcing yourself to have a prior is a good idea.  In situations where you already have one, like the medical statistics problems, this isn’t an issue.  If your state of knowledge looks nothing like a probability measure over hypotheses, and you say “Bayesian reasoning is ideal so I should be more like it by turning my state of knowledge into a probability measure over hypotheses,” this could have good or bad effects.  If it generally has bad effects, then Bayes isn’t a very good ideal.
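For concreteness, here’s a minimal sketch of the kind of medical-statistics problem where the prior is simply handed to you and Bayes’ theorem applies cleanly.  (All the numbers are invented for illustration.)

```python
# A textbook diagnostic-test problem: the prior (base rate) is given,
# so Bayes' theorem applies directly.
#   P(disease | positive) = P(positive | disease) * P(disease) / P(positive)

prior = 0.01           # base rate: 1% of the population has the disease
sensitivity = 0.95     # P(test positive | disease)
false_pos_rate = 0.05  # P(test positive | no disease)

# Law of total probability: overall chance of a positive test
p_positive = sensitivity * prior + false_pos_rate * (1 - prior)

# Posterior: even a good test gives only a modest probability at a low base rate
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # 0.161
```

Outside cases like this, where no base rate is handed to you, the prior itself has to be invented — which is exactly the issue at stake.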

This means that the “philosophy vs. engineering perspectives” distinction is somewhat misleading.  In the ideal fantasy land where you always have a prior, Bayesianism is not only ideal, it is sort of trivial.  It’s just what you do.  In the real world, trying to approximate the Bayesian ideal means forcing yourself to have a prior when you don’t start out with one.  If this is generally a bad idea, then Bayesianism isn’t an ideal, not even a philosophical one.
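As a toy illustration of how much the manufactured prior can matter — a conjugate coin-flip sketch, with numbers chosen arbitrarily — the same evidence pushed through two different priors yields noticeably different conclusions:

```python
# Same data, two priors, two conclusions: a Beta prior over a coin's bias
# updates to a Beta posterior after observing some heads and tails.

def posterior_mean(alpha, beta, heads, tails):
    """Posterior mean of the coin's bias under a Beta(alpha, beta) prior."""
    return (alpha + heads) / (alpha + beta + heads + tails)

heads, tails = 7, 3  # observed flips

# An agnostic prior vs. a strong "the coin is fair" prior
agnostic = posterior_mean(1, 1, heads, tails)     # Beta(1, 1): flat
confident = posterior_mean(50, 50, heads, tails)  # Beta(50, 50): near-certain fairness

print(round(agnostic, 3))   # 0.667
print(round(confident, 3))  # 0.518
```

With only ten flips, the invented prior dominates the answer; the formalism is identical in both cases, so nothing inside Bayes tells you which prior you should have forced yourself to have.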

Even idealized philosophical realms have to make some contact with reality, or else everything quickly becomes absurd: it’s no use to, say, postulate an “idealized” world in which the only thing anyone cares about is having sex (defensible as an idealization since having sex really is very important to many people), and then derive real-world consequences like “we should direct all human behavior toward having as much sex as possible, ignoring all other activities, until everyone dies of starvation because the food has run out.”

So, if you make an idealization, and it works in ideal fantasy land, but the “engineering” consequences of trying to reach that ideal suck, then maybe it wasn’t such a good idealization.

(ETA: technical note – I said “as long as it’s a probability measure” above but sometimes it doesn’t even need to be that, as in the case of “improper priors.”  This isn’t really relevant to this post at all, I just wanted to mention it for technical accuracy)

aprilwitching-deactivated201808 asked: hello rob nostalgebraist im reading ur friends liveblog of hpmor and let me tell u that was a mistake i am now actively squicked. id like 2 state 4 the record that 2 of my very least favorite human traits are the thing where ppl believe theyre better and more important than other ppl and the thing where ppl want 2 have power over other ppl. i also do not like condescending didacticism or rape threats. this is terrible and i regret everything, rob nostalgebraist. how is this fanfic appealing. ugh

I’m sorry D:

It has always been a huge point of confusion for me why people find HPMOR appealing.  When I tried to read it I couldn’t get past a fairly early chapter and that was long after I had started to actively dislike it and was just reading for interest in understanding Yudkowsky/LW (and my interest there is pretty strong, cf. my entire blog).

So, like, I can’t help you there.  It is a total mystery to me.  HPMOR is really that bad and people like it for some reason.  Originally I thought the appeal was something like “exploring Harry Potter magic using the scientific method,” which sounds cool and is how people always pitch HPMOR, but as su3 keeps saying, it actually contains almost none of that.

Also a total of I think four (!) people have recommended HPMOR to me over the years with a pitch that included something like “Rob, I think you in particular would really enjoy this,” which in retrospect is just … I don’t even know.

(Note: I know some HPMOR fans follow me — I don’t really want to get into a discussion about its quality right now, sorry.)

werewolfbarmitzvacant:

essaymarking:

can somebody give me a brief explanation of “harry potter and the methods of rationality” please

paging rob nostalgebraist

i pass the buck to su3su2u1 (specifically their liveblog)

(via archiveev-blog)