queenshulamit-deactivated201602 asked: What is your response to people who say MIRI diverts people's attention from the more immediately pressing concern of climate change?
I wonder if these same people ever worry that, let’s say, poverty relief or feminist activism diverts people’s attention from the more immediately pressing concern of climate change. If so, they get +2 consistency points - but then I wouldn’t expect them to talk about MIRI, since in terms of total number of dollars / hours of effort put in it’s about 0.01% of the other two.
(I wonder if saying “MIRI diverts people’s attention from the more immediately pressing concern of climate change” diverts people’s attention from the more immediately pressing concern of poverty relief diverting attention from the more immediately pressing concern of climate change.)
But I actually think the situation is even better than that. I think that something like feminist activism funges strongly against climate change, since it’s using the time of political activists who are good at raising awareness in the general public and getting political stuff done.
Something like MIRI funges very weakly against climate change, because it’s getting meta-mathematics geeks to write proofs and maybe a few people to donate money. At this point I don’t think the climate change movement really needs either of those things. It’s so well-funded that MIRI’s million or so would be a tiny drop in the bucket, and although it’s possible that meta-mathematics geeks could, with some retraining, become climate simulation geeks, I don’t really think the lack of sufficiently good climate simulations is what’s holding global action against climate change back.
In other words, this seems a lot to me like motivated reasoning - “MIRI is weird, therefore MIRI is bad, therefore let me find some reason MIRI is bad, even if I would never consistently apply that reasoning to anything else.”
I have made this criticism before, so I want to explain myself a bit.
You’re right that better climate models won’t in themselves spur global action on climate change.
However, I think it’s important to be aware that making better climate models is very much a problem requiring math/physics-type talent (not really in overlapping areas with what MIRI does, but the kind of thing I’m sure MIRI’s people could work on competently), and that in some ways this is an understaffed problem. (Don’t particularly want to get into an argument about that claim right now — just stating that it is what I believe, and it is what some people in the field believe)
The margin at which the critique makes sense is not “who should I give money to?” but “it seems that I am good at math; what should I do with this ability?” MIRI and its supporters argue that MIRI is not just a fun job to take, but an important one, because it’s about an important understaffed problem. But even if that’s true (arguable, but that’s a whole distinct argument), making better climate models is also an important understaffed problem.
(I know that sounds weird, because climate change itself is very well known. But it’s also extremely difficult theoretically, and not the kind of thing whose theoretical side is well-popularized in the way that, say, the theoretical side of fundamental physics is well-popularized. Young physics students dream of accelerators, not cloud parameterizations.)
Finally, I think the point about “better climate models won’t in themselves spur global action on climate change” applies to some extent to MIRI as well — building the theory in itself does nothing to ensure that people will use it. But I would argue that improving climate simulations is a very important task even if the world isn’t taking action on them — because even if we just sit back and passively react to climate change rather than trying to stop it, it will be very useful to be able to predict it. Even if, say, coordination problems prevent everyone from getting their shit together enough to stop a certain area from becoming unlivable, it would be very helpful — for governments and for people — to know with high certainty, and as far in advance as possible, that that area will become unlivable.
By contrast, I don’t think Friendly AI theory has the same kind of applicability in a world in which it is not used preventatively. (In fact, I think in Yudkowsky’s view it explicitly wouldn’t be: an Unfriendly seed AI, once built, would quickly grow beyond our comprehension or control, so no theory will be able to give us predictions of the “yeah we ended up in the Bad Timeline but here’s exactly how the badness will unfold, use this to protect your loved ones” type.)
Thanks for the explanation. What you say about trying to predict global warming, even if you can’t stop it, seems right, and makes much more sense than what I thought queenshulamit’s hypothetical person was saying.
But I still disagree with you. How much high-level math talent do you think graduates a year? 20,000 people? 30,000? The NSA gobbles up a few thousand for cryptography, Jane Street gobbles up a few thousand for quantitative trading, Google gobbles up a few thousand for software engineering, academia gobbles up a few thousand for pure research. In what world is it worth worrying about MIRI getting, like, one person per year for a cause that’s probably better than any of those guys?
The strongest counterargument I could think of is that only a few of those thousands of mathematicians are interested in doing good, and those are the ones passing up jobs with Google to look at things like global warming or AI risk.
But I don’t really think that’s how it works. Except for a couple of very special people on GiveWell, I don’t think people are inspired to do good, and then look for the most efficient charitable way to pursue that goal. I think that people get interested in a specific cause, and then their attachment to that cause inspires them to separate from the packs going to Google and the NSA and accept a lower salary pursuing their dream.
Eliezer’s already said that if he didn’t believe in AI risk, he’d be a physicist. A lot of the people who come to MIRI come to it from Google, and a lot of the people who leave MIRI leave it for Google. Nate’s bio ( http://lesswrong.com/lw/jl3/on_saving_the_world/ ) doesn’t really leave a lot of room for expecting him to become a climatologist. On the other hand, how many climate modellers do you know who seriously considered whether they should work in the field of AI risk instead?
I think that, with the possible exception of you, it’s very likely that there is not a single mathematician in the world who has ever seriously considered, even for the space of a single thought, whether to work for AI risk or climate modelling. There are a lot of mathematicians who have considered whether to work for Google or climate modelling, and a lot who have considered whether to work for Google or MIRI, but just by statistics alone - let alone the different personalities both causes attract - we wouldn’t necessarily expect them to overlap.
Three quick points here — then I’ll have to bow out for the rest of the day because I need to get myself off tumblr and work. (Usual work void rules apply: yell at me if you see me tumbling)
Point 1: The size/influence of MIRI is just not relevant at the margin I’m considering. The question I’m asking is “what should one person who’s good at math choose for their career?”, and that one person will be spending 100% of their work time working for MIRI if they choose to work for MIRI. At this margin it is, yes, worth worrying about whether MIRI is the right choice or not. If you make a suboptimal choice, the utilitarian analysis doesn’t care whether your particular suboptimal choice is taken rarely or often.
Point 2: I think you are underestimating the transferability of skill between the relevant domains here. When EY says he “would have been a physicist” he is saying what many thousands of people like me say before getting degrees in physics and then going on to do something else. “Physics” in the narrow sense is vastly overstaffed, and the skills you learn when you get a physics degree largely overlap with many other, less overstaffed domains. (I briefly considered going to engineering grad school, and was astonished when the head of my undergrad physics department told me that my lack of an engineering degree would help rather than hurt me in admissions, because a physics degree is treated roughly like a better version of an engineering degree.) Many, many people who once “wanted to be physicists” now work in engineering, software, data science, etc. (su3su2u1 can probably talk more informatively about this.)
I realize this is kind of ironic given the stance I was taking in the IQ debate, but the longer I’ve been in math-related fields, the more I realize “being good at math” really is a transferable and widely applicable skill. I learned all of the climate-related stuff I know in grad school, because few people teach it to undergrads; many of the professors I work with on climate stuff have physics degrees, in some cases even physics Ph.Ds. Paul Christiano has a BA in math from MIT and is now a Ph.D. student in computer science and working at MIRI; I’m sure he could learn climate fundamentals in a few years just like I did (and would likely be better at it than I am).
Point 3: I think your response mixes together facts and values in a confusing way. Sure, many people don’t actually ask themselves “should I do AI risk or climate?”, but maybe they should. At this point I admit we are getting away from the original question a bit, since we’re no longer talking about what MIRI actually does. My point is that altruistically motivated people with math skills should think about stuff like climate as an alternative to working for MIRI, and (see point #2) they probably could do either.
That they aren’t aware of both alternatives is a fact about social networks and about the way neither of these projects is especially well publicized. (To the extent that they’re publicized, it isn’t in the same social networks; among other things I think there’s a literal west coast / east coast split here.) It doesn’t really bear on the ethical point about which of these is a better thing to do.
(It feels a bit like someone saying that MIRI is worthless because, after all, “very few skilled people ever think about working for MIRI.” Surely true, but that’s merely a fact about MIRI’s lack of publicity. The question is, should people think about working there?)
