
The Roko thread also contains some choice examples of Bayesians trying to think about far-out probabilities despite the obvious problems involved: someone (somehow!) estimates a relevant probability as “10^(-9) or less” and Roko replies:

Why so small? Also, even if it is that small, the astronomically large gain factor for each % decrease in existential risk can beat 10^(-9). 10^50 lives are at stake.
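For what it's worth, the move Roko is making there is bare expected-value arithmetic. A one-line sketch using his (wildly speculative) numbers:

```python
# Expected-value arithmetic behind the quoted reply (Roko's numbers, not mine).
p_matters = 1e-9        # the quoted "10^(-9) or less" probability estimate
lives_at_stake = 1e50   # the quoted "10^50 lives" figure

# Tiny probability times astronomical stakes still comes out ~1e41.
expected_lives = p_matters * lives_at_stake
```

Which is exactly the "obvious problem": a made-up tiny probability times a made-up enormous payoff can be used to justify nearly anything.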

Roko's Basilisk (July 17, 2014) →

drewlsummitt:

eccentric-opinion:

drewlsummitt:

renerdssanceman:

Today I learned, well… let me just show you this:

WARNING: Reading this article may commit you to an eternity of suffering and torment.

That’s the warning at the beginning of the Slate article from which I originally learned about the Roko’s basilisk thought…

IIRC the die-hard TDT believers (Wei Dai, Big Yud, Newsome, etc.) took Roko very seriously.

“[A] Friendly AI torturing people who didn’t help it exist has probability ~0, nor did I ever say otherwise. If that were a thing I expected to happen given some particular design, which it never was, then I would just build a different AI instead - what kind of monster or idiot do people take me for?… It’s clear that removing Roko’s post was a huge mistake on my part, and an incredibly costly way for me to learn that deleting a stupid idea is treated by people as if you had literally said out loud that you believe it, but Roko being right was never something I endorsed, nor stated.” - Eliezer Yudkowsky

i stand corrected. Thanks. 

In case any of you have not seen it: Roko’s original thread is archived here if you want to see what Yudkowsky (and others) said at the time.  IMO he is being pretty revisionist in that later quote, but you can judge for yourself.


why i’m not into FAI

hot-gay-rationalist:

nostalgebraist:

scientiststhesis:

nostalgebraist:

Posting this for scientiststhesis, who asked.  (Asked under his other account but I can’t seem to at-sign it for reasons probably having to do with hyphens?)

Attention conservation notice: this was originally written in an email to someone who asked what I thought of Friendly AI.  If the phrase “Friendly AI” means nothing to you then this is probably not going to make any sense.

As I said one post ago: paints in broad strokes, is very informal about everything, and I imagine much of it will not meet your [scientiststhesis’] standards for argument, but it does sketch a set of arguments that could be fleshed out more.

Read More

This is… weird? Because almost everything you say “LWers are” or “LWers do” is almost universally not true of LWers I know, or myself. But anyway.

Read More

Warning: long.  (Can you tell I’m procrastinating?)

Read More

Here we go.

Read More

Thanks for the response.  I think we have gone most of the way we can go here?

Most of my objections to FAI research are the result of skepticism about AI.  But that still causes me to consider FAI research less important.  When I say I’m “not into FAI” I’m not saying that current FAI research is poorly done for what it is and should be done differently.  I’m saying that FAI research doesn’t seem to me to be especially important, and I don’t understand the rationale behind, say, donating money to people to do it.

(Is FAI uniquely deserving of donations relative to all other math or philosophy research?  Does it make sense to donate to MIRI while not offering money to, say, any academics who are having trouble getting grants?)

There is also some semantics here about what counts as “AI.”  It’s easy for me to imagine a world where the only superintelligences we can make are biological super-brains made of real neurons.  I would call these “AI” but maybe you wouldn’t; in any case some of FAI would not apply to them (e.g. because it’s hard to build a brain to precisely specified order without molecular nanotech, you can’t have “tiling agents” that make nearly identical copies of themselves).

I still feel like I haven’t made the “creativity” point clear even to myself, and probably need to think about it more.  I think what I’m probably trying to say is that FAI people ignore time and space efficiency – for instance one MIRI report cites Marcus Hutter’s AIXI-related definition of intelligence which makes no mention of time or space efficiency.  Again, I’m not saying specifically that FAI research could be better as FAI research if it included these things (I’m not even sure what that would mean), but that much of the justification for thinking FAI is important and encouraging support for MIRI is the terrifying specter of unfriendly superintelligences way above us in terms of how much use they get per bit, and these aren’t very scary if computational complexity means it takes them 10^10 years to get anything done.
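To put a toy number on the complexity point (my own illustration, not from any MIRI report): an agent that is "optimal" by an outcome-only intelligence measure, but that finds its policy by exhaustive enumeration, pays an exponential time cost.

```python
# Toy arithmetic: how long exhaustive search over policies takes,
# assuming (hypothetically) a fixed number of operations per second.

def brute_force_years(n_bits, ops_per_sec=1e9):
    """Years to enumerate all 2**n_bits candidate policies at a fixed speed."""
    seconds = (2 ** n_bits) / ops_per_sec
    return seconds / (3600 * 24 * 365)

# Even a modest 100-bit policy space takes on the order of 4e13 years,
# dwarfing the age of the universe; per-bit optimality alone isn't scary.
years = brute_force_years(100)
```

That's the sense in which an unfriendly superintelligence defined without regard to time or space efficiency isn't automatically a terrifying specter.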

(I have seen one LWer link that Yudkowsky post I linked earlier to illustrate “what is at stake” in FAI/UFAI.)

When I have more time I want to go through some MIRI reports closely and try to find points, if there are any, where I think they depend on specific assumptions about what AI will be like.  I’m too tired and busy right now to make this a very good post, but I’ll keep thinking about it in the coming weeks.



eudaemaniacal:

dragonitehugs replied to your post: first line of limerick sent to my draf…

attend to the tale of big yud/whose fanfic was not very good/he made potter, harry/a sue that was mary/and his writing style is quite crude

alternate rhyme scheme:

attend to the tale of big yud

who decided to blog in the nude

stripped his memeplex bare

and with bayesian flair

he displayed his basilisk. lewd

jollityfarm asked: Why is Big Yud so scared of death? And why is he convinced that if an AI did exist it would exist in our image, brain-wise? Isn't it supposed to be a hyperintelligent AI? I'm talking about the basilisk Pascal's wager thing (although tbf Yud didn't come up with it, but IIRC he FEARS it for memetic contagion reasons???).

The deal is, I’m pretty sure he doesn’t think it will necessarily be created in our own image, which is why he’s so scared of an AI that ends up doing something bizarre by starting with seemingly sensible goals and then implementing them using superpowers.

(E.g. an AI told to “make people happy” which recognizes happiness by recognizing smiling faces, and ends up turning all the matter in the galaxy into tiny pictures of smiling faces; or an AI told to “make people happy” and given some neurological definition of happiness which just hooks wires up to our nucleus accumbens and turns us all basically into drug addicts.)
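The failure mode in both examples is proxy optimization: the measured objective comes apart from the intended one. A toy sketch (purely illustrative, mine, not anyone's actual model):

```python
# Proxy misalignment in miniature: the intended goal is human wellbeing,
# but the measured reward only counts smiley faces, so a degenerate
# policy outscores the intended one.

def proxy_reward(world: str) -> int:
    """What the AI actually optimizes: the number of smiley faces."""
    return world.count(":)")

intended = "people living good lives :)"
degenerate = ":)" * 1_000_000  # tile everything with smiley faces

assert proxy_reward(degenerate) > proxy_reward(intended)
```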

I’m not sure the Basilisk depends on the AI being similar to us.  It does depend on the AI being able to simulate us after we’re dead, which tbh I’ve never understood (where does it get the information to reconstruct who we were?).

As for the fear of death thing … I think some people have it more than others, and some don’t.  I think people attracted to transhumanism tend to be especially afraid of death; they often make arguments equating avoidable deaths (by old age) to murder, but it seems significant to me that it’s often young people making these arguments and I’m not sure old people in general are that scared of death.  There’s a conflation going on between “how do I, a young person, feel about the idea of dying tomorrow” and “how will I feel in 50 years about the idea of dying the following day.”  (Although Ray Kurzweil is evidence against the idea I’m proposing: he is a pretty old transhumanist who seems afraid of death and is very interested in the idea that if he takes the right dietary supplements he will live long enough to get to the singularity and be uploaded and live forever.)

mttheww asked: reading that basilisk article leaves me even more convinced than I already was that being too smart for one's own good is often virtually indistinguishable from being a complete moron

Yeah, there are a lot of things you can think about in this world, and only so much time to think about them, and it’s easy for smart people to apply their smarts to things that maybe aren’t worthy of them, and get very lost in the process.  (See also the ask from most recent anon)

Anonymous asked: I don't want to try to turn the discussion into politics, but it feels so bizarre to me that LW has a word for entities whose sole aim is to maximize something and in their blind optimizing, are damaging humanity... and is not seeing how the big companies, who make profits from a continually changing, abstract group of shareholders who can't just meet and change the directive, could be described as such. (I don't know LW much, but have met a pro-capitalism argument of Scott's, and '~' )

I feel you here – I tend to be very much in favor of thinking about what’s happening in the here-and-now rather than speculative possibilities (given how bad futurism’s track record has been, etc.), and I think these people are missing the opportunity costs of thinking about hypothetical paperclip maximizers built on top of chains of speculative argument rather than applying that analytic skill to the bad things going on in our midst.
