
slatestarscratchpad:

[trigger warning: Pascalian reasoning]

I understand why Pascal’s Wager is the sort of thing you would want to avoid, but I don’t understand how you can principled-ly avoid it.

Suppose there is an x-risk with a one in a million chance of destroying the world, that will cost $1 million to fix. This sounds like a good example of Pascal’s Wager. People have accused MIRI’s arguments of being Pascal-esque, and they use a budget of $1 million to fight an x-risk that seems to have more than a one in a million chance.

So everyone says “Nah, that’s Pascal’s Wager” and doesn’t fight the x-risk.

Now suppose there are one million independent x-risks like that. There's about a 63% chance of at least one of them destroying the world. And the total cost to fix them all is $1 trillion.

(for comparison: $1 trillion is a little more than the US defense budget, and somewhere in the wide range of estimates of the annual cost of stopping global warming)

It seems like a no-brainer that it would be a good idea for the governments of the world to get together to cough up $1 trillion to prevent a 63% chance of the world being destroyed.
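A quick sanity check on the "at least one" figure: if each of a million risks independently has a one-in-a-million chance of firing, the chance that none of them fires is (1 − 10⁻⁶)^1,000,000 ≈ 1/e, so the chance of at least one disaster is about 63%:

```python
# Probability that at least one of n independent risks fires,
# each with probability p. With p = 1e-6 and n = 1e6 this is
# 1 - (1 - p)**n, which is very close to 1 - 1/e.
p = 1e-6
n = 10**6
prob_at_least_one = 1 - (1 - p) ** n
print(prob_at_least_one)  # ≈ 0.6321, i.e. about 1 - 1/e
```

Note that this depends on the risks being independent; correlated risks would change the number, though not the qualitative point.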

I don’t know much probability theory, but it seems like a problem to say that it is a bad idea to make choice C once, but a good idea to make choice C one million times.

I can see a place for a game theoretic rejection of Pascal’s Wager in cases where taking it would incentivize people to make up fake risks in order to get money from you (“Pascal’s mugging”), but this seems different from a fully general argument against taking very small chances of very large losses seriously. Like, if you had an accurate asteroid course prediction algorithm, and the algorithm said there was a one in a million chance of an asteroid hitting, that wouldn’t cause the same game theoretic difficulties as if someone tried to make up a clever argument.

Does anyone have a good solution for this?

I think you are right about the asteroid, but in the absence of such trustworthy prediction devices, I think a lot of versions of Pascal’s Wager have trouble dealing with an objection that also works against the original one: “but what if God wants me to not believe in Him?”

It’s hard to reason about low-probability events while keeping both sides of the balance sheet in mind.  If we actually know that the probability of something is “one in one million” – Knightian “risk” as opposed to “uncertainty” – that’s one thing.  But when talking about things that are very far from being currently understood, we face Knightian uncertainty.

It’s common in these cases to rephrase the uncertainty as very large risk, by saying things like “well, suppose there’s a one in a million chance that MIRI is right about everything, and then … ”  But in such a case, we understand things so poorly that we have no idea how much weight to give this relative to the possibility “MIRI is right about everything except X, where X means that a successful MIRI would design an almost-Friendly AI that turns out to be Unfriendly and ruin everything.”

In that area of possibility-space, there are many possibilities, some of them good and some of them bad, and it’s hard to have any idea how to weigh them against each other.  Just as, in Pascal’s original Wager, if someone’s already skeptical enough of any God that they need this kind of pragmatic argument to convince them, it’s hard to say why they should give weight to any particular God when there are so many and you can’t have them all at once.

Anonymous asked: What do you think of the Definability of Truth in Probabilistic Logic MIRI paper? I'm not near that subfield of math, but it seems to me that they are just re-inventing fuzzy logic.

I’m not near it either.  As far as I can tell, the paper takes the concept of “probabilistic logic” (of which I think fuzzy logic is one variety?) as a given, and concerns itself with the idea that a probabilistic logic can be given a “truth predicate” that avoids Tarski’s undefinability theorem (which applies to non-probabilistic logics).

I have no idea if this is a new concept, and it may not be, but I don’t think it’s something you'd have to do when coming up with a probabilistic logic.

On SL4, arbitrarily high levels of rationality are permitted. If you’ve ever been shot down in a conversation for being too rational, you know what I mean.

I can laugh maniacally all I want, so long as I still get the answers right on questions of simple fact. 

But y'know, this shiny new model of Friendly AI *does not require* that I be Belldandy, or even that I *approximate* Belldandy.

Effective Altruism

su3su2u1:

Saw some tumblr people talking about this movement.  My biggest problem with effective altruism is that most everyone I know who identifies themselves as an effective altruist donates money to MIRI.  (It’s possible this is more a comment on the people I know than on the effective altruism movement, I guess.) Based on their output over the last decade, MIRI is primarily a fanfic and blog-post producing organization.  That seems like spending money on personal entertainment.

I’m a big believer in donating to effective charities, but I think a lot of the focus on existential risk opens the door for variants of Pascal’s wager. 

There is a 10^(-80) probability I can open a portal to dimension-X-Earth in my basement, if you give me $80,000 a year for the next decade.  We can colonize that extra-dimensional Earth, which will safeguard humanity in the event Earth gets destroyed, so extrapolating into the future that’s at least pi*3^^^^^^^^^^3 utilons, so I’m the most effective charity you can donate to ever.

I think you are getting a pretty biased sample here, insofar as you seem to only know of EA through its intersection with Less Wrong?

Givewell, which seems like the most central opinion-setter for the EA movement, does not focus on existential risk (its top charities are not existential risk charities) and has actively stated that it doesn’t recommend giving to MIRI, explaining why at length (including an explicit disavowal of the Pascal’s-mugging-type argument).

So any effective altruist who gives to MIRI must justify to themselves why they’re going against what is considered the “expert view” in their community (assuming that Givewell’s views are seen that way, which I think is roughly true).  And anyone first getting into effective altruism will immediately encounter Givewell and be directed toward health charities, not existential risk charities.

(via su3su2u1-deactivated20160226)

flowercuco replied to your post “i, uh”

I DONT KNOW WHAT I WANT FROM LIFE, I DONT KNOW WHAT ANSWER IS BETTER,

[furiously repeats “i’m not owned” again and again]


i,

uh

drmaciver:

The motte of “motte and bailey doctrine” is that some people engage in a form of dishonest bait and switch tactics.

The bailey is that if there is a diverse range of opinions within a group, then all opinions associated with that group are invalid.

I wake up to find that someone has recommended Floornight on /r/rational as an instance of “rational fiction”

I bet the people writing the sitcom that is my life are really proud of themselves right now