[trigger warning: Pascalian reasoning]
I understand why Pascal’s Wager is the sort of thing you would want to avoid, but I don’t understand how you can avoid it in a principled way.
Suppose there is an x-risk with a one in a million chance of destroying the world, that will cost $1 million to fix. This sounds like a good example of Pascal’s Wager. People have accused MIRI’s arguments of being Pascal-esque, and MIRI uses a budget of about $1 million to fight an x-risk that seems to have more than a one in a million chance.
So everyone says “Nah, that’s Pascal’s Wager” and doesn’t fight the x-risk.
Now suppose there are one million independent x-risks like that. There’s about a 63% chance of at least one of them destroying the world (1 − (1 − 1/1,000,000)^1,000,000 ≈ 1 − 1/e ≈ 63%). And the total cost to fix them all is $1 trillion.
(For comparison: $1 trillion is a little more than the US defense budget, and within the wide range of estimates of the annual cost of stopping global warming.)
It seems like a no-brainer for the governments of the world to get together and cough up $1 trillion to prevent a 63% chance of the world being destroyed.
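The aggregate arithmetic can be checked directly; here is a quick sketch in Python using the per-risk numbers stipulated above:

```python
# Check the aggregate numbers: a million independent one-in-a-million
# risks, each costing $1 million to fix.

n = 1_000_000  # number of independent x-risks
p = 1e-6       # chance each one destroys the world

# P(at least one occurs) = 1 - (1 - p)^n; with n*p = 1 this is ~ 1 - 1/e
p_any = 1 - (1 - p) ** n
print(f"P(at least one catastrophe) = {p_any:.3f}")  # ~ 0.632

total_cost = n * 1_000_000  # $1 million per risk
print(f"Total cost = ${total_cost:,}")  # $1,000,000,000,000, i.e. $1 trillion
```

Note that the “at least one” probability comes out to 1 − 1/e ≈ 63% rather than the naive sum of 100%, because some possible worlds are destroyed by more than one of the risks.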
I don’t know much probability theory, but it seems like a problem to say that it is a bad idea to make choice C once, but a good idea to make choice C one million times.
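One way to make the consistency point concrete: assign the world some dollar value and compute the expected value of fixing a single risk. The `world_value` figure below is an assumption of mine for illustration, not anything from the post; any value above $1 trillion gives the same qualitative answer.

```python
# Expected value of fixing one risk, under an assumed dollar value for
# the world (hypothetical figure, for illustration only).

world_value = 1e15  # assumption: $1 quadrillion; any value > $1e12 works here
p = 1e-6            # chance this one risk destroys the world
cost = 1e6          # cost to fix it

ev_of_fixing = p * world_value - cost
print(ev_of_fixing > 0)  # True: fixing this single risk is positive-EV

# The calculation is per-decision and doesn't depend on how many other
# identical risks exist, so accepting the choice once and accepting it
# one million times should get the same answer.
```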
I can see a place for a game-theoretic rejection of Pascal’s Wager in cases where taking it would incentivize people to make up fake risks in order to get money from you (“Pascal’s mugging”), but this seems different from a fully general argument against taking very small chances of very large losses seriously. Like, if you had an accurate asteroid course prediction algorithm, and the algorithm said there was a one in a million chance of an asteroid hitting, that wouldn’t cause the same game-theoretic difficulties as a clever argument someone made up.
Does anyone have a good solution for this?
I think you are right about the asteroid, but in the absence of such trustworthy prediction devices, I think a lot of versions of Pascal’s Wager have trouble dealing with an objection that also works against the original one: “but what if God wants me to not believe in Him?”
It’s hard to reason about low-probability events while keeping both sides of the balance sheet in mind. If we actually know that the probability of something is “one in one million” – Knightian “risk” as opposed to “uncertainty” – that’s one thing. But when talking about things that are very far from being currently understood, we face Knightian uncertainty.
It’s common in these cases to rephrase the uncertainty as very large risk, by saying things like “well, suppose there’s a one in a million chance that MIRI is right about everything, and then … ” But in such a case, we understand things so poorly that we have no idea how much weight to give this relative to the possibility “MIRI is right about everything except X, where X means that a successful MIRI would design an almost-Friendly AI that turns out to be Unfriendly and ruin everything.”
In that area of possibility-space, there are many possibilities, some of them good and some of them bad, and it’s hard to have any idea how to weigh them against each other. Just as, in Pascal’s original Wager, if someone’s already skeptical enough of any God that they need this kind of pragmatic argument to convince them, it’s hard to say why they should give weight to any particular God when there are so many and you can’t have them all at once.

