…am I just too Bayesian to understand why this is supposed to be weird?
Yes. Imagine if you believed in the frequentist or propensity definitions of probability. Almost all of these questions are nonsensical then (although you should be able to handle the aliens question with propensities).
Alternatively: most of these questions are about the macro structure of the universe. How do you answer such questions without some kind of universal prior?
Ok but then how do you have beliefs of any kind about reality, or ever bet?

My opinions about things like this sometimes change, and are sometimes vague/confused, and I’ve rambled a lot about it before.
But my current opinion is something like:
I don’t have a prior with support over every conceivable outcome (using “outcome” in the broadest sense, so that things like those in the screenshot apply). I don’t think anyone actually does.
What we have is more like a mental function we can query that outputs “how likely does this feel to me?” We can, if we wish, try to translate these feelings into numbers in [0,1]. But calling these numbers “probabilities” is inappropriate in most cases, because the mental function isn’t consulting some underlying distribution obeying the probability axioms, except in toy problems like rolling fair dice (where the function will, so to speak, call another function that actually does the math).
In particular, the mental function generally doesn’t even use a consistent outcome space, and e.g. if A and B are both things I “have no idea about,” it will also tell me that I equally “have no idea about” the event A&B. (It commits the conjunction fallacy because it has no picture of an outcome space with regions that could be labeled “A,” “B,” and “A&B.”)
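To make that concrete, here’s a toy sketch (the function and numbers are mine, purely for illustration): a trio of “how likely does this feel?” numbers can fail a basic coherence check that any genuine probability distribution would pass.

```python
# Coherence check on credences translated into numbers in [0, 1].
# Any genuine probability distribution must satisfy
# P(A and B) <= min(P(A), P(B)); "felt" credences need not.

def violates_conjunction_rule(p_a: float, p_b: float, p_a_and_b: float) -> bool:
    """True if these three numbers cannot come from any single
    probability distribution over a shared outcome space."""
    return p_a_and_b > min(p_a, p_b)

# The classic conjunction-fallacy pattern: the conjunction "feels"
# more likely than one of its conjuncts, which no distribution allows.
print(violates_conjunction_rule(0.3, 0.6, 0.4))  # -> True
print(violates_conjunction_rule(0.3, 0.6, 0.2))  # -> False
```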
How does this relate to betting? Well, I’m wary of making bets by using credences which generally will not (except by coincidence) obey the probability axioms, for the usual reasons. (I don’t exactly mean “Dutch books” because I think that issue is a bit different from how it’s usually presented, but I think that human biases make it easy to get tricked into bad bets and incoherent credences only make it easier.)
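For what it’s worth, the arithmetic behind the usual worry looks like this (a minimal sketch with made-up numbers, not a claim about any real bet): if my stated credences in A and not-A sum to more than 1, someone selling me both bets at those prices profits no matter how A turns out.

```python
# Sketch: why credences that break the axioms invite guaranteed-loss bets.
# If I price a $1 ticket on A at p_a and a $1 ticket on not-A at p_not_a,
# with p_a + p_not_a > 1, a bookie who sells me both collects more than $1
# in ticket prices but pays out exactly $1 whichever way A turns out.

def bookie_profit(p_a: float, p_not_a: float, stake: float = 1.0) -> float:
    """Guaranteed profit from selling me both tickets at my stated prices."""
    collected = (p_a + p_not_a) * stake  # what I pay for the two tickets
    paid_out = stake                     # exactly one ticket pays off
    return collected - paid_out

print(bookie_profit(0.7, 0.6))  # about 0.3, regardless of whether A is true
print(bookie_profit(0.5, 0.5))  # coherent prices: no guaranteed profit
```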
One could then object that surely I’d take some sufficiently skew bets on any given question. Would I really turn down a bet that costs me $1 if MWI is false and pays me $1 billion if MWI is true? And couldn’t you back out a “revealed probability” from this? I talk about this issue here – the upshot is that while I might take such a bet, this has nothing to do with the specific concept I’m being asked about, but is simply an instance of my generic, default betting behavior in response to questions where my mental function outputs “oh god who even knows.”
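For concreteness, here is what “backing out a revealed probability” would mean under the standard indifference assumption (the function name and numbers are mine): set p · payout − (1 − p) · loss = 0 and solve for p.

```python
# "Revealed probability" from a skew bet: if I'm exactly indifferent to a
# bet that loses me `loss` dollars or wins me `payout` dollars, then
# p * payout - (1 - p) * loss = 0, which gives p = loss / (payout + loss).

def implied_probability(loss: float, payout: float) -> float:
    """Credence in the proposition implied by indifference to the bet."""
    return loss / (payout + loss)

# The lose-$1 / win-$1-billion bet from above implies a credence of
# roughly one in a billion -- if my acceptance actually reflected a
# credence, which is the point in dispute.
print(implied_probability(1.0, 1e9))  # ~1e-9
```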
Even then, I’d probably reject all such bets in real life. Partially because I’d be suspicious about why the other side is willing to offer them, but more fundamentally, because I try to do things that are designed for actual probabilities – like expected value calculations – only when I feel like my credences come from some actual knowledge about the underlying outcome space.
That is, I won’t pay a Pascal’s Mugger, not because I “believe there is probability zero that the mugger has magical powers,” but because I don’t have any informed breakdown of how the world could be, such that some parts of it are labeled “these magical powers are possible,” specifically a breakdown I would have been able to give you before I ever encountered the mugger. I file “the mugger has magical powers” under “hey, anything’s possible,” rather than under “I have a number of theories of how the world might work, and under this subset of them, the mugger could have magical powers.”
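A quick sketch of why I distrust EV calculations in this situation (all the numbers here are invented): once the “probability” is just a made-up tiny number, the mugger can always name a payoff big enough to dominate it, so the verdict is driven entirely by a figure with no informed basis.

```python
# Naive expected-value calculation for paying the mugger, treating an
# invented tiny credence as if it were a probability.

def ev_of_paying(p_magic: float, promised_payoff: float, cost: float = 5.0) -> float:
    """Expected value of handing over `cost` dollars, if p_magic were real."""
    return p_magic * promised_payoff - cost

# Whatever tiny p I invent, a large enough promise makes "pay" look rational:
print(ev_of_paying(1e-12, 1e15))  # -> 995.0: positive EV from pure invention
# ...while a merely huge promise does not, so everything turns on numbers
# I have no basis for choosing:
print(ev_of_paying(1e-12, 1e3))  # negative EV
```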
To sum up, I make decisions in various ways, and I use ways that approximate EV maximization when I think I’m in the sort of domain where I can construct something like a probability distribution on a well-defined outcome space. I think it’d be actively irrational, or at least totally without rational justification, to use that sort of technique when I can’t do this even approximately.
This includes cases like “MWI is more or less correct” and “God exists.” I do have opinions about these questions – my sense is that MWI would at least need substantial revision to be correct, and that God almost certainly doesn’t exist. But I have nothing like a probability space associated with these questions. (For instance, it’s conceivable that the problem of evil is correctly resolved by “it’s all God’s plan and was all a good idea for some reason beyond our current understanding,” but I don’t have a picture of all the ways in which this could be true nor any sense about “how likely” it is for “a typical universe” to be configured in any one of these ways; again, for me this falls under “hey, anything’s possible.”) Thus, I can’t provide numbers that I could justly call “probabilities.”
ETA: I don’t personally frame this “frequentist defn. vs. Bayesian defn. of probability,” but rather as a distinction between beliefs about how to do inference correctly in real life. It’s not that I think “degrees of belief” can’t be probabilities by definition, but rather that treating my degrees of belief like probabilities in all cases would be bad practice.
Longish response:
(Responding to the most recent reply, the one under a cut)
I worry this will sound arrogant or hostile, but I’ve been reading/thinking/talking about these issues for a long time, so I don’t think I’m just making some basic misunderstanding about what Bayesians mean by certain terms. Relatedly, most of the issues you raise are things I’ve talked about on tumblr before at some point – see my Bayes tag (which I realize is long and disorganized, I just don’t want to repeat myself).
A few points (again, there is more in the tag):
I understand that “degrees of belief as coherent probabilities” is an ideal for rational agents rather than a description of human psychology. The practical question is then “what should I do, given that I have degrees of belief that don’t work like probabilities?” For instance, should I still do expected utility calculations (pretending my degrees of belief are probabilities)? In all cases, or only in some?
In some cases our “failures of coherence” are just due to simple mistakes that can actually be patched in practice, with stuff like “don’t neglect the base rate.” In other cases it has the much deeper cause that we don’t know what the outcome space looks like, so we can’t put a distribution over it, even a flat one. (One consequence is that it is basically impossible to deal with conjunctions sensibly in these cases – I made some posts with more detail about this a while back.)
Since we are so very far from being coherent rational agents, it’s not clear that behaving more like those agents in any single, particular way will be good rather than bad for us. In optimization terms, the ideal is far enough away that it doesn’t tell us much about the local gradient, so to speak. I think the use of the word “probability” in things like the OP picture comes from a belief that in fact we are sufficiently close to the ideal that “moving towards the ideal” approximates “moving along the gradient,” i.e. “these aren’t probabilities, but they’re sort of close to being probabilities and we rational folks are trying to make them even closer.”
Incidentally, I think the Dutch book argument for coherence has serious problems, although there are other arguments for the same conclusion.
(via just-evo-now)