bayes: a kinda-sorta masterpost
I don’t think the fact that humans are bad at thinking up logical implications is a very strong argument against Bayes, any more than “But Harold, you said you loved chocolate earlier!” is an argument against having preferences.
So, I will agree that there’s this non-monotonic thing. This is indeed a very good point against using Bayes as a mental tool! I am not disagreeing with that!
What I do disagree with is the idea that it’s ipso facto problematic. I think the correct way to do this is to treat your first estimate as a preliminary one, and then use the other logical-implication questions as a way to generate a battery of knowledge in a kinda organic fashion. To use the original “California secession” thing: let’s say I think secession is unlikely, so I throw out 98% as my probability that it doesn’t happen; then someone else asks me the “USA still together” question, so I also generically throw out 98%, but A HA!!!!!! THIS SEEMS WRONG, because the set of situations involving the US together but California leaving seems, I dunno, small or whatever, so I end up adjusting the probabilities, repeating until I’ve thought of all “relevant” probabilities.
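The coherence constraint behind that “A HA” moment can be sketched in a few lines. The events and numbers below are my own illustration, not from the post: the point is just that if statement A implies statement B, then P(A) can’t exceed P(B), and noticing a violation is a cue to revise.

```python
def coherent(p_sub, p_super):
    """A sub-event (A implies B) can never be more probable than
    the event it implies: P(A) <= P(B)."""
    return p_sub <= p_super

# First pass: I generically guess 2% for "California secedes" and
# 2% for "the US breaks up somehow". Technically coherent, but only
# at the boundary -- it implies secession is the ONLY way to break up.
print(coherent(0.02, 0.02))   # True, but suspiciously tight

# After noticing secession is just one of many breakup scenarios,
# I revise so the superset event is strictly more likely:
p_secede, p_breakup = 0.005, 0.03
print(coherent(p_secede, p_breakup))  # True

# A flat-out violation would look like this:
print(coherent(0.05, 0.02))   # False -- time to adjust
```

The revision loop the post describes is just: generate another implication question, run this check, and adjust until no question you can think of trips it.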
But logically speaking, isn’t this troublesome? Isn’t it terrible that, in theory, an adversary can choose a sequence of questions which lets them set my probabilities? Well, not really. My claim is that thinking through these logical-implication questions provides information, because humans are really bad at accessing all the information they have; and yeah, sure, if the adversary controls how a person accesses their information, of course the person is screwed. So you hope that people have good internal “implication generating” machinery, such that by the time they have worked through a bunch of subset questions, they have dumped out all the relevant information, and the ordering effects are washed out.
Which is a much more elaborate way of saying “guys stop throwing out random probabilities and sticking to them if you don’t have good intuition/facts doing cognitive work aaaaaaaahh”
I guess I can agree that nothing I said above is specifically motivated by Bayes, except for this vague feeling of “well, shit, it turns out I’m actually really bad at incorporating all relevant information,” and I think that’s really just unavoidable.
I don’t think this is a problem with humans specifically; I think it’s much more fundamental. The real issue is that these kinds of “obviously nested” statements have an “easy to check, hard to find” property, like NP-complete problems.
Let’s define “A is obviously nested in B” as “if you describe both A and B to me, it’ll be immediately obvious to me that A is sufficient but not necessary for B.” And let’s define an “obviously nested pair” as A, B where one is obviously nested in the other.
The “US in 2100” statements mentioned earlier are all obviously nested pairs with one another. But the ones mentioned are just a few examples; there are infinitely many statements of the same form, asking about slightly bigger or smaller regions of the US, that also form obviously-nested pairs with all other such statements.
And that whole infinite chain is just one “direction” in hypothesis space. You can think about any other subject – existence of various markets and sub-markets (will candy be sold? will lollipops?), demographics and sub-demographics, scientific ideas and special cases thereof, you name it – and produce an infinite obviously-nested chain like this.
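Here’s a minimal sketch of one such chain, using the candy/lollipop example; the statements and probabilities are made up for illustration. Along an obviously-nested chain, each statement implies the next, so coherent probabilities must be non-decreasing as you move from specific to general:

```python
# A chain ordered from most specific to most general:
# "lollipops sold" implies "candy sold" implies "retail exists".
# All names and numbers here are illustrative, not from the post.
chain = [
    ("lollipops are sold in 2100", 0.60),
    ("candy is sold in 2100", 0.80),
    ("retail markets exist in 2100", 0.95),
]

def chain_is_coherent(chain):
    """Each statement implies the next, so along the chain the
    probabilities must be non-decreasing."""
    probs = [p for _, p in chain]
    return all(a <= b for a, b in zip(probs, probs[1:]))

print(chain_is_coherent(chain))  # True
```

The catch, of course, is that this check only covers the finitely many statements you bothered to write down, while the chain itself extends indefinitely in both directions.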
In finite time (much less polynomial time), you can only explicitly think about some vanishingly small subset of these statements. Yet you implicitly know infinitely many facts about them (about each chain, in fact, of which there are infinitely many). There’s no way to sit down and think enough beforehand that all of the obvious-nesting information has been dumped out into an explicit representation (and that representation would take infinite space anyway).
Now, maybe there is a way to handle this in practice so that it doesn’t hurt you too much, or something. Such a theory would be very interesting, but as far as I know it doesn’t exist, and it would have to exist for us to begin talking about how a finite being could faithfully represent its implicit knowledge in a prior.
(This is a human problem in the sense that you could make a machine which would lack all this implicit knowledge. That machine would not have this problem, but it would know less than we do, so we’d be throwing away information if we tried to imitate it.)
(via lostpuntinentofalantis)

