ogingat replied to your post: “Now that I think about it, I think probabilistic…”

For the evidentialist, at least, it’s not clear that it matters how a predictor works. You might say, “Okay, then let’s not be evidentialists”; that’s certainly what David Lewis thought you should say! But it’s still “trippy” if it cleaves theories.

I may be misunderstanding this, but I think the evidentialist does care?

With a “detailed simulation” predictor, the evidentialist one-boxes (this is the standard result).

With a “demographic statistics” predictor, what does the evidentialist do?  It seems to me that the evidentialist will two-box here if it knows everything the predictor knows.  That is, if the evidentialist knows that it’s a cat, that X% of cats one-box, and that this is the sole and complete basis for the predictor’s prediction, then it learns nothing new about the boxes by supposing that it chooses to one-box or to two-box.  In other words, P(money in both boxes | I one-box & I am a cat)  =  P(money in both boxes | I two-box & I am a cat)  =  P(money in both boxes | I am a cat).

The predictor doesn’t know whether the agent is going to one-box or two-box, only that it’s a cat, so conditioning on anything beyond “I am a cat” has no effect on these probabilities.  Hence the evidentialist will two-box: the probabilities are the same either way, and two-boxing gets you both boxes.
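To make that concrete, here’s a minimal sketch of the expected-value calculation.  The payoffs (a possible $1,000,000 in the opaque box, a guaranteed $1,000 in the transparent one) are the standard Newcomb amounts, assumed here rather than stated anywhere above:

```python
# Evidentialist expected values when the predictor uses only demographic
# statistics and the agent knows this.  Payoff amounts are assumed
# (standard Newcomb values), not taken from the post.

OPAQUE = 1_000_000      # possible contents of the opaque box
TRANSPARENT = 1_000     # guaranteed contents of the transparent box

def expected_value(action: str, q_fill: float) -> float:
    """q_fill = P(opaque box is full | I am a cat).

    Because the predictor looks only at the demographic, conditioning on
    the agent's own action doesn't move q_fill:
    P(full | one-box & cat) = P(full | two-box & cat) = q_fill.
    """
    ev = q_fill * OPAQUE
    if action == "two-box":
        ev += TRANSPARENT   # two-boxing always adds the transparent box
    return ev

for q in (0.1, 0.5, 0.9):
    print(q, expected_value("one-box", q), expected_value("two-box", q))
# Two-boxing beats one-boxing by exactly $1,000 at every value of q.
```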

On the other hand, there could still be an evidentialist argument here for one-boxing, based on the coordination problem.  If you get a bunch of evidentialists together, they may make a pact to one-box, so that the predictor learns “this demographic tends to one-box” and they all get more money.  This depends on the existence of other agents and on some way to get around coordination problems and stop agents from free-riding, none of which is present in the original problem statement.  But assuming all of that is possible (and life would sure be depressing if it weren’t), one can imagine a big community of evidentialists one-boxing with a “demographic statistics” predictor.
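Here’s a toy version of that coordination problem.  The model is my own gloss, not part of the original setup: assume the predictor fills the opaque box for any given cat with probability equal to the fraction of cats who one-box, with the same assumed payoffs as above:

```python
# Toy model of the demographic coordination problem.  Assumption (mine):
# the predictor fills the opaque box for a cat with probability equal to
# the overall one-boxing rate among cats.

OPAQUE = 1_000_000
TRANSPARENT = 1_000

def average_payoff(one_box_rate: float) -> float:
    """Average payoff per cat when a fraction `one_box_rate` of cats one-box."""
    fill_prob = one_box_rate    # predictor just tracks the demographic
    one_boxers = one_box_rate * fill_prob * OPAQUE
    two_boxers = (1 - one_box_rate) * (fill_prob * OPAQUE + TRANSPARENT)
    return one_boxers + two_boxers

for rate in (0.0, 0.5, 1.0):
    print(rate, average_payoff(rate))
# The demographic as a whole does best when everyone one-boxes, but any
# single cat still gains $1,000 by defecting to two-boxing: the standard
# free-rider structure.
```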

In short, evidentialists always one-box with a “detailed simulation” predictor, but they may either one-box or two-box with a “demographic statistics” predictor, depending on whether they can coordinate.

(ETA: if the individual evidentialist doesn’t know how the predictor is predicting, only that it’s using demographic information with some success rate, I think the evidentialist will one-box?  But again, this shows that the evidentialist cares about how the predictor works.  It will switch from one-boxing to two-boxing if it gets complete information about what the predictor is doing.)
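For contrast, here’s the expected-value sketch for that case: the evidentialist knows only the predictor’s overall success rate p, so (on the evidentialist reading) its own choice is evidence about what was predicted.  Same assumed payoffs as before:

```python
# Evidentialist expected values when only the predictor's success rate p
# is known.  With no information about the mechanism, the evidentialist
# treats its own choice as evidence:
#   P(opaque box full | I one-box) = p
#   P(opaque box full | I two-box) = 1 - p

OPAQUE = 1_000_000
TRANSPARENT = 1_000

def ev_one_box(p: float) -> float:
    return p * OPAQUE

def ev_two_box(p: float) -> float:
    return (1 - p) * OPAQUE + TRANSPARENT

for p in (0.55, 0.9, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
# One-boxing wins whenever p > (OPAQUE + TRANSPARENT) / (2 * OPAQUE),
# i.e. p > 0.5005: any predictor even slightly better than chance.
```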

Now that I think about it, I think probabilistic Newcomb still has the problem that perfect predictor Newcomb has: you are not told how the predictor achieves its predictions, only that it always has a certain success rate.  This leaves you to conclude that it has that success rate even when you do strange things like run a copy of it and do the opposite of what the copy says.

All of the trippy aspects of the thought experiment have to do with this assumption.  The predictor is a black box that somehow anticipates everything you could possibly think about it.  (Even in the probabilistic version, it achieves the same success rate no matter what you think, which is sort of the same thing.)

Once you specify what the predictor is actually doing, the problem dissolves.  If it is using some sort of detailed simulation that actually takes into account all of your thoughts about it, then that means something like backwards causation really is happening (and also means that the predictor cannot be accurate in the infinite regress case – you cannot ask it to faithfully simulate a copy of itself*).  That situation is weird, but also very unlike the situations we tend to face in the real world.  On the other hand, if the predictor is just using demographics or the like, it does not have the detailed information that you have about your decision procedure, and you don’t have to worry about your decision now “causing” suboptimal box fillings in the past.

(You do have to worry about ideas like “most people end up two-boxing because they think their choice can’t influence the result, which leads the predictor to think you’ll two-box, which you don’t want.”  This is a real problem, but it’s more of a coordination problem than a trippy retro-causality problem.  It’s analogous to a coordination problem with any kind of actuarial prediction, e.g. “insurance for my demographic will cost less if people in my demographic take fewer risks, but since the demographic trait is unchangeable, everyone figures they can’t affect the insurance price themselves, and ends up taking a lot of risks that drive the price up for everyone.”  Standard free-rider problem.)

*(although if the <100% probabilistic accuracy were achieved by some sort of simplification, it’s possible that an infinite regress of simplified predictors could converge)