This stopping rule problem is tripping me out!
Suppose I’m a second Bayesian, and I don’t see any of the data, just the hired statistician’s final opinion. I know exactly what the hired statistician’s final opinion will be beforehand, so it can’t possibly give me any new information about the effectiveness of the drug.
So now I disagree with the statistician. Why? The statistician says, “I had all of this data, and I updated on it, and this is my considered opinion.” And I say, “yeah, but I knew that would be your considered opinion, so updating on it doesn’t change anything for me.”
Suppose now you give me the data. My opinion surely shouldn’t change – it’s just some data set producing the statistician’s opinion, as I knew it must be. There’s no way it could be especially convincing relative to other possible such data sets, not if the statistician and I started out with the same prior about the drug’s effectiveness. So now the statistician and I disagree, even though we know the same information?
What is now the difference between “me” and “the statistician”? If you put the “me” from this example in the statistician’s place, can “I” reach the right conclusion? How? (I haven’t read all of the posts on this problem, so someone may have already answered this.)
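The "I knew that would be your opinion, so it tells me nothing" step can be sketched as a one-line Bayes computation. This is a minimal sketch, not anything from the thread; the prior value 0.3 is an arbitrary assumption, and the point is only that when the report is certain under both hypotheses, conditioning on it cannot move the prior:

```python
# X = "drug is effective", E = "statistician reports a final opinion".
# The second Bayesian knows in advance exactly what the report will be,
# so P(E) = 1 whether or not X holds.
p_x = 0.3                 # shared prior that the drug works (assumed value)
p_e_given_x = 1.0         # the report is certain either way
p_e_given_not_x = 1.0

p_e = p_e_given_x * p_x + p_e_given_not_x * (1 - p_x)
posterior = p_e_given_x * p_x / p_e

# Conditioning on a certainty leaves the prior untouched.
print(posterior)
```

Since the likelihood ratio is exactly 1, the posterior equals the prior, which is the second Bayesian's complaint in a nutshell.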
I think the problem here is that you don’t know everything about the ways in which the statistician differs from you. If you’re reaching different conclusions, you must either have different priors or somehow be using different methods (Aumann’s Agreement Theorem).
But the reason the hired statistician’s opinion doesn’t give you information is that you’re working under the assumption that the drug company is cheating via its stopping rule. The first statistician either doesn’t know this, in which case there is a difference in information between the two of you, or doesn’t care and is therefore not acting as a Bayesian.
I’m still confused about the stopping rule problem. If I keep adding subjects, there is some number of subjects for which the drug will appear to be effective. But there are practical limits. Wouldn’t the unscrupulous company switch to fabricating data well before using a million (or 3^^^3) test subjects, and shouldn’t the Bayesian somehow work this into zir model? Does stopping actually work out in practice, or is there a limit, especially for small effect sizes? And if there is a limit related to the magnitude of the effect, wouldn’t even a manipulated study be evidence in favor of the drug working?
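The "does stopping actually work out in practice" question can be probed with a quick simulation. This is a rough sketch under assumed parameters (a fair coin standing in for an ineffective drug, a one-sided z-test at 1.96, a cap of 10,000 subjects, 200 simulated studies), not a definitive analysis:

```python
import math
import random

def optional_stopping_trial(max_n, rng, threshold=1.96):
    """Simulate an ineffective drug (fair coin) with optional stopping:
    stop as soon as the one-sided z-statistic for 'success rate > 0.5'
    exceeds the threshold. Returns the stopping time, or None if the
    cap is reached without a 'significant' result."""
    successes = 0
    for n in range(1, max_n + 1):
        successes += rng.random() < 0.5
        if n >= 10:  # require a few observations before testing
            z = (successes - n / 2) / math.sqrt(n / 4)
            if z > threshold:
                return n  # "significant" result obtained under the null
    return None

rng = random.Random(0)
runs = [optional_stopping_trial(10_000, rng) for _ in range(200)]
successes_only = [n for n in runs if n is not None]

print(f"{len(successes_only)}/200 runs reached 'significance' under the null")
if successes_only:
    median = sorted(successes_only)[len(successes_only) // 2]
    print(f"median stopping time among those runs: {median}")
```

The pattern this kind of simulation tends to show is exactly the practical limit asked about: a cheating company does get "significant" results from a null drug much more often than 5% of the time, but many runs never get there before the subject cap, and the stopping times are often large, which is where the fabrication-versus-patience trade-off would come in.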
The first statistician either doesn’t know this, in which case there is a difference in information between the two of you, or doesn’t care and is therefore not acting as a Bayesian
The point of the example was that “acting as a Bayesian” means not changing any of your beliefs in light of the stopping rule, even if you know about it. (The stopping rule isn’t “evidence” about the drug; if you stop at time N according to the rule, you have exactly the same data about the drug you would have if there had been no stopping rule and it just happened to be time N.) The person who posted about it yesterday presented this as a problem with Bayesianism.
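The parenthetical claim, that stopping at time N by rule gives you exactly the same evidence as happening to be at time N, is the likelihood principle, and it can be checked numerically. A minimal sketch, with assumed numbers (7 successes in 10 trials, a uniform grid prior): the fixed-N design has a binomial likelihood, the stop-at-7-successes design has a negative-binomial likelihood, and the two differ only by a constant factor, so the posteriors are identical:

```python
import math

def binom_lik(p, n, k):
    # Fixed-N design: probability of k successes in n trials.
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def negbinom_lik(p, k, n):
    # Stop-at-k-successes design: probability the k-th success lands on trial n.
    return math.comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)

grid = [i / 20 for i in range(1, 20)]  # grid of effectiveness values in (0, 1)

def posterior(lik):
    # Uniform prior over the grid; normalize the likelihood.
    weights = [lik(p) for p in grid]
    total = sum(weights)
    return [w / total for w in weights]

post_fixed = posterior(lambda p: binom_lik(p, 10, 7))
post_stop = posterior(lambda p: negbinom_lik(p, 7, 10))

# The leading combinatorial constants cancel in normalization,
# so the stopping rule leaves the posterior unchanged.
max_diff = max(abs(a - b) for a, b in zip(post_fixed, post_stop))
print(max_diff)
```

Both likelihoods are proportional to p^7 (1-p)^3; only the constants math.comb(10, 7) and math.comb(9, 6) differ, and those wash out when the posterior is normalized.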
raginrayguns has been posting about the practical vs. theoretical thing and has been doing some simulations. I think he understands that stuff better than I do and reading his posts would be better than reading something I might say about it.
(via angrybisexual)
adzolotl said: I think you could formalize this if you convert “the statistician’s final opinion” into a disjunction of every possible, er, microstate. Because P(X|A^B^C) shouldn’t be lower than min(P(X|A),P(X|B),P(X|C)), right?
