
From philosophical Bayesians one hears things like “if you buy this set of axioms about how to describe your beliefs, then you should reason in this way.”
One objection being that sometimes you haven’t thought of everything, and what probability should you assign to the set containing “all things I haven’t thought of”? You want to fail gracefully.
So: would it be possible to do the same kind of axiomatic theory-building with “fail gracefully” as an explicit goal? Imagine people telling others that if they violate this or that principle, they will fail poorly. We all think “knowing you’re probably missing something” is good, but could it be formalized? That sounds like a contradiction in terms, but it would be interesting to see what sorts of monsters would result from the attempt.
(There are probably things like this out there right now that I am just ignorant of.)
Tonight’s quasi-misread: I saw the phrase “anti-Laplacian priors” while scrolling through a post and got briefly excited because I thought it was referring to the Laplacian, as in the operator, rather than Laplace’s views on probability.
What would an “anti-Laplacian prior” even be in that sense? I will never know!
My father: You know, I took all these statistics classes in college, but there were certain things I just didn’t learn. For instance, none of my professors ever really explained Bayes’ Theorem to me.
My father: It might have been that they were just not great teachers. But I’ve been wondering if maybe they wanted to keep certain things – their most powerful techniques – to themselves, and not share them with the world.
Me: …
ogingat recently asked a question about the utilitarianism espoused by Less Wrong rationalists. My sense is that the motivation behind questions like these is a curiosity about whether the standard LW positions, like utilitarianism and Bayesianism, are held in a considered, reflective way, one that’s aware of standard counterarguments.
I find myself a lot more interested in these questions with Bayes than with utilitarianism. This is because I have this feeling that I understand what people mean when they say “I’m a utilitarian” in a sort of casual sense, even if they can’t handle all the paradoxes, and I don’t have that sense with “I’m a Bayesian.”
Take effective altruism. It produces a wide range of responses when people first hear about it. Some people are like “this seems right, and very important.” Some other people are like “okay, I’m not sure this is wrong per se, but it just isn’t how I think.” And certain others say “this seems actively incorrect, or even evil.”
When people say “I’m a utilitarian” I think they’re saying something close to “I am in the first group, that ‘instinctively’ finds EA right and important.” Maybe it’s hard to philosophically ground this response so that one could convince other people to have it, if they don’t to begin with – but in any case the fact that one has the response is significant. Not everyone does, and it says something about your practical, day-to-day approach to ethical thinking, as opposed to the “theory” you “endorse.” (It would be nice to identify exactly what those practical implications are, and give them a name that’s distinct from utilitarianism-the-theory.)
I tend to ask LWers these needling questions about Bayesianism more often because I don’t have an analogous understanding there. When someone says “I’m a Bayesian” I actually don’t know what they mean, beyond “I endorse these philosophy-of-science attitudes espoused by E.T. Jaynes” (unless they’re a practicing statistician, in which case it might just mean “I use Bayesian methods”). I don’t know what practical reasoning connotations the term has, and when people spell them out they always sound like common sense (“change your mind in response to evidence,” “extraordinary claims require extraordinary evidence,” “you shouldn’t have hypotheses that can be confirmed by every possible observation,” etc.)
So, a few weeks ago I had a nice long conversation about this with a friend who’s done some workshops and stuff with CFAR.
(1) A lot of what “philosophical bayesians” maintain is based on that result that says “any method of assigning probabilities to outcomes that satisfies some set of conditions is equivalent to Bayesian updating starting with some prior.” I’m sure you know the conditions and the technical weaknesses better than I do, but that’s what the justification is. Since all admissible reasoning systems could be rendered as Bayesian, Bayesian updating is obviously fundamental.
(2) He was really genuinely interested to hear about some of y'all’s critiques of Bayesianism and I pointed him to a couple of y'all’s tumblrs and to Dempster–Shafer theory. He hadn’t ever heard a mathematically sophisticated critique of Bayesian reasoning.
(3) After some reflection, he commented that while CFAR is philosophically strong-bayesian, what they’re actually doing in their workshops is trying to get people to think probabilistically at all. And maybe in terms of Bayes’s Theorem, but that’s a tool held in common by basically all approaches to probability.
Basically, his takeaway is that there’s a mathematically and philosophically interesting discussion about philosophical approaches to statistical reasoning, but it’s mostly irrelevant to “people trying to make better informed decisions without a real model” and CFAR’s attempt to “raise the sanity waterline” or whatever is treating stats at a much lower level than the level where these debates can even be defined.
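The result gestured at in (1) is Cox’s theorem. A rough sketch from memory, so treat the details as hedged: if a real-valued plausibility $w(A \mid B)$ is required to be consistent with Boolean logic and to satisfy functional equations of the form

$$w(A \wedge B \mid C) = F\big(w(B \mid C),\, w(A \mid B \wedge C)\big), \qquad w(\neg A \mid B) = S\big(w(A \mid B)\big),$$

then, under some regularity assumptions (which is where the technical weaknesses live), $w$ can be rescaled into a probability, with $F$ becoming the product rule and $S(x) = 1 - x$ the negation rule. Note that on its face this constrains static degrees of belief; whether it also forces Bayesian updating is exactly the synchronic/diachronic question that comes up later in this thread.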
Interesting. Do you (or others) know of an example of the kind of “probabilistic thinking” that CFAR is trying to get people to do?
This is a serious question – I’ve never been sure what non-common-sense ideas there are here. CFAR’s website says its workshops teach people to “make more accurate everyday predictions using Bayes’ Rule” but I haven’t had much success trying to find out what specifically that means.
ETA: Julia Galef of CFAR talks about practical implications of Bayes’ rule in this video, but everything she mentions seems like mainstream scientific reasoning to me.
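For what it’s worth, here’s the sort of calculation that phrase presumably refers to — a minimal sketch with entirely made-up numbers, so treat it as a guess about what CFAR means rather than anything from their actual curriculum:

```python
# A hedged sketch of "everyday Bayes": updating a belief from a noisy signal.
# All numbers here are invented for illustration.

def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """P(H | obs) via Bayes' rule, for a binary hypothesis H."""
    numerator = prior * p_obs_given_h
    return numerator / (numerator + (1 - prior) * p_obs_given_not_h)

# "My coworker seemed curt today -- are they annoyed with me?"
prior = 0.05              # base rate: they're rarely annoyed with me
p_curt_if_annoyed = 0.80  # curtness is likely if they're annoyed
p_curt_otherwise = 0.20   # but it also happens when they're just busy

print(posterior(prior, p_curt_if_annoyed, p_curt_otherwise))
# roughly 0.17: a real update, but the base rate keeps it far below certainty
```

Which, again, just looks like “remember base rates” — i.e. mainstream scientific reasoning with a formula attached.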
(via jadagul)
seems to me like assuming that your probabilities sum to 1 is in some sense a probabilistic version of the excluded middle
We just need to think really hard about how the world might be. It might be raining. It might not be raining. Is there anything we’ve left out? Think, people, think!
The issue I’m referring to is where you have “P or ~P” and P is something like “one of the theories of this phenomenon that currently exists in the literature will be fully vindicated”
I mean, either P or ~P, sure, but assigning probabilities to P and ~P is hard because, by definition, you don’t know much about what ~P looks like. There might be some ingenious theory no one has thought of yet, but right now, you can’t tell apart a world in which there is such a theory from a world in which there isn’t.
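To make that concrete, here’s a toy sketch (entirely my own construction, with invented numbers): whatever prior mass you put on the catch-all “theory no one has thought of yet,” and whatever likelihood you pretend it assigns to the data, are free parameters — and they visibly move the posterior on the theories you do know about.

```python
# Toy illustration (invented numbers): a Bayesian who reserves prior mass
# for "some theory no one has thought of yet" must also guess what that
# unknown theory would predict -- and both guesses move the posterior.

def update(prior, likelihood):
    """Posterior over hypotheses by Bayes' rule, renormalized to sum to 1."""
    post = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(post.values())
    return {h: p / total for h, p in post.items()}

# Likelihood of the observed data under each known theory; for the
# catch-all we can only make something up, since by construction we
# don't know what the unthought-of theory predicts.
likelihood = {"theory_A": 0.9, "theory_B": 0.3, "catch_all": 0.5}

for mass in (0.01, 0.30):  # two equally defensible catch-all priors
    rest = (1 - mass) / 2
    prior = {"theory_A": rest, "theory_B": rest, "catch_all": mass}
    post = update(prior, likelihood)
    print(mass, round(post["theory_A"], 2))
# theory_A's posterior swings from about 0.74 to about 0.55 purely from
# the arbitrary choice of catch-all mass.
```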
Like I told perversesheaf just now: First, I apologise for not responding earlier, I needed a break from tumblr because it was cutting too much time from work and now I’m back, though I’ll understand if you don’t want to continue this discussion. That said…
No worries. My own pattern is to engage heavily on an issue for a while until I get bored with it and stop.
Regarding Solomonoff induction being uncomputable: it requires computing Kolmogorov complexity, which is uncomputable, so the uncomputability follows from that.
And in any case, since I can’t understand this Dempster-Shafer theory thing right now, I can’t tell much of anything, though this has shaken my confidence in Bayes a lot more than all arguments prior to this. But if Dempster-Shafer theory is “the correct one,” or if some other theory is “the correct one,” then what’s the bad Cox Axiom? Real numbers?
What do you think?
I don’t think there is a “right” theory – I think different approaches work well in different situations, so the best you can manage is a patchwork of heuristics.
I think all that Cox tells you is that if you use one real number to represent degrees of belief then that number ends up being probability. It does not then tell you that the Bayesian method (prior/likelihood/posterior) is the only method that works (I think nostalgebraist called it diachronic vs synchronic Bayes’ theorem). There is nothing to stop someone from using frequentist methods with a subjective probability. This is in part what calibration is about – mixing subjective with frequentist notions of probability.
And you don’t have to use a single number. You can use two and get something like Dempster Shafer or fuzzy sets or whatever.
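For concreteness, here’s a minimal sketch of the “two numbers” idea from Dempster–Shafer theory (the scenario is my own toy example): mass goes on *sets* of possibilities, and each proposition gets an interval [belief, plausibility] rather than a single probability.

```python
# Minimal Dempster-Shafer sketch: mass functions assign weight to sets of
# possibilities, so a proposition gets an interval, not one number.

def belief(masses, prop):
    """Mass committed wholly inside the proposition (lower bound)."""
    return sum(m for s, m in masses.items() if s <= prop)

def plausibility(masses, prop):
    """Mass not committed against the proposition (upper bound)."""
    return sum(m for s, m in masses.items() if s & prop)

# Toy frame: did the butler (b) or the gardener (g) do it?
# 0.4 of our evidence points specifically at the butler;
# the remaining 0.6 is uncommitted between the two.
masses = {frozenset("b"): 0.4, frozenset("bg"): 0.6}

butler = frozenset("b")
print(belief(masses, butler), plausibility(masses, butler))
# -> 0.4 1.0: the gap is exactly the evidence we haven't pinned down
```

Under a single-number (Bayesian) treatment that uncommitted 0.6 would have to be split somehow; here it just stays unassigned.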
We had a conversation about this a while back and got confused, because all the standard texts (besides Jaynes) treat Cox’s Theorem as a justification for
“if you use one real number to represent degrees of belief then that number ends up being probability” (synchronic)
but Jaynes argues that it gets you the update (diachronic) as well, and I couldn’t see anything wrong with his argument. This was confusing because if you could get the update from Cox, you’d think someone besides Jaynes would have noticed.
It seems like the big issue is accepting the idea of having a single degree of belief in every proposition. If you allow for intervals you get things like Dempster–Shafer. It all comes down to what you’re trying to do and whether “every proposition gets a single degree of belief” is appropriate.
(I think an important desideratum, for a lot of purposes, is to have some practical, gracefully-failing way of dealing with the fact that you haven’t thought of every conceivable proposition, i.e. in many cases the entire set of ideas you’ve thought of won’t have total probability 1. It’s awkward to try to assign a degree of belief to a box labelled “all the things I haven’t thought of yet,” and there are some results showing that this creates problems for Bayesian inference [too busy to look them up right now].)
I just thought of the most annoying imaginable Radical Bayesian answer to the stopping rule paradox: “Bayesianism is the right system even if it gets this one thing wrong, in the same way that arithmetic should still be used even though it can’t prove Gödel statements”
Bwuh! but! ah!
Arrrgh!
I know no one’s actually saying that, but reading it is an incredibly frustrating experience! Congratulations for compressing so many distinct philosophy things that annoy me into one sentence!
:)
(via somervta)