
hot-gay-rationalist:

nostalgebraist:

raginrayguns:

nostalgebraist:

“Where do Bayesians get their numbers from anyway,” installment (n+1)

But, I mean, calibrating your numerical probabilities is a thing you can do, right? There are like, books about it.

In the sense of, “out of all the things you say have a probability ¼, ¼ of them happen”

and, it seems like you should be able to, because it’s just a translation of how much evidence you have. And people have a pretty good sense of that, they use it for deciding whether to say “probably not”, “maybe”, or “probably”.
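The calibration check described above ("out of all the things you say have a probability ¼, ¼ of them happen") can be sketched in a few lines. This is a minimal illustration, not anyone's actual method, and the prediction log here is made up:

```python
# Group logged predictions by the probability stated, then compare
# the stated probability with the observed frequency of outcomes.
from collections import defaultdict

# Hypothetical log of (stated probability, did the thing happen?) pairs.
predictions = [
    (0.25, False), (0.25, True), (0.25, False), (0.25, False),
    (0.75, True), (0.75, True), (0.75, False), (0.75, True),
]

buckets = defaultdict(list)
for p, outcome in predictions:
    buckets[p].append(outcome)

for p, outcomes in sorted(buckets.items()):
    freq = sum(outcomes) / len(outcomes)
    print(f"stated {p:.2f} -> observed {freq:.2f} over {len(outcomes)} predictions")
```

Being well calibrated means the observed frequency in each bucket roughly matches the stated probability; here, by construction, both buckets match exactly.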

Well, they have a good sense of it within a certain range, below that everything gets rounded to “impossible” and above that everything gets rounded to “definitely”. So when you’ve posted previous examples of Bayesians making up numbers, and they were like 10^-9, I was like, “haha, silly.”

But this seems totally the kind of thing a human can do with practice

Sure, and to be clear here, I’m not objecting to the number.  A verbal equivalent would have been just as bad.  I’m objecting to the assignment of a plausibility in this case.

When I introspect about sufficiently “big” philosophical claims like the one described above, I don’t feel anything like a level of confidence, in the sense I might have a level of confidence about a near-future political outcome or something.

In more down-to-earth contexts I have a sense of “how much evidence I have.”  Claims like the one in question here seem to interact with the concept of “evidence” in a much thornier way.  The whole of history has only happened once, so it’s difficult to say something like “sequences of this sort tend to continue regularly in ‘histories.’ ”  Hanson is relying on a certain set of analogies to make his prediction, but those are analogies between previous events and “truly new” things like superhuman AI.

The argument seems not to be “we have experience in this domain, and it leads to this sort of predictive model,” but “we have no experience in this domain, and the least intuitively bad model I can think of is this one.”  He really doesn’t know whether his model is going to be any good or not, but he doesn’t have anything better to use.

It seems strange to assign a plausibility in this case; it’s not what I would do intuitively.  (It’s made stranger by the fact that he’s arguing with Yudkowsky and seems uncertain at this point what Yudkowsky actually believes — what then should he assign for the plausibility of his model in the possible outcome where Yudkowsky convinces him on some points, given that he doesn’t yet know what those points are?)

The point I’m making here is “in some cases it seems strange to assign plausibilities, and so it seems like people must be assigning them because they think ‘Bayesianism’ requires them to, when Bayesianism is really only a theory of what should be done by creatures who assign plausibilities, and says nothing about those who don’t.”  (Compare to Shalizi’s “optimal theory of six-legged walking.”)

What I thought when I read this was a thing a friend said once: “Have you considered that your intuitions are just wrong?” :P

I jest. What I actually mean is that your intuitions are completely baffling to me. I cannot not assign plausibilities to propositions, and I’d need a very compelling reason to want to not assign plausibilities to things! I feel, intuitively and intimately, that I’m “more certain” of some things than of other things, and when a person says that they assign a ~¼-½ probability to a thing, it means that, historically, when they felt this uncertain about the thing, the thing turned out to be true 25% to 50% of the time. That’s the most direct and simple and obvious interpretation of the numbers: if I say that a thing has probability x%, then that should mean that, historically, when my subjective sensation of uncertainty was similar to this, the thing was true about x% of the time.

Besides, if you push “we have experience in this domain, and it leads to this sort of predictive model” to an extreme, there is no such thing except for the very basic laws of physics. There is no such thing as a bunch of identical days so that you know that when your day is identical to that day, it rains x% of the time. There is no domain and predictive model that doesn’t rely on distributions over lack of information and isn’t the laws of physics, and any apparent qualitative difference between assigning plausibilities to “big philosophical questions” and “near future political outcomes” is patently false, and is simply a quantitative difference with differing resolutions.

No two days are identical, no two political scenarios are the same, the universe isn’t cyclic and doesn’t repeat its patterns exactly. Every plausibility assignment is about a “truly new” thing like superhuman AI, because every single day is a “truly new” day. The fact that you lump a bunch of completely different situations together is an artifact of your representation, but no two Bernoulli urns are the same (and in fact, a single Bernoulli urn doesn’t remain “the same” from one Planck time to the next), so claiming some qualitative difference between assigning plausibilities to superhuman AI, political moves, the weather, or balls in a Bernoulli urn, sounds like a very naïve and unreductionistic view of the world that’s projecting different resolutions of the map on the unified and universal territory.

I’m not claiming there’s a qualitative difference in the territory in this case (though there may be in some cases, which I’ll mention at the end of the post).

But the quantitative difference you refer to is not unimportant.  Some predictions involve chaining together fairly few inferences and it is easy to home in on a number.  I am very very confident that if I dropped a ball on the floor right now, it would obey the laws of Newtonian mechanics.  I would bet a lot on that (given suitably fair definitions of terms).  I have a “subjective plausibility” for that proposition that is very close to 100%, I suppose.

However, predictions become less certain once they involve the further future, or depend on speculative or dubious theories.  When predictions involve the far future and/or chain together speculative theories, I feel highly uncertain about them.

But – and this is the point I’m making – that kind of “uncertainty” is a different subjective feeling from the “I know either outcome is equally likely” uncertainty I feel about flipping a fair coin.

If asked, “what is your subjective probability for this fair coin landing H,” I will instantly say “50%” and would happily insist on refusing any bet not consistent with that.

If asked, “what is your subjective probability that between the hours of 8 and 9 PM EST on December 3, 2041, at least 21 people will be eating strawberry ice cream,” I will first of all feel a state of total uncertainty corresponding to the fact that this is not the kind of question I have ever thought about before.  If asked to quantify this “feeling of uncertainty” I might say “50%,” but that wouldn’t be a confident 50% like the one with the coin; it’d be an attempt to express a state of total under-preparedness to answer the question.  If asked to bet immediately I guess I would go with that 50%.  If asked to bet after a long period of time, I might devote myself to studying the various world-historical factors that might bear on the question of why people might or might not be eating ice cream in 2041.

That initial “50%” in the latter case is not a real “subjective probability” – it’s an expression of “I do not feel that I have integrated enough information to provide a meaningful answer to this question.”  Or: “I’m sure there are various world-historical scenarios in which the proposition would be either very likely or very unlikely, and to find out how I ‘really feel’ about the question I would have to put some work into deciding what I think about the likelihood of those various scenarios.”

In the coin case I feel like I have all the relevant arguments in mind, and they produce a kind of positive knowledge which corresponds to knowing the probability “50%.”  In the “ice cream” case I feel a state of complete, negative uncertainty which I would be wary of expressing as any probability, and which may only coalesce into a probability once I’ve spent some time integrating what I actually know, or collecting new knowledge.  These are not the same feeling.

To clarify the difference a bit more: I feel wary of accepting any bets about speculative possibilities like the ice cream case, because of the sense that the uncertainties involved swamp any ability I might have to home in on any number between 0% and 100%.  My “50%” seems a bad bet because of the conjunction fallacy: if you said “what is your subjective probability that between the hours of 8 and 9 PM EST on December 3, 2041, at least 21 people will be eating strawberry ice cream, AND ALSO the population of China will be greater than [big number]” then surely the probability of this must be lower than 50%, right?  But knowing as little as I do about population growth projections, I am unable to add any new insight here; if you had given me the “ice cream AND China” proposition originally, I would have grudgingly said “50%” there too, as a proxy for “I have no clue.”  My responses here don’t represent any kind of coherent thinking, only “pure negative uncertainty.”  Given enough time I might be able to think my way to a real subjective probability, but maybe not.
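The conjunction point above is just the probability axiom P(A and B) ≤ P(A): a coherent assigner can’t give 50% to both the ice-cream claim and the ice-cream-AND-China claim unless the China claim is treated as certain. A tiny numerical illustration, with entirely made-up numbers:

```python
# P(A and B) = P(A) * P(B given A), which can never exceed P(A).
p_ice_cream = 0.5        # made-up plausibility for the ice-cream claim
p_china_given_ice = 0.9  # made-up conditional plausibility for the China claim
p_both = p_ice_cream * p_china_given_ice

assert p_both <= p_ice_cream
print(p_both)  # 0.45 -- strictly less, unless the second claim is certain
```

Answering “50%” to both questions therefore can’t represent coherent probabilities; it is, as the post says, a proxy for “I have no clue.”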

tl;dr my subjective feelings about very inferentially distant propositions don’t feel like subjective plausibilities, and I get the sense that this is true for most people.  The way Bayesians quote numbers seems strange to many people, not just because it is unfamiliar, but because it seems to conflate the “sure uncertainty” one has about a fair coin with the “unsure uncertainty” one has about inferentially distant events.

It seems like you’re one of “nature’s plausibilists” – someone whose mind just naturally assigns a subjective plausibility to every proposition.  Which is pretty cool, don’t get me wrong, but my hunch is that this is not a common trait.  And the ultimate justifications for plausibilism are intuition-based, as ultimate justifications in philosophy tend to be.

(Last point: I think there’s an even more fundamental state of uncertainty one can have, which is being uncertain about whether a proposition even describes a state of the territory at all.  For instance, if you asked me whether I thought Max Tegmark’s “Mathematical Universe Hypothesis” was true, I would feel a fundamental uncertainty caused in part by the fact that I’m not even sure yet what it would mean for it to be “true,” or whether that’s a meaningful question.  That is, uncertainty about whether or not a proposition is vacuous is a second kind of uncertainty that I don’t think can be captured well with plausibilities.  I have no idea if the Mathematical Universe Hypothesis is “plausible”; I don’t even know if it can be true or false, and will have to do more thinking to resolve that question.)

(via hot-queer-rationalist-deactivat)



This can be viewed as a special case of constructing a new creature with similar goals and more powerful arms, and then replacing yourself with that creature.

Shifts within values of Eudaimonian civs seem of relatively low importance compared to gains from converting empty or paperclip stars to somewhere inside Eudaimonia.

chroniclesofrettek asked: Were you a regular on the TVTropes forums a few years ago, before you read any LW stuff?

No.  I first read LW stuff back in 2008 (back when LW didn’t exist and it was just Overcoming Bias), and I’ve never been on the TVTropes forums.

raginrayguns reblogged your post and added:

I saw it as him slandering authors that I’m…

I agree with everything you’ve said.  It sounds like a bad article and below Auerbach’s usual standard.

Less Wrong is one of those topics that is so interesting from a non-sensational standpoint (IMO) that it would be fascinating to see in-depth coverage from that standpoint, though so far it hasn’t happened.  (The Betabeat article was relatively non-sensational, but not very in-depth; I’m imagining something like the length and seriousness of a New Yorker feature that goes into the core ideas in detail and talks about demographics, changes in the culture over time, various religion-related angles, etc.)

raginrayguns replied to your post: anonymous said:Hey I just wanted …

that article made me so mad. “What you are about to read may sound strange and even crazy, but some very influential and wealthy scientists and techies believe it.” Like, who. People in the comments section got the impression Ray Kurzweil does

Yeah, that sounds like BS.  I don’t think anyone (as far as we know) believes in the Basilisk in 2014?  Except maybe Roko himself?

I think it’s mainly interesting as an example of Bayesians behaving badly (getting Pascal’s Mugged, stating explicit prior probabilities like “10^(-9)”) and for the light it sheds on Yudkowsky (he said it was the kind of thing to be extremely conservative about until TDT was finished, which seems like getting Pascal’s Mugged in itself, in that he was deriving behavioral implications [“be conservative”] from the direness of the prospect even though it was so far-fetched).

I like Auerbach because he’s a good generalist: he’s worked in software engineering and has also read a lot of literature, so he tends to be good at bridging the science/humanities divide, and can write on a huge range of topics, usually well, making connections some people wouldn’t make.  (Incidentally, he wrote a great blog post about Ada or Ardor, one of my favorite books [warning: spoilers].)  The downside is that sometimes he can be pretty superficial about any one given topic.

Anonymous asked: Hey I just wanted to tell you that if you want to read it, the paywall on SA is down right now and you can read the mock thread. Warning: many, many people coming in and going "wait explain roko's basilisk to me again?"

Thanks!

(I know David Auerbach recently wrote a Slate article on Roko’s Basilisk, which was slightly weird for me since I’ve known and liked Auerbach’s writing for a long time but felt no desire to read the article because it’s such an old issue to me)

drewlsummitt:

nostalgebraist:

The Roko thread also contains some choice examples of Bayesians trying to think about far-out probabilities despite the obvious problems involved: someone (somehow!) estimates a relevant probability as “10^(-9) or less” and Roko replies:

Why so small? Also, even if it is that small, the astronomically large gain factor for each % decrease in existential risk can beat 10^(-9). 10^50 lives are at stake.

I’ve never really heard a good response to pascal’s mugging but I don’t have the maths to understand if a response was good. Are you aware of a mathey response to pascal’s mugging? 

I’m not aware of a mathy response but I imagine it would have to be some formalization of the idea that “we’re bad at estimating these kinds of numbers and the resulting uncertainty is too big to make any of these kinds of conclusions worth acting on.”  And I’m not sure that argument needs to be formalized with math?

(It is hard to put that in a Bayesian framework which assumes you do have a probability estimate and have to take it to its logical conclusion, but my response to that is “so much for the Bayesian framework.”  I think the Bayesians must have some way of expressing the response in their own terms but I don’t know what it is)
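To make the arithmetic in the quoted exchange concrete: once you accept both the 10^(-9) probability and the 10^50 stakes, the expected value is astronomical, and it stays astronomical even if the probability estimate is off by many orders of magnitude. That is the formal shape of the mugging, and why the objection has to target the estimate itself rather than the multiplication:

```python
# Expected value of the quoted Pascal's-mugging scenario.
p = 1e-9      # the quoted prior probability
lives = 1e50  # the quoted stakes

expected = p * lives
print(f"EV = {expected:.1e} lives")  # ~1.0e+41

# Even shrinking the probability estimate by 30 orders of magnitude
# still leaves an enormous expected value.
for orders_off in (0, 10, 30):
    ev = p * 10 ** -orders_off * lives
    print(f"estimate too high by 10^{orders_off}: EV ~ {ev:.1e} lives")
```

This is only a sketch of the arithmetic in the quote, not a defense of acting on it; the post’s point is precisely that the input number carries no real information.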

(via drewlsummitt-deactivated2014110)