
Scott Adams Blog →

prophecyformula:

mradilbert:

[image]

More on this week’s theme from the Dilbert blog. Well predicted. *ahem*

Yeah, the “the Powers That Be are going to very carefully rig the election results” theory is wackadoo. And of course if the predictions it makes fail, Adams should consider that evidence against the theory (which he’s… kinda not? it’s hard to tell). But something about this post really rubs me the wrong way – it’s using “check your predictions” as a kind of ideological weapon rather than an essential practice for anyone trying to predict better.

The spirit I was reading it in was “Scott Adams likes to talk about how exceptionally reliable his predictions are, so it’s only fair to point out when he gets something unambiguously wrong.”

(I’m getting my impression of Adams here largely from the fact that he retrospectively gave himself a perfect 9/9 on his 2015 predictions, even though this involved misrepresenting most of them)

If Adams were just some pundit I disagreed with ideologically, then things might be different, but I think it’s fair play when he’s set himself up as King Hedgehog, Lord of All Predictors.  (“As regular readers know, 100% of my political predictions for 2015 were correct, thanks to the Master Persuader filter.”)

(via epistemic-horror)

@slatestarscratchpad‘s review of Superforecasting kind of confuses me, because he’s like “this book is about how rationality makes you better at stuff and we already knew that,” but I’m actually not clear on how we already knew that?

There is a lot of empirical evidence out there on how cognitive biases exist, and how it’s possible to resist them to some extent, but is there much evidence out there on how much practical impact this has?  This is one of the things that has always bothered me about LW/CFAR – there are a lot of citations to Kahneman et al. about the existence of the biases, but there doesn’t seem to be much evidence-based or quantitative argument for the idea that resisting bias is a powerful technique.  (Maybe I just missed it?)

To put it a bit flippantly, Scott’s review reads to me like “we already knew rationality was powerful because a guy wrote some blog posts asserting it was powerful and wrote a novel where a kid does amazing stuff using rationality, but hey, here is some serious quantitative evidence about how powerful rationality is – as if we needed that”

Speaking of Seattle, while visiting my parents’ house earlier this winter I found these old note cards from (presumably) school assignments I did when I was a kid, which are kind of funny in retrospect.

The three cards read:


What is math?

Math is the study of solveing problems, in a systematic way, the princeaple of ever-Going numbers.


What does it mean to be good in math?

To be good in math is to know well +, -, ×, ÷, Frations, algabra, etc.  To be good in math is to know well the ansers to many problems.


How do you make predictions about [whether] something will happen.

I usuly figure out what % something will or [figure it] out using other evidence.

This book about the French Revolution (Simon Schama’s Citizens) talks a whole lot about the cultural and ideological stuff that had taken hold in the decades leading up to the revolution (cult of sentiment, Rousseau, lawyers modeling themselves on classical orators, etc.) and it’s very easy to think “hmm, clearly one can predict political upheavals from cultural precedents, it would be naive not to do that right now”

At which point I need to remind myself that people are always predicting revolutions and the like from cultural precedents, and if they’re occasionally right, so is a stopped clock

dagny-hashtaggart:

Petition to add the phrase “y’all motherfuckers need Tetlock” to the vernacular.

I have sometimes contemplated saying “go to tetlock church, lol” when I see someone saying that their unusual ideology lets them confidently predict consequences of current events that no one else is predicting

renormalization and long-term political judgement

prophecyformula:

(epistemic status: highly speculative. i am not a physicist. need to RTF EPJ. might be impossible to turn into a testable hypothesis; almost surely impossible to turn into a hypothesis testable over a natural human lifespan.)

A little while back, I read Phil Tetlock’s Superforecasting. I didn’t have much to say about it at the time, mostly because @nostalgebraist‘s posts on the subject expressed my views pretty cleanly. But I’ve been letting the ideas I picked up from the book marinate since.

One of the more pessimistic conclusions Tetlock drew from his earlier research – the stuff in Expert Political Judgment – is that successfully forecasting on a longer time scale than a couple of years is basically impossible. I don’t remember the exact time horizon, but let’s say that at five years out, nobody did better than chance. This is believable – political systems are chaotic and prone to unpredictable shocks. Even short-term forecasts can change on a moment’s notice; something like last week’s Paris attacks can come out of nowhere and radically alter the political landscape.

But now let’s shift gears for a moment. If I want to predict the weather tomorrow, that’s pretty easy. If I want to predict the weather in seven days, that’s a good bit harder. If I want to predict the weather in thirty days, I can’t really do any better than the long-term average.

So you might think that it’d be hard, or impossible, to predict the climate fifty years out. But we think we can do so to some degree of accuracy. Why? Well, as we move from “weather” to “climate” and from “two weeks” to “fifty years,” we’re averaging over length and time scales, and renormalization comes into play. As su3 put it: “the underlying model is chaotic, but on much longer time scales the system becomes more well behaved.”

I think the analogy should be pretty clear. Maybe over the time scale of 5, 10, 20 years, political systems are too chaotic to predict – but there are nevertheless nontrivial things we can say about politics on 50 or 100-year timescales. Certainly our long-term predictions won’t be as precise as the kind of things we can say about the next six months. But vague statements like “Cthulhu always swims left,” if they can be made into testable predictions, might – counterintuitively – be the kind of thing that people can accurately forecast over long time periods.
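The “chaotic step-by-step, well-behaved on average” point can be made concrete with a toy system. This is my own illustrative sketch, using the logistic map rather than anything from EPJ or climate science: two trajectories that start almost identically become unpredictable pointwise within a few dozen steps, while their long-run time averages still agree closely.

```python
# Toy illustration: a chaotic system (the logistic map, x -> r*x*(1-x))
# is unpredictable step-by-step, but its long-run average is stable.

def logistic_trajectory(x0, r=3.9, n=100_000):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # almost identical starting point

# Short-term "weather": the tiny initial difference blows up within
# a few dozen iterations, so pointwise prediction breaks down.
short_term_gap = max(abs(x - y) for x, y in zip(a[40:60], b[40:60]))

# Long-term "climate": time-averages of the two runs nearly coincide,
# because both trajectories sample the same invariant distribution.
mean_a = sum(a[1000:]) / len(a[1000:])
mean_b = sum(b[1000:]) / len(b[1000:])

print(short_term_gap)        # large: the trajectories have decorrelated
print(abs(mean_a - mean_b))  # small: the average is still predictable
```

The parameters here (r = 3.9, a 1e-9 perturbation) are just convenient choices that put the map in its chaotic regime; the qualitative picture doesn’t depend on them.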

Neat idea, although given how hard climate science is, I’m very pessimistic about getting any definite results out of it.

(In climate science we do have some simple pencil-and-paper theory stuff, like the greenhouse effect, but to really get any confident predictions we need to run computer models.  [N.B. there are plenty of other pen-and-paper insights, many of which go into making the computer models faster.]  And even though the models are based on well-known physics, they still involve a lot of simplifications that cause people to argue over how good the results really are.  And of course plenty of people just judge the result based on their politics anyway.

But in climate, since we know the “true equations” even if we can’t represent them exactly on a computer, we at least know which features the computers are worst at capturing, and can reason on the basis of that.  With politics / world history, any computer model would have to be radically simplified and we’d never be very sure we’d represented anything with any given degree of accuracy.  And while one could hold out hope for pen-and-paper theory rather than numerics, the fact that we can’t even make much progress that way in the case of climate should make us pessimistic.

In short, what I’m trying to say is that with climate we are “just barely” able to get insights with any kind of confidence – it takes a combination of our best computers, clever algorithms, theoretical arguments to speed up the models, other theoretical arguments that make us more sure we aren’t leaving anything out, etc.  We’re clinging onto certainty by our fingertips, and if the physics were any less well-known we’d fall off.)

shlevy asked: Did the GJP allow people not to answer/give some response akin to "knightianly uncertain"? Thinking of getting the book based on your description here but am a bit wary if they penalized people's rationality score when they simply weren't informed about a particular domain. Unless there's a good argument presented there that refusing to give an answer/confidence interval is actually irrational as such?

My understanding is that in the GJP (see e.g. the account here), forecasters only answered the questions they felt like answering.

The book is unfortunately not very clear (unless I missed it) on the issue of how forecasting skill related to numbers of questions answered and choice of questions.  There is a lot of discussion of the relative importance of domain expertise, “news junkie” traits, general intelligence, and general rationality.

(Also, it seems like most of the superforecasters were relatively new to most of the topics they forecasted on, and needed to do a fair amount of research.  You’d have to be a serious geopolitical obsessive to know very much about very many of the GJP questions beforehand, so I don’t think variation in pre-existing expertise is much of an issue.)

philip tetlock and the methods of rationality

bgaesop:

nostalgebraist:

bgaesop:

dataandphilosophy:

nostalgebraist:

I’m getting close to the end of Tetlock’s “Superforecasters.”  It’s a fascinating book, and I recommend reading it if it sounds at all interesting to you.  (Tetlock’s earlier Expert Political Judgment is also great, but more technical, and Superforecasters contains a concise and readable summary of its results, plus new ones.)

The book is highly relevant to a lot of the stuff I’ve been rambling about on here for the last few years, especially under the “big yud” tag.  One of my various issues with Less Wrong and Eliezer Yudkowsky is that although they correctly identify that human thought involves various systematic biases and that these biases can be partially mitigated through training and careful self-discipline, they’ve never really proven that you can get all that much out of this, besides some sort of philosophical satisfaction.  I’ve poked fun at LW-rationalists for acting like “rationality gives you superpowers” without ever really demonstrating any superpowers.  (Indeed, the cases of ostensible superpower demonstration given by Yudkowsky are generally just plain embarrassing, like the MWI thing or the AI Box thing.)

It would be a nice, clean story if we could just conclude there and then that resisting cognitive biases does not, in fact, give you superpowers.  Unfortunately for that nice, clean story, Tetlock’s Superforecasters is a very convincing book, by a credentialed expert with an illustrious career, about how … resisting cognitive biases can give you superpowers.

(Specifically, can allow you to forecast world events more accurately than professional government analysts with access to classified information, even if you’re just some regular Joe or Jane with no geopolitics background, forecasting as a way to amuse yourself after retirement.  The way you can achieve this feat, according to Tetlock, is by training yourself to avoid exactly the same sort of biases Yudkowsky and Hanson talk about avoiding.  Tetlock doesn’t use the phrase “aspiring rationalist,” but he might as well.)

Reading Superforecasters after reading so much Yudkowsky-and-co. over the years is a truly strange experience; the book makes a better case for Yudkowsky’s programme than anything Yudkowsky himself has ever written, and presents new results pointing in the direction of the claims Yudkowsky has spent years making with (in my mind) too little evidence to back him up.  It gives me hope that maybe, someday, someone can do the whole “Overcoming Bias” project right, and something (besides Harry Potter fanfic and mildly interesting math logic papers) will come of it.

(One of the things that has always seemed odd to me about HPMOR – or the third of it I’ve read, anyway – is how little it has to do with rationality.  Many of the chapters are named after cognitive biases, but the story tends to have very little to do with cognitive biases, and HPMOR’s Harry – a hyperconfident, Hedgehog-not-Fox thinker – is the opposite of a good role model for aspiring bias-avoiders.  I don’t think HPJEV would make a good superforecaster, but perhaps a superforecaster could make a good HPJEV, if you see what I mean.)

If you want a really terrifying experience, read Psychology of Intelligence Analysis, a manual by the CIA that focuses on techniques and failure modes for the beginning analyst. And yes, they mention cognitive biases and heuristics and some techniques to deal with them that could come straight out of the sequences.

What’s terrifying about that?

Superforecasters got its data from the Good Judgment Project, right? I was in the top 1/3rd of that by putting 55% confidence on every answer

Yes, the book is based on the GJP.  Most people, even experts, do really badly on tasks like the GJP – being in the top 1/3 by putting 55% confidence on every answer seems (at least qualitatively) in line with the Expert Political Judgment result that many experts do no better than someone picking answers at random.

Yeah, I agree.

I think the GJP’s structure was pretty flawed, though. I’m confident I could have been in the top 10% really easily by just following their suggestions and updating my estimates as the deadlines neared. “Will North Korea test a new missile by July 1” is much easier to answer confidently on June 30 than on January 1. I wonder how many people who did really well just actually did that, instead of giving their initial estimate and then never updating it, like me and apparently almost everybody else.

I wondered about that too.  Tetlock does say that if you look at just the initial predictions made by “superforecasters,” they’re still 50% better than the average (p. 166), where earlier (p. 102) he’d said that superforecasters were 60% better than average.  (I think these % numbers refer to Brier scores, so we’d need to look at the distribution of Brier scores among GJP forecasters to know how to translate them into figures like “top 10%.”)

He also mentions that superforecasters can “see further into the future” than average (“superforecasters looking out three hundred days were more accurate than regular forecasters looking out one hundred days,” p. 102), which (if I’m understanding it correctly) means that their skill can’t just involve stuff like answering on June 30 rather than January 1.

More generally, the impression I get from the book is that Tetlock thinks “superforecaster” is a natural category because people who do well on any of these measures tend to do better on the others, with no one type of skill uniquely accounting for their overall success – so that, yes, the people who do well tend to update more, but their success isn’t just due to the “trick” of updating more.  (Or, similarly, superforecasters tend to use finer-grained probability distinctions than everyone else, but if you coarse-grain everyone’s predictions, the superforecasters are still better, so their success isn’t just due to the “trick” of using finer-grained probability distinctions.)
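For anyone who hasn’t seen the scoring rule behind all these numbers: here’s a quick sketch of the two-category Brier score in its 0-to-2 form (0 = perfect, 2 = maximally wrong), which I believe is the convention the GJP used. The “55% on everything” hedger and the calibrated 90% forecaster below are my own illustrative stand-ins, not figures from the book.

```python
def brier(p_event, happened):
    """Two-category Brier score: 0 is perfect, 2 is maximally wrong.
    p_event is the probability assigned to 'yes'; happened is True/False."""
    o = 1.0 if happened else 0.0
    return (p_event - o) ** 2 + ((1 - p_event) - (1 - o)) ** 2

# A hedger who says "55% yes" on everything, on a question set where
# the 'yes' side in fact occurs 55% of the time:
outcomes = [True] * 55 + [False] * 45
hedger = sum(brier(0.55, o) for o in outcomes) / len(outcomes)

# A sharp, well-calibrated forecaster who puts 90% on whichever side
# actually happens (so 90% 'yes' when it occurs, 10% 'yes' when not):
sharp = sum(brier(0.90 if o else 0.10, o) for o in outcomes) / len(outcomes)

print(hedger, sharp)  # the sharp forecaster scores far lower (better)
```

This is also why the coarse-graining check matters: rounding everyone’s probabilities to, say, the nearest 10% and recomputing these averages tells you how much of the score gap came from fine-grained probability use rather than from actually knowing which way things would go.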

Admittedly, as @more-whales lamented yesterday, this is a popular-audience book (unlike EPJ), without any details about data analysis (the most you get is stuff like “50% better”), so if you really want to look at the results critically you’d have to look at Tetlock and co.’s original papers.  I definitely want to read them at some point.

(via bgaesop-deactivated20160701)

(Incidentally the title of the book is actually “Superforecasting,” not “Superforecasters.”  In my defense, in the actual book, he uses the word “superforecaster” all the time and “superforecasting” almost never)