unvirtuous question: can I get away with reading _Superforecasters_ and /not/ EPJ? Or will I come out hopelessly biased in some way?

You can, and in fact I’d recommend it.  EPJ is a much more academic book and includes more data and nitty-gritty stuff per unit of conclusions drawn.  You can always go back and read EPJ afterwards if you want more detail on the EPJ stuff.

philip tetlock and the methods of rationality

I’m getting close to the end of Tetlock’s “Superforecasters.”  It’s a fascinating book, and I recommend reading it if it sounds at all interesting to you.  (Tetlock’s earlier Expert Political Judgment is also great, but more technical, and Superforecasters contains a concise and readable summary of its results, plus new ones.)

The book is highly relevant to a lot of the stuff I’ve been rambling about on here for the last few years, especially under the “big yud” tag.  One of my various issues with Less Wrong and Eliezer Yudkowsky is that although they correctly identify that human thought involves various systematic biases, and that these biases can be partially mitigated through training and careful self-discipline, they’ve never really shown that you can get all that much out of this, besides some sort of philosophical satisfaction.  I’ve poked fun at LW-rationalists for acting like “rationality gives you superpowers” without ever really demonstrating any superpowers.  (Indeed, the cases of ostensible superpower demonstration given by Yudkowsky are generally just plain embarrassing, like the MWI thing or the AI Box thing.)

It would be a nice, clean story if we could just conclude there and then that resisting cognitive biases does not, in fact, give you superpowers.  Unfortunately for that nice, clean story, Tetlock’s Superforecasters is a very convincing book, by a credentialed expert with an illustrious career, about how … resisting cognitive biases can give you superpowers.

(Specifically, can allow you to forecast world events more accurately than professional government analysts with access to classified information, even if you’re just some regular Joe or Jane with no geopolitics background, forecasting as a way to amuse yourself after retirement.  The way you can achieve this feat, according to Tetlock, is by training yourself to avoid exactly the same sort of biases Yudkowsky and Hanson talk about avoiding.  Tetlock doesn’t use the phrase “aspiring rationalist,” but he might as well.)

Reading Superforecasters after reading so much Yudkowsky-and-co. over the years is a truly strange experience; the book makes a better case for Yudkowsky’s programme than anything Yudkowsky himself has ever written, and presents new results pointing in the direction of the claims Yudkowsky has spent years making with (in my mind) too little evidence to back him up.  It gives me hope that maybe, someday, someone can do the whole “Overcoming Bias” project right, and something (besides Harry Potter fanfic and mildly interesting math logic papers) will come of it.

(One of the things that has always seemed odd to me about HPMOR – or the third of it I’ve read, anyway – is how little it has to do with rationality.  Many of the chapters are named after cognitive biases, but the story tends to have very little to do with cognitive biases, and HPMOR’s Harry – a hyperconfident, Hedgehog-not-Fox thinker – is the opposite of a good role model for aspiring bias-avoiders.  I don’t think HPJEV would make a good superforecaster, but perhaps a superforecaster could make a good HPJEV, if you see what I mean.)

itsskeledict:

nostalgebraist:

Although a lot of Tetlock’s elite superforecaster strats are like “think about the base rate!  use the Outside View!  respond to new evidence!  think in probabilities rather than in detailed, superficially convincing narratives!”

And unfortunately it seems that if you (say) create a popular blog focused largely on these topics, you’ll end up with a readership that is still terrible at calibration.

i don’t really have a horse in this race, like i’d be willing to believe that most LWers are poorly calibrated, but anecdotally one of my large software engineering classes (~100 people) had a calibration exercise where we were asked to give our 90% confidence intervals on a set of ten questions, and i ended up the only person in the class who got more than five correct, and only two people got five; the majority were clustered around 3 and 4. (iirc i had 8, where it should have been 9, but still). if it’s not LW that’s responsible for that, i don’t know what is. has the survey data been compared to calibration in the general population?

if the survey data isn’t that much better than the general population… i guess i might be an outlier in that i’d attended meetups where we actually did calibration exercises a few times, instead of just being familiar with the concept? if that’s the case, it’s surprising to me that more LW readers don’t even once try calibration exercises.
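For what it’s worth, the exercise described above is easy to score mechanically.  A toy sketch (all intervals and answers invented, not taken from any real quiz):

```python
# Toy scoring for a 90%-confidence-interval calibration quiz.
# Each entry is ((low, high), truth): the respondent's 90% interval
# and the actual answer.  All numbers are invented for illustration.
responses = [
    ((100, 300), 250),     # hit
    ((10, 50), 42),        # hit
    ((1, 5), 7),           # miss
    ((1000, 5000), 3200),  # hit
    ((0, 10), 9),          # hit
    ((200, 400), 450),     # miss
    ((30, 60), 33),        # hit
    ((5, 15), 12),         # hit
    ((70, 90), 88),        # hit
    ((0, 2), 1),           # hit
]

# A well-calibrated respondent's 90% intervals should contain the
# truth about 9 times out of 10; most people score far lower.
hits = sum(low <= truth <= high for (low, high), truth in responses)
print(f"{hits}/{len(responses)} intervals contained the truth")  # 8/10
```

This invented respondent scores 8/10: slightly overconfident, which is about the best result reported in the class above.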

Also, along the same lines, I think “calibration” in the narrow sense of being able to give numerical probability estimates that line up with actual rates is not the same as “calibration” in the wide sense of acting sensibly (i.e., among other things, acting as one would on the basis of well-calibrated estimates).

I think one possible danger in interpreting Tetlock’s results is overestimating the degree to which his subjects are learning to actually make good judgments, as opposed to just learning to interface well with the initially unfamiliar task of assigning 0%-to-100% probabilities (and doing it sensibly).  Which, in itself, is an extremely important task for governments, corporations, etc., but it’s not clear how useful it is for everyday life.  I wouldn’t be surprised if the superforecasters who know exactly when to answer 53% instead of 50% aren’t able to translate that into any kind of improvement in their day-to-day judgment, and, conversely, that people with very good day-to-day judgment may not be good at the stylized task of estimating probabilities, at least not without training.

I guess I’m partly talking to myself here – I need to caution myself not to get too excited and decide we all need to turn ourselves into superforecasters, because most real-life tasks are just not that much like the Good Judgment Project.

(via itsbenedict)

Doug knows that when people read for pleasure they naturally gravitate to the like-minded.  So he created a database containing hundreds of information sources – from the New York Times to obscure blogs – that are tagged by their ideological orientation, subject matter, and geographical origin, then wrote a program that selects what he should read next using criteria that emphasize diversity.  Thanks to Doug’s simple invention, he is sure to constantly encounter different perspectives.

From Tetlock’s Superforecasters (Doug is one of the titular superforecasters)
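Tetlock doesn’t show Doug’s actual program, but the description suggests something like the following sketch; all source names and tags here are invented, and “tag novelty relative to recent reads” stands in for whatever diversity criteria Doug actually used.

```python
import random

# Hypothetical reconstruction of Doug's reader: sources tagged by
# ideological orientation, subject matter, and geographical origin,
# with the next read chosen to maximize diversity.  Everything below
# is invented for illustration.
SOURCES = {
    "NYT op-ed": {"ideology": "left", "subject": "politics", "region": "US"},
    "finance blog": {"ideology": "right", "subject": "economics", "region": "US"},
    "Kenyan daily": {"ideology": "center", "subject": "politics", "region": "Africa"},
    "ML newsletter": {"ideology": "center", "subject": "tech", "region": "US"},
}

def novelty(tags, history):
    """Count how many of a source's tag values are absent from recent reads."""
    recent = {value for read in history for value in SOURCES[read].values()}
    return sum(value not in recent for value in tags.values())

def pick_next(history):
    """Pick a source with maximal tag novelty, breaking ties at random."""
    best = max(novelty(tags, history) for tags in SOURCES.values())
    candidates = [name for name, tags in SOURCES.items()
                  if novelty(tags, history) == best]
    return random.choice(candidates)

# After reading the left-leaning US politics source, every other source
# ties at novelty 2, so the picker steers away from more of the same.
print(pick_next(["NYT op-ed"]))
```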