[Note: grandiose, partly aesthetic, not entirely literal]
Reading some of the comment threads on/about Sarah’s post “EA has a lying problem” (which is a very good post IMO).
The other day, in a long rambling Facebook comment, I said I liked the way EA (or at least GiveWell) only assumes that its audience comes in with certain widely shared moral values, and not with strongly fixed preferences toward specific causes or people. That their audience is morally committed, but uncertain about facts (and aware of it) – they want to help but they don’t claim to already know how. There is a humane quality to it, an openness to new testimony from previously unfamiliar parts of humanity. The downtrodden feel no less pain if they do not happen to feature in a prospective donor’s own provincial mental map (yet).
But then, the point of asking these questions, with this admirable openness of mind, is to eventually get answers, and make the mind less open. If you ask how best to help, and you get a satisfying answer, you now “know” how best (!) to help. If you ask to be told the most pressing causes, you’ll get some very pressing answers, and you may find them more pressing than your curiosity about the unknown.
It’s the explore/exploit dilemma, and eventually you have to start exploiting.
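For readers unfamiliar with the dilemma, it comes from the multi-armed bandit problem: an agent repeatedly chooses among options with unknown payoffs and must balance trying unfamiliar ones against replaying the best one found so far. A minimal sketch of the standard epsilon-greedy strategy (the arm payoffs and parameter values here are illustrative, not from the post):

```python
import random

# Epsilon-greedy bandit: one agent trading off exploration of
# unfamiliar arms against exploitation of the best arm found so far.
def epsilon_greedy(true_means, rounds=10000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)    # pulls per arm
    totals = [0.0] * len(true_means)  # summed payoffs per arm
    reward = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            # explore: pick a random arm
            arm = rng.randrange(len(true_means))
        else:
            # exploit: pick the arm with the best payoff estimate so far
            estimates = [t / c if c else 0.0 for t, c in zip(totals, counts)]
            arm = max(range(len(true_means)), key=estimates.__getitem__)
        payoff = rng.gauss(true_means[arm], 1.0)  # noisy observed payoff
        counts[arm] += 1
        totals[arm] += payoff
        reward += payoff
    return reward / rounds

# With arms of mean 0.2, 0.5, and 0.9, a modest exploration rate
# usually identifies and mostly plays the 0.9 arm.
```

The point the post makes is visible in the `epsilon` parameter: every round spent exploring is a round not spent exploiting, because a single agent has to do both.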
But in the classic explore/exploit dilemma, there’s just one agent, who has to balance the two. For them, more exploring means less exploiting. But the world contains more than one person, so why not divide the labor? The “researchers” (in this case charity evaluators, but the concept applies much more widely) maintain their open minds and stay on the lookout for new possibilities. The “advocates” learn continuously from the researchers, but commit themselves to specific issues in ways that would go against the whole purpose of the endeavour if they were researchers.
It seems to me like there is much havoc in EA because people want “EA” to be both researcher and advocate, and refuse to divide the labor. The attitude implicitly says that “EA” should be a concerted whole that acts like a single agent in an explore/exploit dilemma, where the “effectiveness” consists in the tendency to do well on the dilemma as a whole. But then, to show effectiveness, you need to show you have an especially good strategy for the version of the explore/exploit dilemma in play, which is actually pretty tough to do. Discouraging harsh internal criticism is a move that suppresses exploration in the name of exploitation, and precisely when and where to perform such moves is the whole of the dilemma. If you’re making these calls blindly, or without awareness of the problem you’re trying to solve, you’re probably not using some especially clever strategy to get better (“effective”) results than others who have faced the problem.
On the other hand, if your strategy is just “individuals can keep exploiting like they always do, but in the meantime we’ll keep exploring and anyone looking to exploit can always check out our latest evaluations” – well, that does seem like a distinctive thing that someone ought to be doing. That was how I originally understood the claim to “effectiveness” – the ability to cheat the dilemma by peeking at the notes of someone who’s been happily exploring without caring about their score. That’s a real thing, a good thing.

