The Craft & The Community - A Post-Mortem & Resurrection
This post is gigantic (I admit I only skimmed it), contains wild and massive claims made with little to no firsthand experience, could stand to have the personal elements more clearly delineated from the ostensibly objective-descriptive elements, and I don’t know if even half of it would withstand any kind of close inspection –
– but, once again, I’m very pleasantly surprised that the rationalist community is talking openly about why the “go out and change the world” thing did not happen, rather than sweeping it under the rug
(As for actually going beyond mere talk: that's the part I'm not yet optimistic about.)
There’s an assumption buried right near the beginning that Taking Ideas Seriously means “the Rationality Community” should be a center of agency, rather than a social scene that people hang out in sometimes because it’s fun (while fulfilling positive moral obligations and the like elsewhere.) This is just flabbergasting to me.
I think that assumption makes sense because the post is springboarding off the Craft and the Community sequence, in which Eliezer dreams (in a way he admits is idealistic) about IRL rationalist groups that would serve as world-improving “taskforces,” and/or develop rationality through experiment and make possible much higher magnitudes of rationality, and/or “identify as elements of the Common Project of human progress, the Neo-Enlightenment,” etc.
It’s all quite vaguely imagined and could be interpreted in various ways, but there was clearly a dream there that rationalists might start getting together IRL and doing big, fundamentally new things made possible specifically by being rationalists together in communities.
Clearly, this has not happened. My opinion (and possibly yours?) is that it didn’t happen because LW-rationalism wasn’t actually a recipe for doing big new things, or even a recipe for such a recipe. The author of the linked post thinks it could have happened and only failed for a bunch of historically contingent reasons. I don’t agree, but saying “it failed, but maybe we could try again?” is much preferable to not talking at all about the failure.
To riff a little more (for old times’ sake, haven’t used the #big yud tag in forever):
Why do I think LW-rationalism wasn’t a recipe for doing big new things? Because LW-rationalism focused on correcting individual judgment, but to accomplish big new things you need organizations of people, and the biggest obstacles to improving the world are organizational in nature. Getting big things done is not a matter of taking a bunch of individually optimized humans, putting them together in a building, and letting them run. The really important margin for improvement lies in getting your humans to work together, with strategies that scale and that are robust to unexpected events and individual points of failure.
LW-rationalism would sometimes talk about relevant issues, but only abstractly (coordination problems, game theory) or speculatively (plausible-sounding theories of group dynamics, from the armchair). In the SSC era, there was a lot more talk about social stuff, but mostly as it related to politics.
The kind of knowledge I claim you’d need is more like management lore. The kind of stuff you could learn from relatively responsible and moral people who’ve spent a lot of time running companies or government organizations, some of which is surely written down somewhere, but less systematically than one would like. (The knowledge relevant to correcting individual judgment is much more systematic and accessible – just pick up Kahneman and Tversky – so this may be a case of looking where the light is.)
In the years since 2009, rationalist-branded organizations have focused on correcting individual judgment (e.g. CFAR) or applying corrected individual judgment (e.g. MetaMed), without (as far as I can tell) much focus on optimizing the organizations themselves, qua organizations. Rationalist ideas about organizational effectiveness are still mostly unsystematic folklore and armchair speculation, much more so than rationalist ideas about individual effectiveness (or society-wide politics).
An extreme example of this asymmetry is Dragon Army, in which a CFAR instructor (who has presumably thought quite a bit about individual effectiveness) decides he wants more organizational effectiveness and designs an organization from scratch on the basis of a fictional organization, evincing little awareness that organizational effectiveness has been sought and achieved before in the real world and that its seekers might have useful guidance.
SSC brought a (welcome) appreciation of the importance of social-level roadblocks, but without much hope for solutions, at least not before the eschaton. In 2009, Eliezer dreamed of a self-improving, world-improving community that would just so happen to invent an AI god as one of its many collective efforts; five years later, in Meditations on Moloch, we learn that the emergent dynamics of human groups will inexorably eat us and everything we love, and this will only stop when an AI god is (somehow) created.
(via nostalgebraist)
