taking a break from “ai discourse”
Over the last year or so, I’ve been writing a lot of long posts about AI timelines, AI risk, the limits of modern ML, and related issues.
People seem to enjoy these posts, and I’m proud of many of them. But the experience hasn’t been good for my mental health.
I’ve been spending way more time than I’d like thinking about these arguments obsessively – wanting to stop and think about something else, yet somehow unable to.
---
Part of the problem is that, this past year, I’ve gotten into the habit of reading LessWrong regularly – something I hadn’t done in a very long time.
LessWrong is full of people afraid of near-term superintelligent AI, much more so than the rest of my social circle. (Even the parts of my social circle that read LW are much, much less AI-doomerist on average than the people writing most of the posts on LW.)
This created a sense that “whoa, everyone around me is suddenly way more terrified of AI!”, which felt like a scarily sudden and scarily unmotivated social shift, and made me feel obsessively driven to figure out where these fears came from and what evidence was supposed to justify them.
To some extent, I do think this shift is a real phenomenon that goes beyond LW, and was (very roughly) contemporaneous with me picking up LW again. But it’s also become clear to me that I’m driving myself crazy by reading LW, and letting “what LW posters think” seep into my assessment of “what people think, in general.”
---
So, I’m going to stop reading LW for now. (Not just habitual reading, but reading posts on the site, period.) I’ll also avoid related sites like EA Forum, and personal blog posts containing the same kind of material.
I’m also going to try to talk about AI risk and AI timelines as little as possible, whether online or in person.
I’ll still talk about these topics if there’s some overwhelming reason to, but not just as banter or because “someone is wrong on the internet.”
I’ll still talk about ML more generally, as long as the topic isn’t too close to big-picture prognostication about risk, timelines, and the ultimate power/limits of ML.
I don’t know how long I’ll keep this up. With reading LW, I can probably keep it up indefinitely, since I spent years not-reading-LW in the past and feel none the worse for it. The more general prohibition on AI discourse I’ll keep up at least until the end of the year, and then think about how I feel.
