I have to leave pretty soon and can’t give this post by scientiststhesis a full response, but I want to make a few quick points that may clarify the post of mine that he is riffing on.
First of all: I’m flattered that you say I’m smart! Thank you. But I don’t really see the “obviousness” of certain LW beliefs as being related to intelligence. You say you’ve met lots of people who are “traditionally considered intelligent” and don’t find these ideas obvious. I have also had this experience.
In college, talking to a lot of academically “smart” people, I met some people who already shared my basic philosophical outlook, and also some people who didn’t and who I could never convince to see things my way. Eventually I stopped trying. The way I see it, things like this are a lot like fundamental politics. A site providing basic arguments in favor of mainstream left-wing progressivism would be kind of superfluous, not because everyone (or everyone sufficiently “smart”) is already a leftist, but because most people who aren’t leftists wouldn’t be swayed by the site.
So, when I said this:
“ ‘many philosophical debates are the results of disagreements over semantics’ — yeah, we know”
I was kind of making a joke, because this is the kind of idea that you will find big, politics-like rifts over among academics. You often hear this idea attributed to Wittgenstein, but then, as Paul Graham says:
Wittgenstein is popularly credited with the idea that most philosophical controversies are due to confusions over language. I’m not sure how much credit to give him. I suspect a lot of people realized this, but reacted simply by not studying philosophy, rather than becoming philosophy professors.
In other words, there were already plenty of people who had this idea – but Wittgenstein had the idea and also became a “major philosophical figure,” so that people who wanted to become philosophers were forced to grapple with it. This is the kind of thing I remember being argued over at college dinner tables, and feeling much like a political argument – everyone felt that their side was kind of “obvious” and no one ever really changed their views. When I say “yeah, we know,” the “we” is referring to “me and the kind of people who were on my side in those college dinner conversations.” Or Graham’s “lot of people.”
And I know I’m ignoring the people here, who clearly exist, who do benefit from having the case for certain things spelled out — the invisible middle between “X is obvious” and “you’ll never convince me of X no matter how many college dinner arguments we have.” Cf. queenshulamit’s recent autobiographical post — and I know there are plenty of other people who credit LW with aiding the development of their worldview.
But, again, I see this as being a lot like politics. If that “why you should be a mainstream leftist FAQ” actually existed, there would certainly be some people who would first discover leftism – or leftism as something with some good arguments behind it – through the site, and would credit the site with aiding their personal development. But they would be greatly outnumbered by people who spontaneously become mainstream leftists at age 14 and never look back, or people who reject leftism in similar fashion. And I think we could justly say such an FAQ would be sort of “superfluous” in the sense that the main barrier to the spreading of these ideas is not that people don’t know about the arguments in favor of them. That’s what I mean by “yeah, we know.”
(There’s an open question here about what attitude we should have towards resources about these kinds of “political” questions. Maybe the “why you should be a mainstream leftist FAQ” should exist, if only for the 0.1% of people who would find it useful. But I would not expect its construction to have a major impact upon the world; I would be pretty much indifferent to it, even if to that 0.1% it would be quite valuable indeed. It would be helpful, but not that helpful, and I would be wary of any attempt to “spread the gospel” on the basis of it. Analogy to LW here.)
Okay, second point, about singularity and FAI: I see these as being important to LW in a way that is out of proportion to how often they are mentioned. Yudkowsky works on FAI and tries to get people to work on FAI or donate, and several central LW figures (e.g. Kaj Sotala, Nate Soares) work at MIRI. Yudkowsky and Vassar say they’ve had to downplay AI risk talk relative to rationality talk as a PR move (rationality brings people in, AI risk scares people away). AI- and singularity-related stuff seems to be a major emotional hook for much of the community: I went to the Solstice event in 2013 and many of the songs there made the most sense if interpreted as encouragements to work toward a positive singularity, the blog post “Beyond the Reach of God” was read in emotional tones, etc. Even if this stuff is not explicitly talked about much on the main site, its role within the community is very obvious to anyone coming in from the outside.
(There is probably a center / periphery difference here: I would expect that the people who attend Solstice are much more likely to be worried about AI risk than the average main site reader, and also more likely to be core community members, the sort of people who post often on the main site or prominent blogs, attend meetups, etc. So “AI risk is important” is a “core LW belief” in the sense of being a “belief held by the core.”)
