Uh… what? No, he doesn’t believe that. He has consistently and repeatedly explained that he thinks the idea of Roko’s… I know he’s said that later, but the Basilisk did freak him out in the original thread. This will not be new to you, but just for the record:
Until we have a better worked-out version of TDT and we can prove that formally, it should just be OBVIOUS that you DO NOT THINK ABOUT DISTANT BLACKMAILERS in SUFFICIENT DETAIL that they have a motive to ACTUALLY BLACKMAIL YOU.
(-EY. Plain text archive of the thread here.)
I admit that EY currently does not believe in the Basilisk (and didn’t even think it would work in the original thread). You’re right about that and I was wrong.
To be honest, these details are not all that significant to me, because I consider the whole of FAI, and the idea of worrying over Basilisks and trying to prove that they can’t work, to be nearly as silly as the Basilisk itself.
To me, EY’s deviation from sense here is not “thinking the Basilisk would work,” but “thinking that formal proofs about the Basilisk and other FAI topics are a good way to spend one’s time and energy.” The “thinks the Basilisk would work” / “thinks the Basilisk wouldn’t work” distinction is, to me, like a theological distinction is to an atheist; the gap between me and either position is much bigger than the gap between the positions themselves.
If people want me to, I might write a post about why I find FAI so un-compelling (actually, I have written about it at length in private emails, so I could easily copy/paste with some light editing).
I’d love to know why you find FAI so un-compelling. Also, I’d like to know whether it’s its possibility or its desirability that you disagree with. I have some misgivings about its feasibility, but to me it looks desirable by definition, so if a person believes it’s possible and has skills relevant to it, it looks like a pretty clear and obvious goal. Of course, they may be factually wrong about that belief, but that’s another story altogether; apparently Yudkowsky couldn’t convince Hanson or vice versa, and my own position is sort of a middle ground between the two, so I don’t know.
It’s strictly feasibility, with sort of an irrelevant side order of desirability. Mostly, I think we currently lack the information necessary to think about these topics realistically, and that screws up both feasibility and desirability; but the latter is thornier and gets into more ethics stuff, so I prefer to de-emphasize it.
I’ll look at the email I’m thinking of and see if it’s suitable for copy/pasting into a post here. Note that it was written to someone aware of FAI but (I think) not very invested in the concept; it paints in broad strokes, is very informal about everything, and I imagine much of it will not meet your standards for argument, but it does sketch a set of arguments that could be fleshed out further.
