My frustration here is with feeling unable to say “no, you are just boringly wrong, in addition to being wrong according to the quick heuristic.” Friendly AI, the digit ratio thing, and a lot of Moldbug’s ideas sound like crackpot stuff to many people; they might also actually be wrong, for boring empirical reasons. But ideas that have become protected do not go away no matter how much evidence stacks up against them; people solemnly nod their heads in response to the latest bit of evidence, and then go on respecting the position in principle.
This happens in other communities with other protected ideas – including those that might strike you or me as “obviously wrong” – which is part of why I think the “protected position” idea has validity. When I was younger I spent a lot of time around alternative medicine people because my father is into alternative medicine, and a lot of these people (including my father) are not stupid. I was even prescribed homeopathic remedies, which my father encouraged me to keep an open mind about. These people don’t just ignore negative evidence or physical plausibility; what they do is point to the vast uncertainty of medical knowledge, to the distorting effects of vested interests, to the occasional study that contradicts the trend, to complicated treatises by their favorite mavericks which you will never have time to read, to statistical nuances, and so forth. And yet the feeling remains: they are always performing these gestures in favor of the positions they happen to like. The gestures always go in the direction of “keep [homeopathy / biofeedback / vitamin megadoses / this month’s preferred obscure supplement] respectable,” and never in the direction of any of the other myriad ideas in idea-space. It feels very similar to talking to LWers about FAI or the like: you can make all the arguments you like, but the fixation on seeing the position as worthy-of-interest is never going to go away.
I guess we might disagree on the object level here. I don’t see “a lot of evidence building up against” things like Friendly AI. I see more and more important people and domain experts admitting there’s probably something to it, extremely strong arguments in favor of it published in various journals that no one has adequately responded to, plus a bunch of people online making counterarguments against it that to me seem obviously wrong and based on trivial misunderstandings. And then after people spend a lot of time pointing out the flaws in those counterarguments, instead of arguing further those people come back and say “How come you’re still talking about this idea we refuted?”
Remember, every group thinks they’ve conclusively debunked their opponents’ ideas and it is only through stubbornness that they haven’t admitted this. Creationists have come up with a thousand knock-down arguments against evolution, and are shocked that the evolutionists continue to believe it anyways.
(also, you use digit ratio research as an example of an idea that has been so conclusively refuted we should just stop talking about it, but I was using it as an example of an idea that a lot of people think sounds silly, but which has been supported in study after study. Do you know different information about digit ratio than I do, or did you just misunderstand that example?)
What you’re saying kind of sounds like an “argument from my opponent believes something” - basically, “here are people continuing to believe an idea even though I have told them it’s wrong, therefore they must be biased, therefore they have some weird concept of the ‘protected idea’ in their community.”
It seems like an attempt to short-circuit debate - you just say “I’m sorry, I am so obviously right that I pronounce that you can’t argue for the other side any more, and if you do I will just call you tedious for continuing an argument where your side is obviously wrong.”
And I can see doing that with something like neoreaction, where pretty much everyone has disagreed with it for the last hundred years. But with Friendly AI and AI risk ideas, it seems that they are becoming gradually more and more accepted, and there hasn’t been criticism of them as intellectually serious as the support for them - so how you can consider the question to be so closed that further discussion is tedious is a mystery to me.
I was not using digit ratio as an example of an idea that has been conclusively refuted, I was using it as an example of an idea that shouldn’t get any special cred for being counterintuitive. But really I think that about every idea, so the choice was pretty much arbitrary.
I think part of the issue here is the “crackpots work harder on their ideas than their critics think is warranted” problem. In a certain odd sense, the proponents of [insert dietary supplement for whose effectiveness there is no strong evidence] are more “intellectually serious” than its skeptics. The skeptics will, at most, look it up (probably on Wikipedia!) and note that there don’t seem to be any high-quality studies in favor of its effectiveness. The proponents will write treatises attending to the details of the non-high-quality studies, talking about its possible mechanism of action and the biological reasons for thinking it might work (”animals produce lots of vitamin C when they’re sick!”), its history of traditional use, etc. In a formal debate on the topic, the proponent would look far more erudite than the skeptic, precisely because the skeptic doesn’t consider the proponent’s weird point of fixation in idea space to be interesting enough to be worth learning about. (Were you to learn about everything that is “interesting” by this low standard, well, you’d never have time to do anything else. Meanwhile, the proponent is happily selling their supplement to the masses on the recommendation of their glossy, impressive pamphlet.)
There are similar phenomena surrounding any academic circle of veneration. It is very hard to get an academic Marxist to admit that Marx might be wrong: they’ll say you haven’t read Marx, and a year later, when you come back having plowed through the entirety of Capital, they’ll tell you, oh, you just haven’t understood Hegel. After wrestling with Hegel’s syntax for a few months, you may start to wonder whether all of this is the best available use of your time – which of course you are wondering since you are someone who was doubtful of Marx to begin with, unlike your Marxist acquaintance. The same thing happens with Freud, Lacan, Derrida, and a variety of other figures. What takedown of these figures could possibly be as “intellectually serious” as the work of the figures themselves in combination with the vast body of venerating work they have inspired? Oh, trust me, Marx will make more sense once you read this 1000-page tome interpreting Marx. What, you think you might have something better to do than read a dry 1000-page academic book by an acolyte of something you were wary of to begin with? What are you, some kind of intellectual lightweight?
In short, I think I could be a lot more formal and professional and so forth in my FAI criticisms than I have been, and I’m sure other critics could be as well (many of them would probably do a better job than me). But this would take time and real effort (which these off-the-cuff tumblr posts mostly don’t), and I have seen nothing to suggest that these expenditures would be worthwhile. What, you think I’m going to read up on Derrida, write my anti-Derrida tract, and suddenly cause thousands of sophomore deconstructionists to say “ah, my eyes are opened!”? If I were to get serious and write some sort of fancy anti-FAI tract, I can, at best, imagine it having the effect the Anti-Reactionary FAQ had – people would find it very interesting, but it wouldn’t change anyone’s mind about the respectability of the topic, and people would still be treating it as respectable years later. You can debunk Freud all you want, but you can’t knock him out of his place in some people’s mental firmament.
(I have known some serious Freudians. They were very hard to argue with, because who the hell thinks Freud is worth learning about anymore?)
(via slatestarscratchpad)
