aprilwitching:

geekwithsandwich:

socialjusticemunchkin:

nostalgebraist:

socialjusticemunchkin:

nostalgebraist:

@socialjusticemunchkin, did you coin the phrase “dogma of mandatory comprehensibility” for your NAB review, or does it have some earlier provenance, in your writing or somewhere else?  It’s a phrase that captures something that has frustrated me about deconstructionist (and similar) criticism in the past, and it’d be nice to be able to use it without referring people back to this particular kerfuffle.

Specifically, the frustration I have is that in order to identify “holes” in a text, places where a text “undermines itself,” or the like, it seems to me like you first need to ask the usual questions like “does this make more sense in historical context?” or “does it work to read this as meant ironically?”  I.e. the kinds of questions you usually find non-deconstructionist critics asking when confronted with aspects of a text that confuse them.

And it would be fine if any given deconstructionist had asked the usual questions and simply found the answers wanting, but in the cases I’ve read, they often don’t.  The (unintended?) implication is then that “if it doesn’t make immediate sense to a late-20th or early-21st century college professor, it doesn’t make sense.”  When, you know, that college professor’s viewpoint is not only not omniscient, but (more specifically) conditioned by the public morals and idea systems of their society in ways which they may not be aware of, since that’s how such things tend to go.  (I wonder if Foucault ever got on the deconstructionists’ case about this?)

(Note: I have a rule of not talking about NAB, but this post doesn’t count as talking about NAB by my standards)

As far as I know it’s my OC, and fresh to this particular incident.

The basic idea has been bugging me longer though, tying into the more general pattern I’ve observed of people yelling at each other because they don’t realize they don’t speak the same language: one side assumes that an expression in rationalist!english means what the same words mean in liberalartist!english, gives a reasonable response to their misreading in liberalartist!english, speakers of rationalist!english are like “lol wtf are these guys talking about”, and in the end both sides hate each other for the horrible sin of speaking the Wrong Dialect.

(And the general pattern kind of applies in a lot of uncharitable readings; most snarky nitpicking would lose its effect if one were to read things in the writer’s dialect instead of one’s own; and no matter how much fun said snarky nitpicking is, it’s not at all fair. (Yes, I sometimes do it myself too, feel free to yell at me if you catch me doing it unless I’m clearly aiming for a non-serious&honest approach.))

Thanks for the fast response.

IMO, “liberal arts” is not a very useful term here.  In modern usage it tends to refer to types of education which in some way hark back to the old quadrivium/trivium and the notion of a “broad education” they represented.  The quadrivium/trivium had no “humanities as opposed to STEM” focus – you can sort of break it down (imprecisely and misleadingly) as “trivium is (premodern) humanities, quadrivium is (premodern) STEM,” but logic is one-third of the trivium, so if you count that as “premodern STEM” you’ve got 5 of 7 “premodern STEM” subjects.

(The quadrivium included music, because this was thought of as the study of “number in time,” to go along with arithmetic (number), geometry (number in space), and astronomy (number in space and time, i.e. something like physics).)

Hardly anyone actually uses the original trivium/quadrivium anymore, but modern “liberal arts education” tends to aim for the same breadth.  For instance, at the “liberal arts college” I attended (where I got a physics degree), all students were required to take at least two classes in each of four “groups,” one of which was natural science (and there was nothing like “physics for poets” – everyone had to take the same intro science classes that the science majors were taking, which were taught with appropriate rigor), and one of which was something like “syntactic systems” (it included math, symbolic logic, foreign language courses excluding those classed as “literature courses,” and linguistics).

(Also, the “liberal arts college” as a subtype of American colleges has a bunch of other characteristics, like being expensive, having small class sizes, and holding many classes as Socratic-ish discussions rather than lectures.  None of these have much to do with the distinction I think you’re drawing.)


“Humanities” I think is a term that works strictly better than “liberal arts” here, because in the modern university it tends to mean stuff that isn’t “natural science” or “social science,” e.g. literature and history.  Still, even this is way too broad, since the “dialect” of a history department, say, will be different from that of a literature department, and even literature departments with different focuses will have different “dialects.”  (There’s been a fair amount of friction involved in the attempt to bring things like deconstruction into the discipline of classics, which tends to be old-school about most things, including literary analysis.)

What I think you’re pinpointing is something like “the most commonly used intellectual dialect in modern university literature departments, excluding classics.”  Although that isn’t a very snappy phrase.  “Talking like an English major,” although crude-sounding, is actually pretty close, but is likely to make you sound like you don’t know whereof you speak (cf. the reaction to @theungrumpablegrinch‘s review of NAB).  I’d love to find a phrase here that is readily and mutually intelligible.

Okay, the concept I’ve been trying to translate has been, in my brain, defined by a Finnish word which basically means “not STEM” and I was embarrassingly unfamiliar with the word ‘humanities’. That specific dialect is a subtype of it, but there seems to be a general pattern of “humanist” vs. “mechanist” language and thinking which this dialect, the postmodernist “reality don’t real” meme, the “scientists are soulless, understanding destroys wonder” meme, the idea that science has difficulties modeling fluid dynamics because our systems of knowledge are founded on patriarchal rigidity [sic], etc. are extreme edge cases of.

The thing isn’t limited to English as eg. gender studies tends to feature the same thing to some degree as well; whatever the fuck CrimethInc. is its “Eight Reasons Why Capitalists Want to Sell You Deodorant” is exactly that thing (“Body smells are erotic and sexual. Capitalists don’t like that because they are impotent and opposed to all manifestations of sensuality and sexuality. Sexually awakened people are potentially dangerous to capitalists and their rigid, asexual system.”); the analytic/continental divide in philosophy is also partially about that thing; I’ve seen many humanities people comment on issues of science with an embarrassing unawareness of the actual mechanisms of how things operate (because the broader version of the dogma of mandatory comprehensibility lets them believe things are way simpler than they actually are (and it obviously operates in reverse too with naive STEM people on humanities questions causing enough facepalms to extract all the world’s cooking oil needs from)); the people who stop treating others as humans if they say the word “rational” are that thing; etc.

(And similarly the “mechanist” edge case would be the stereotypical weakman “soulless” engineer who thinks emotions don’t matter and Spock is something to emulate instead of an embarrassing failure of a humanist attempt to cargo-cult rationality, identifies as Objective Rational Thinker™, uses models derived from physics to explain all human behavior and forgets that they are crude simplifications at best, etc…)

Hello Rationalist Tumblr™, I’m here via @aprilwitching, and I just wanted to jump in with a little linguistics.  I believe the “dialects” you’re describing would be considered “registers” in linguistic jargon.  Jargon is also a very useful word for this type of discussion.  I think basically what happens in the types of conflicts you’re describing here is that both parties coming from different academic backgrounds believe they are speaking in “academic register” but in fact there is no one unified academic register, there are many registers specific to the academic background in question, and therefore both parties believe they’re speaking the same register and don’t question the applicability of their jargon.  You can see some evidence for this when those same people speak to a non-academic person about the same subjects in a non-academic, casual register; they’re far more likely to either avoid jargon, or clearly define their jargon, because they know the other party doesn’t speak Academic Register.  If they only applied the same idea to discussions across academic backgrounds, they’d be set!

And also, this conflict definitely happens within STEM to a massive degree, as there is a ton of jargon in, say, biochemistry that a physics person isn’t likely to know.  Or in Ornithology that an Ichthyologist won’t know.  And I constantly find myself trying to explain taxonomic and evolutionary jargon to computer programmers (without much luck).

I don’t have as much experience with cross-humanities register conflicts, but I’m aware that they happen, especially between specific fields that examine the same phenomena from radically different angles.  The intersections of Linguistics with Anthropology and Psychology, Cultural vs. Evolutionary Anthropology, and Sociology with Psychology appear to be particularly rich examples.  There are an increasing number of weird Frankenstein Specialties emerging as a direct result of frustration at the lack of effective cross-specialty communication, too, like Neuroanthropology.

this is a very good addition, which also reminds me of that joke about being able to tell what someone studied in college by having them read out the word “unionized”. 9_9

& yes, “registers” is probably? more accurate than “dialects” here

Oh man, the phrase “unionized TAs” confused me for over a minute once when it came up in an article I was reading

@geekwithsandwich, thanks – I didn’t know that “register” had that meaning in linguistics.  (N.B. I am not a linguist by any means but I took two linguistics classes in college, so if I use some linguistics terminology below, this is not an attempt to assert any expertise of my own, it’s just how I learned to talk about these things in college)

I’m still not sure “register” is the term I want to use for the kind of thing talked about upthread, not because it’s not academically correct, but because of – well, yet another dialect register (jargon?) clash!

If I were talking to a bunch of academic linguists, it would be appropriate to use the word “register” (and I wouldn’t have known that).  On the other hand, if someone (like me) who doesn’t know the academic meaning of the word reads it, they’re going to interpret it using the colloquial definition, which is more like “level of formality.”  (I don’t know if “colloquial definition” is the correct linguistic term for this – I mean something like how “charm” in particle physics means something different from “charm” in everyday English, although that’s an extreme example.)

E.g. there are a lot of writing guides out there that talk about “formal register” and “informal register” as though they are the only two out there.  Googling “formal register” turned up a bunch of these, which sometimes say things like

However, the focus of this learning object, is on how the vocabulary and grammatical choices you make affect the register (the degree of formality) of your finished product. [my emphasis]

But here we want to talk about different academic ways of speaking that all may seem equally formal (they may all appear in published papers in the relevant disciplines, for instance), so the colloquial definition of “register” is going to confuse non-linguists here.

OTOH, the colloquial definition of “dialect” seems about right, even if it’s clearly being used in a sort of figurative way (there is no one who exclusively “speaks” an academic “dialect” the way someone might speak a dialect of English).  Non-linguists think of “dialects” as variants of a language that are still recognizable as that language, which may affect everything from phonology on up to syntax and pragmatics, and which may be hard to understand to speakers of other dialects – all of which are helpful rather than misleading here.  (“Jargon,” colloquially, sounds like it’s just about the lexicon alone, which is misleading.)

(Googling around just now, I found that mutual intelligibility is [sometimes?] used as the criterion distinguishing dialects from languages in linguistics, which seems to differ from the colloquial notion that “dialects are often hard to understand”?  But maybe I just don’t understand precisely what “mutual intelligibility” means)

I don’t think any of these terms work perfectly here, unfortunately.

(via aprilwitching-deactivated201808)

bendini1:

nostalgebraist:

@bendini1

If that is your claim you might want to reword your opening statement, it gave me the impression you meant something different.

Do we have data on the anxiety prevalence in similar demographics? It hardly seems fair to compare the anxiety rate of socially-maladjusted high-IQ atheist nerds to the general American population.

For what it’s worth I have the contrarian trait, yet suffer very little anxiety in that area, I don’t even come close to any DSM listed anxiety or OCD condition. My motive for contrarianism is the opportunity to get better results than everyone else using the hand I’ve been dealt, not to hope that someday I can qualify as a normal human.

Oh, yeah, re-reading it I see that my opening statement was ambiguous.  I meant “contrarianism (when it happens) is often the result of anxiety” but it could have meant “contrarianism is often the result of anxiety (when it happens).”  I would edit the post, but it’s already gotten a big reblog chain and I think the damage has been done

Also I’m going to start referring to the thing I described as “trait C” rather than “contrarianism” or anything, since I want a connotation-free term that means “the contrarian-related trait I described in the OP, specifically”

Anyway, I’m not sure “anxiety prevalence in similar demographics” is relevant here.  What I want here is P(anxiety | trait C), which I am claiming is high.  You could compute this even if you only had data for people with trait C and no one else, since it’s just the fraction of people with trait C who have anxiety.

In particular, if you compared “trait C people” (using some proxy variable) to some sample of very demographically similar “non-trait C people,” it might even be the case that the former have a lower rate of anxiety than the latter; so long as the former have a high rate, the claim is still true.  All we want to know here is “given ‘person has trait C,’ what’s the probability that ‘person has anxiety’?”
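To make the arithmetic concrete, here’s a toy Python sketch with invented numbers (the counts are purely illustrative, not survey data): P(anxiety | trait C) depends only on the trait-C sample, so it can be high even when a demographically matched comparison group’s rate is higher still.

```python
# Toy numbers, purely illustrative -- not real prevalence data.
# Suppose we surveyed 200 trait-C people and 200 demographically
# matched non-trait-C people from the same high-anxiety milieu.
trait_c_total = 200
trait_c_anxious = 140        # 70% of the trait-C sample has anxiety

matched_total = 200
matched_anxious = 150        # 75% of the matched comparison group

# P(anxiety | trait C) needs only the trait-C sample:
p_anxiety_given_c = trait_c_anxious / trait_c_total
print(f"P(anxiety | trait C)       = {p_anxiety_given_c:.2f}")   # 0.70

# The matched group's rate is *higher* (0.75 > 0.70), yet the claim
# "most trait-C people have anxiety" still holds -- the comparison
# group never enters the calculation.
p_anxiety_matched = matched_anxious / matched_total
print(f"P(anxiety | matched non-C) = {p_anxiety_matched:.2f}")   # 0.75
```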

If the trait C group has a lower anxiety rate than a demographically matched non-trait C group, while still remaining very high in comparison to the general population, then matching “has trait C” to high likelihood of anxiety is an odd chain of reasoning, unless you just want one question to ask for brevity (even then it might not be the best question: you will pick up a lot of the “contrarian but spouts bullshit and never admits when they are wrong” types, who will likely have a relatively low anxiety prevalence)

What I’m saying here is that the correlation implies a proxy for causal factors, not direct causation

I’m not denying the correlation is strong, just saying that if you can ask more fine-grained questions you could get a stronger correlation; presenting this as the “missing piece of the puzzle” may end up being a curiosity stopper, discouraging people from looking for less neat but more accurate explanations.

Well, but I’m not looking for a full theory of contrarianism, or for an ideal one-question quick diagnostic for anxiety.  I’m proposing an idea that may be helpful in interactions with individual trait C people (or for helping trait C people understand themselves).

In particular, when you’re dealing with a particular individual, you’ll have plenty of other information.  So the risk of false positives isn’t really an issue – if the particular person you’re looking at seems unlikely to have the “anxiety ==> trait C” thing for other reasons, you can just not propose that explanation in that case, drawing on it only when it’s useful.  (That is, it’s a “tool” in one’s “toolbox,” in the terminology I used when we were talking about “amateur sociology” a little while back.)

(via bendini1)


bendini1:

chroniclesofrettek:

nostalgebraist:

greenrd:

nostalgebraist:

IMO, an idea that should be more widely spread – not even widely assented to, necessarily, just talked about, possibly as a “controversial thing” – is that contrarianism is often the result of anxiety

More precisely, not contrarianism but “I know what you’re thinking, but – what if this consensus idea were actually wrong?”-ism

In stereotype land, the psychology behind this behavior is either a desire to annoy people from a place of presumed intellectual superiority, or just an interest in intellectual game-playing for its own sake.  But in my experience, I find myself wanting to question consensuses because the alternative feels scary.  If no one really knows why the thing is true and everyone just believes it because other people believe it in a self-confirming web, then what happens when it turns out to be wrong?

The anxiety makes this weigh on me in particular even though, as just described, it would be a society-wide failure.  I tend to (irrationally) feel like other people can rely on “what seems sensible” without much risk, possibly due (says the anxiety) to some mystical intuitive faculty that aligns their sense of “what seems sensible” with actual truth – but if I try to do that, I end up ruining everything, and then everyone’s looking at me in horror and pain and asking what the hell I thought I was doing, and I’m thinking “well it seemed sensible at the time” but that is not enough, not for me, no, for me only rock-solid nerdy professorial foundations will work, not because I want to be an intellectual, but because I want to not ruin everything

(This almost never actually happens, and when it does it doesn’t happen with anything like the high drama in the previous paragraph, but it feels like it is a danger I must ever be on watch for)

And when I look around me – taking into account of course that I may be projecting my own motivations onto others (I must include nerdy caveats like that one, some people might know how to get by without them but I don’t, you see how it is) – well, it looks to me like a lot of the “contrarians” and “fans of weird ideas” out there have anxiety disorders.  And this makes sense.

Rejecting common knowledge and laboriously replacing it with a nerdy fiddly ground-up programme that either ends up rediscovering the obvious or “absurdly” negating it – this can be intellectual pretentiousness, or a desire to be special, or just poor judgment of how to usefully spend one’s time and energy.  But it can also be what you do because you “know” that if your foundations aren’t rock-solid, they’re going to blow up in your face and also the faces of loved ones and innocent bystanders, even if this never happens to anyone else

If you don’t go back and check whether the oven is on, it’s going to turn out that it was on, because this is how your life works.  If you don’t neurotically plan out your schedules and your schedules-within-schedules and make checklists and proceed in life one carefully regimented step at a time, you are going to make some mistake so stupid that it lies outside of the realm of ordinarily conceivable human behavior, and it will be so embarrassing that you will be cast out from society and gainful employment forever, because this is how your life works.

If you don’t worry over the coherence of your epistemology and your ethics and the reliability of every source you read and the myriad potential for error even in the work of the great scholars and thinkers who have shaped the received wisdom of educated people and the established (established? by whom?) fact that received wisdom in every prior society has contained vast errors and licensed vast injustices and in sum the ever-present possibility that everyone else could just be getting some basic thing (any basic thing) wrong and failing to see reality for what it is,

This is Obsessive Compulsive Personality Disorder. Making it explicit like this helps to show to people who don’t have OCPD how silly these thoughts are. Strangely though, it might not work to stop a person who already has this manifestation of OCPD from thinking like this. Much like eating disorders, I suppose.

It sounds more like OCD than OCPD to me, if we’re talking about specific disorders?  (I was thinking it was most useful to use “anxiety disorders” as a broad umbrella here rather than singling out any particular one, although my description does sound a lot like OCD in particular, at least according to the stereotypes I have in my head.)

Wikipedia says:

Unlike OCPD, OCD is described as invasive, stressful, time-consuming obsessions and habits aimed at reducing the obsession related stress. OCD symptoms are at times regarded as ego-dystonic because they are experienced as alien and repulsive to the person. Therefore, there is a greater mental anxiety associated with OCD.[2]

In contrast, the symptoms seen in OCPD, though they are repetitive, are not linked with repulsive thoughts, images, or urges. OCPD characteristics and behaviors are known as ego-syntonic, as persons with the disorder view them as suitable and correct.

The thing I’m describing involves unpleasant anxiety which the “overly careful” behavior attempts to relieve, rather than a sense that those behaviors are straightforwardly the right things to do.  In particular, there’s a focus on averting disaster via these behaviors, which sounds close to “repulsive thoughts, images, or urges.”

(From the DSM-V description of OCD compulsions: “The behaviors or mental acts are aimed at preventing or reducing anxiety or distress, or preventing some dreaded event or situation; however, these behaviors or mental acts are not connected in a realistic way with what they are designed to neutralize or prevent, or are clearly excessive.”)

There is also a reluctance, in what I’m describing, to generalize from “I should do the behaviors” to “everyone should do the behaviors” – the “everyone else can get away with this, but not me” aspect I described in the OP.  From what I can tell, this is less common in OCPD and may be a factor distinguishing it from OCD.  Diagnostic criteria for OCPD include, in the DSM-V:

Rigid insistence on everything being flawless, perfect, without errors or faults, including one’s own and others’ performance; […] believing that there is only one right way to do things

and in the DSM-IV:

Is overconscientious, scrupulous, and inflexible about matters of morality, ethics, or values (not accounted for by cultural or religious identification).

and in the ICD-10 (which calls it “Anankastic Personality Disorder”):

excessive pedantry and adherence to social conventions [!]

@allfeelsallthetime

It’s a nice theory, but, depending on how bad your anxiety is, over 99% of people with worse anxiety than you haven’t decided to rebuild everything they know from the ground up. This may well be a factor, but until you can explain why those people haven’t taken this option, it remains a nice plausible theory for why people become rationalists.

My claim isn’t “some (large) fraction of people with anxiety do this,” but “some (large) fraction of people who do this have anxiety.”  Anxiety can lead to a lot of different behaviors, but some of those behaviors are relatively reliable (if still far from perfectly reliable) evidence of anxiety.  You know, Bayes’ rule and base rates and stuff
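A toy Bayes’-rule calculation (all numbers invented for illustration) shows how both claims can hold at once: well under 1% of anxious people develop trait C, yet most people with trait C turn out to be anxious.

```python
# Invented numbers for illustration only -- not real prevalence data.
p_anxiety = 0.20                # base rate of anxiety in the population
p_c_given_anxiety = 0.005       # 0.5% of anxious people develop trait C
p_c_given_no_anxiety = 0.0002   # it's far rarer without anxiety

# Law of total probability: overall rate of trait C
p_c = (p_c_given_anxiety * p_anxiety
       + p_c_given_no_anxiety * (1 - p_anxiety))

# Bayes' rule: P(anxiety | trait C)
p_anxiety_given_c = p_c_given_anxiety * p_anxiety / p_c

print(f"P(trait C | anxiety) = {p_c_given_anxiety:.3f}")   # tiny: 0.005
print(f"P(anxiety | trait C) = {p_anxiety_given_c:.3f}")   # large: 0.862
```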

(Also, the phrase “being a rationalist” doesn’t capture the trait I’m talking about, although a lot of LW-rationalists do incidentally have the trait)

(via bendini1)

reddragdiva:


so … what’s the practical difference for the person on the receiving end of the querulousness?

i appreciate that considering the motivation behind a given piece of querulousness may be important to appreciating the querulant as a person, but not necessarily in dealing with it. “you’re just fearful” strikes me as being in danger of inappropriate personalisation of a response (or, as you posit it here, more of a reaction) presented as substantive and issue-based.

what are you positing as an appropriate response to querulous contrarianism in this framework?

I don’t really think this framework can provide any practical advice of that kind.  If the behavior annoys you, it annoys you.  Ultimately I think that has to be dealt with just like any situation where you want to politely disengage from some conversation, and ideally also express your wish not to get into that type of conversation in the future.  (This happens with all sorts of other things – we all have topics we just don’t want to talk about, or tones/styles of speech/writing that sound like fingernails on a chalkboard to us, etc.)

If this framework has any practical upshot, it will be in – sometimes, perhaps – making certain querulants not annoying where they otherwise would have been.  Sometimes what we find annoying about a speech act is the psychology we read into it, and if we see different psychology there, the amount of annoyance may change.

Like, for obvious reasons, I find it really hard to have any kind of good faith engagement with someone if I think they’re trying to get a rise out of me, which is often what this “hey, what if?” behavior looks like.  (Getting into conversations about something like Friendly AI, say, it’s easy to feel like one is being “trolled” – you strongly feel at the outset that the topic is not worth careful investigation given the opportunity cost, but then you think “oh, I’ll look bad if these people make sophisticated arguments and I have nothing similarly sophisticated to say in response,” so now you’re delving into the details of AI futurism and the concept of Friendliness, i.e. exactly what you thought was not a good use of time, and now you feel like you’ve been, well, owned)

But if the intention (in that or many similar cases) isn’t “trolling” or “feeling intellectually superior to people who don’t waste their time thinking about such things,” but is instead this other, much more #relatable thing, involving the other person’s lifelong quest to make some sort of peace with a threatening world, well, that might make the conversation more interesting to have, less like falling for bait, etc.?

(via reddragdiva)

greenrd:

nostalgebraist:

IMO, an idea that should be more widely spread – not even widely assented to, necessarily, just talked about, possibly as a “controversial thing” – is that contrarianism is often the result of anxiety

More precisely, not contrarianism but “I know what you’re thinking, but – what if this consensus idea were actually wrong?”-ism

In stereotype land, the psychology behind this behavior is either a desire to annoy people from a place of presumed intellectual superiority, or just an interest in intellectual game-playing for its own sake.  But in my experience, I find myself wanting to question consensuses because the alternative feels scary.  If no one really knows why the thing is true and everyone just believes it because other people believe it in a self-confirming web, then what happens when it turns out to be wrong?

The anxiety, in particular, makes this weigh on me in particular even though, as just described, it would be a society-wide failure.  I tend to (irrationally) feel like other people can rely on “what seems sensible” without much risk, possibly due (says the anxiety) to some mystical intuitive faculty that aligns their sense of “what seems sensible” with actual truth – but if I try to do that, I end up ruining everything, and then everyone’s looking at me in horror and pain and asking what the hell I thought I was doing, and I’m thinking “well it seemed sensible at the time” but that is not enough, not for me, no, for me only rock-solid nerdy professorial foundations will work, not because I want to be an intellectual, but because I want to not ruin everything

(This almost never actually happens, and when it does it doesn’t happen with anything like the high drama in the previous paragraph, but it feels like it is a danger I must ever be on watch for)

And when I look around me – taking into account of course that I may be projecting my own motivations onto others (I must include nerdy caveats like that one, some people might know how to get by without them but I don’t, you see how it is) – well, it looks to me like a lot of the “contrarians” and “fans of weird ideas” out there have anxiety disorders.  And this makes sense.

Rejecting common knowledge and laboriously replacing it with a nerdy fiddly ground-up programme that either ends up rediscovering the obvious or “absurdly” negating it – this can be intellectual pretentiousness, or a desire to be special, or just poor judgment of how to usefully spend one’s time and energy.  But it can also be what you do because you “know” that if your foundations aren’t rock-solid, they’re going to blow up in your face and also the faces of loved ones and innocent bystanders, even if this never happens to anyone else

If you don’t go back and check whether the oven is on, it’s going to turn out that it was on, because this is how your life works.  If you don’t neurotically plan out your schedules and your schedules-within-schedules and make checklists and proceed in life one carefully regimented step at a time, you are going to make some mistake so stupid that it lies outside of the realm of ordinarily conceivable human behavior, and it will be so embarrassing that you will be cast out from society and gainful employment forever, because this is how your life works.

If you don’t worry over the coherence of your epistemology and your ethics and the reliability of every source you read and the myriad potential for error even in the work of the great scholars and thinkers who have shaped the received wisdom of educated people and the established (established? by whom?) fact that received wisdom in every prior society has contained vast errors and licensed vast injustices and in sum the ever-present possibility that everyone else could just be getting some basic thing (any basic thing) wrong and failing to see reality for what it is,

This is Obsessive Compulsive Personality Disorder. Making it explicit like this helps to show to people who don’t have OCPD how silly these thoughts are. Strangely though, it might not work to stop a person who already has this manifestation of OCPD from thinking like this. Much like eating disorders, I suppose.

It sounds more like OCD than OCPD to me, if we’re talking about specific disorders?  (I was thinking it was most useful to use “anxiety disorders” as a broad umbrella here rather than singling out any particular one, although my description does sound a lot like OCD in particular, at least according to the stereotypes I have in my head.)

Wikipedia says:

Unlike OCPD, OCD is described as invasive, stressful, time-consuming obsessions and habits aimed at reducing the obsession related stress. OCD symptoms are at times regarded as ego-dystonic because they are experienced as alien and repulsive to the person. Therefore, there is a greater mental anxiety associated with OCD.[2]

In contrast, the symptoms seen in OCPD, though they are repetitive, are not linked with repulsive thoughts, images, or urges. OCPD characteristics and behaviors are known as ego-syntonic, as persons with the disorder view them as suitable and correct.

The thing I’m describing involves unpleasant anxiety which the “overly careful” behavior attempts to relieve, rather than a sense that those behaviors are straightforwardly the right things to do.  In particular, there’s a focus on averting disaster via these behaviors, which sounds close to “repulsive thoughts, images, or urges.”

(From the DSM-V description of OCD compulsions: “The behaviors or mental acts are aimed at preventing or reducing anxiety or distress, or preventing some dreaded event or situation; however, these behaviors or mental acts are not connected in a realistic way with what they are designed to neutralize or prevent, or are clearly excessive.”)

There is also a reluctance, in what I’m describing, to generalize from “I should do the behaviors” to “everyone should do the behaviors” – the “everyone else can get away with this, but not me” aspect I described in the OP.  From what I can tell, this is less common in OCPD and may be a factor distinguishing it from OCD.  Diagnostic criteria for OCPD include, in the DSM-V:

Rigid insistence on everything being flawless, perfect, without errors or faults, including one’s own and others’ performance; […] believing that there is only one right way to do things

and in the DSM-IV:

Is overconscientious, scrupulous, and inflexible about matters of morality, ethics, or values (not accounted for by cultural or religious identification).

and in the ICD-10 (which calls it “Anankastic Personality Disorder”):

excessive pedantry and adherence to social conventions [!]

(via greenrd)

dagny-hashtaggart:

theaudientvoid:

dagny-hashtaggart:

earnest-peer:

dagny-hashtaggart:

While my leftism has grown a lot more complicated and equivocal over time, one thing I’m not likely to stop being in the foreseeable future is a revisionist on the subject of the French Revolution.

When people think of the French Revolution, they inevitably think of the Terror. It stands out as one of the paradigmatic examples of revolutionary violence, proof alongside Stalinist Russia and Maoist China that left-wing revolutions are a Very Bad Thing. The problem is that this portrayal is clownishly inaccurate. Mass executions are clearly bad, but even with the technological differences it’s pretty suspect to equate instances of state violence whose death toll differed by two to three orders of magnitude. (The numbers of deaths attributed to Stalin and Mao commonly exceed 50 million each, though some of those were due to problems of scarcity that are questionably those rulers’ responsibility. The Reign of Terror killed about 40,000 people.)

What really damns the argument that the Terror stood out as a historical atrocity, though, is the comparison to other periods of mass violence in contemporary European history. The English Civil Wars, which were a tempest in a teapot compared to many conflicts of the time, killed around 200,000 people, more than half of them civilians, over a shorter time period than the French Revolution. The Thirty Years’ War killed an estimated 8 million. The Saint Bartholomew’s Day Massacre in 16th century France killed 25,000 Huguenots in Paris alone, probably more like 70-100,000 in toto, over the course of 2.5 months, not ten years. And unlike the Jacobins, the architects and participants of that massacre received full amnesty. That surely sounds like the sort of quiet, stable, respectable society that reactionaries love. Good, clean, salt of the earth folks (presumably in the Carthaginian sense).

The Terror wasn’t a stand-out atrocity. It wasn’t even business as usual. By the admittedly horrifying standards of the time, it was atypically humane.

This is why I can never take Edmund Burke seriously. The monarchy and traditionalist religion that he and other conservatives of his epoch espoused had overwhelmingly more blood on their hands than the French Revolution. He could only make his case by judging them by glaringly disparate standards. As cynical as it is to say, the simple fact remains: if all the French Revolution had done was murder 40,000 people, it would be a historical footnote. Robespierre would be one failed politician among thousands, not the cackling villain of every period piece written since. People don’t hate the French Revolution because it hurt people; if that was the standard, why don’t we see British and American pundits harking back to the horrors of the Thirty Years’ War or the expulsion of the Jews from Spain all the time? They hate it because it hurt people in the name of liberte, egalite, and fraternite, in the name of an ideology that wasn’t business as usual.

I don’t think the Terror sticks out because it’s an ideology being horrible; it sticks out because it’s an ideology being horrible *to itself*. Revolutions are supposed to be scary for the establishment, but the Terror showed that, even in success, the revolution also threatens the revolutionaries.

Also, most of the other mass killings you mention end in the deaths of lots and lots of NPCs, if you will, whereas the greater focus on the to-be-decapitated means you get the feeling of lots and lots of real victims.

None of these points really mean I disagree with your moral weighting here. Maybe this post just sums up as “historical commentary not utilitarian, news at 11”.

Your point about the relative facelessness of the victims of many wars and power struggles is a good one.

On the other hand, I can’t really see revolutionaries being horrible to each other and the people caught in the middle as fundamentally different from monarchs being horrible to each other and the people caught in the middle. This is what I was getting at with the references to business as usual: with the exception of more expansive and catastrophic conflicts like the Wars of Religion, the sorts of wars and power struggles that raged in the medieval and early modern periods were pretty much feudalism and monarchy functioning as intended. Nobody outside the church and the Enlightenment intelligentsia seemed to think all that carnage was particularly a problem, at least as long as they maintained a respectable win-loss record.

I think it’s mainly an issue about it being a textbook example of “enlightened revolution” getting co-opted by a strong man dictator. Cromwell did it first, and I guess you could argue that the Puritan Roundheads were the “enlightenment” of their day, but the series of wars that it caused was localized to the British Isles, whereas the Napoleonic wars touched all of Europe (and while we’re on the subject, if you’re going to count English Civil War deaths as being the “fault” of the initial revolution, then you should probably count Napoleonic war deaths as being the fault of the French Revolution, which would bring the body count up significantly).

Also, I would add that “revolutionaries being horrible to each other and the people caught in the middle” not being “fundamentally different from monarchs being horrible to each other and the people caught in the middle” is entirely the point.

It wasn’t the point for people like Burke, though. He used the FR as a prime example of why the status quo was better than all this revolutionary nonsense, and his intellectual descendants have carried on with that thread. I’m at least halfway inclined to agree with that claim when it comes to the modern liberal democratic status quo, but modern liberal democracy didn’t exist when Burke wrote, and if it had he likely would’ve hated it.

The anti-Burke point sounds right to me (from what little I know of Burke).

I think what’s particularly chilling to me about the Terror is the combination of Enlightenment ideals and behavior that runs directly counter to those ideals.  There’s something uniquely disturbing to me about the way the Law of Suspects allowed the government to arrest anyone deemed an “enemy of liberty,” making people live in cowering fear that if they acted freely their choices might not be sufficiently “pro-liberty” – and how the Law of 22 Prairial made death the only penalty and abolished witnesses and defense counsel.  It’s not that the repression or the killings were uniquely extreme in comparison with the median situation under monarchy, but that it’s an early example of Enlightenment ideals turning into justifications for the same old shit – in the very country and situation that produced many of those ideals.  If I were living at the time I could easily imagine looking at Voltaire, Diderot, Rousseau et al. and saying with a sad finality: “well, that didn’t work.”


Also, is it really fair to compare the 18th century Ancien Regime to a bunch of stuff a century or more earlier?  Of course, the natural response to me here would be that if we’re judging “monarchy” in general, we have to look at a large sample, and if I insist on a narrow focus on the one late case, I’m like those people who insist global warming isn’t happening because there were a few cold years recently.  OTOH, the 18th century Ancien Regime is what the French Revolution was a response to, and so we have to look at it to determine whether the response was justified.

I gave up on Simon Schama’s Citizens after a few hundred pages of unsparingly minute detail (and Schama may well be full of shit anyway), but in what I read, Schama’s main theme was that the Ancien Regime was modernizing very rapidly, and that in material terms the Revolution may have in fact set things back relative to the course they were on.  He devotes a lot of space to Turgot and Malesherbes, both advocates of sweeping reforms – none of which I remember well enough to say anything interesting about here.

Anyway, here’s a long quote from Schama – mostly about material rather than social progress, although note the bit about the abolition of torture and the emancipation of Protestants – so you can get a sense of where I’m getting my (possibly bullshit) ideas here:

Keep reading

(via dagny-hashtaggart)

slatestarscratchpad:

nostalgebraist:

@slatestarscratchpad

Let me use a real example so I can get a better mental handle on this.

Depression involves low mood (very typical of depression), anhedonia (very typical of depression), overeating (somewhat associated with depression), and oversleeping (somewhat associated with depression).

Normal antidepressants like Prozac will help all of these things (this is an oversimplification). They may not always help overeating, because sometimes that might be caused by the person just being naturally a glutton unrelated to depression, but the more related to depression a symptom is, the more Prozac will help.

Stimulants like Adderall will help with overeating and oversleeping, but don’t really help low mood or anhedonia.

The usual interpretation of this is that antidepressants treat depression-the-construct, and Adderall happens to help the symptoms of overeating and oversleeping, but doesn’t actually treat depression-the-construct.

It sounds like Ashton and Lee are saying “But what if Adderall actually treats depression-the-construct really well, but then its side effects are giving you low mood and anhedonia, so that it ends up looking like it only treats the less typical overeating and oversleeping symptoms?”

I agree this is possible, but since this is impossible to test, shouldn’t Occam’s Razor tell us it’s more likely that Prozac is a true antidepressant than that Adderall is?

Yes, in that case it should.  The reason this case is different is (I think) because it doesn’t involve a large mean shift with smaller variability around it.

In the actual IQ / education example, the highest educational group did better (on average) on every subtest than the lowest educational group.  The mean difference (in units of Cohen’s d) was 2.0 and the minimum difference was 1.43.  The standard deviation of subtest differences was a mere 0.32.

This is like a drug helping with every depression symptom, just a bit less with some than with others.  The catch is that there was near-zero correlation between the improvement in a subtest and how g-loaded it was.  That’s like a drug that helps a lot with every depression symptom, but helps a bit less with those that are more “related to depression” and a bit more with those that aren’t.  It sounds like this is what would happen if you gave someone a mixture of Prozac and Adderall in a single pill, maybe?

The “method of correlated vectors” would look at this and say, OK, the variability in symptoms-helped is uncorrelated with the variability in degree-to-which-symptoms-are-associated-with-depression, therefore it doesn’t help depression-the-construct.

But it’s helping all the depression symptoms a lot, and the MCV is completely missing that (because correlations ignore means).  If it was literally Prozac + Adderall (with enough of the latter to zero out the correlation), it’d clearly be wrong to conclude that it has no antidepressant action.
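That “correlations ignore means” point can be made concrete with a few invented numbers (the symptom names, relatedness values, and improvement scores below are all hypothetical, chosen only to illustrate the arithmetic):

```python
import numpy as np

# Hypothetical "depression-relatedness" of four symptoms (the analogue of
# g-loadings): low mood, anhedonia, overeating, oversleeping.
relatedness = np.array([0.9, 0.8, 0.4, 0.3])

# Improvement on each symptom from the imagined Prozac+Adderall pill:
# uniformly large, with small wiggles unrelated to relatedness.
improvement = np.array([2.10, 1.85, 2.15, 1.90])

r = np.corrcoef(relatedness, improvement)[0, 1]
mean_improvement = improvement.mean()

# An MCV-style reading looks only at r (essentially zero here) and concludes
# "no effect on the construct" -- while ignoring the large mean improvement.
print(r, mean_improvement)
```

Pearson's r is computed entirely from deviations around the two means, so the uniform +2.0 shift is invisible to it by construction.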

Hmmm, thanks, but bear with me:

Suppose there are two things that help you do well on a test (let’s say the SAT), IQ and practice. Both higher IQ and more practice help on every section of the test, but some sections (let’s say logic puzzles) are very g-loaded and harder to improve through practice, and other questions (let’s say vocabulary words) are not so g-loaded but easier to improve with practice.

We test a kid, and she gets an average score. We test her again next year and her score is better. We want to know whether she improved her score by practicing, or by using an IQ-boosting superdrug. We look at the test and find that although she did better in every section, she did much better in the vocabulary sections, but barely better at all in the logic sections.

Doesn’t this provide strong evidence that her gains are from practice rather than IQ, even though she improved in every section just like we would expect an IQ boost to do?

Yes.  Specifically, because of the “much better” vs. “barely better at all” pattern.

Although in the IQ / education dataset that Ashton and Lee are looking at, you get that kind of pattern in the non-g factors, but only if you assume there’s also a g difference.  Simply put, this is because in that dataset, everything went up a lot with education, so if you assume that’s all training, you have to assume that everything is heavily trainable.

The critique of the MCV still stands, because the MCV can’t distinguish between this case and the one you gave.  Correlations don’t see means, so the MCV doesn’t care whether “everything went up” or not, even though we clearly care about that for interpretation purposes.

Here is a quote from the paper, in case my summaries are adding confusion:

Herein lies the implausibility of the conclusion that the zero correlation between vectors indicates a zero group difference in g. If the groups are equal in g, then the pattern of very large group differences on every subtest can only be explained by extremely large differences in the non-g aspects of every subtest. Even though these differences are no doubt accounted for in considerable part – but not entirely – by a few group factors, it is nevertheless unparsimonious to suggest that although the groups differ not at all in g, they differ massively in every other aspect of mental ability.

Let us contrast this situation with that which follows from the assumption that the groups differ by d=2.50 in g. Under this assumption, group differences in the non-g aspects of each subtest range from d=0.08 to d=1.64—that is, from virtually no group difference to a rather large group difference (but one which is still much smaller than the group difference in g). We now ask the reader to reflect on these two alternative possibilities. Given a pattern of large group differences across a variety of g-loaded subtests, which is more likely: first, that the groups show absolutely no difference in g, but differ massively on the non-g aspects of every single subtest; or, alternatively, that the groups show a large difference in g, and also show a range of smaller differences in the non-g aspects of the various subtests? We believe that the latter is obviously a more plausible state of affairs: a large group difference in g is combined with a range of much smaller group differences in various other aspects of mental ability.

(via slatestarscratchpad)


slatestarscratchpad:

nostalgebraist:

Some stuff I read after getting momentarily obsessed with IQ stuff again after looking up the Similarities subtest:

“The Attack of the Psychometricians” (Psychometrika, 2006) by Denny Borsboom, which I’m sure I had skimmed at some point since it was linked in That Cosma Shalizi Post I Don’t Even Need To Name, You Know The One … but I went back and read it carefully this time, and it’s great.  Worth reading if you have any interest in these things.  (Largely just says the same things that Shalizi does, but more systematically, with more citations and detail, and less focus on g specifically.)  It’s from 2006, and near the end Borsboom mentions some hopeful developments, so I wonder if things have gotten better in the intervening decade.

Summary of Borsboom: the field of psychometrics has made all sorts of advances in the theory of test design in the last 50 years, but the people who make and use IQ tests, personality tests, etc. entirely ignore these and just use the crude and broken “Classical Test Theory” – which is about as bizarre as it would be if semiconductor engineers insisted on ignoring quantum mechanics.  (Incidentally, I had thought “psychometrics” was the name for people who design and use these tests, but apparently it’s the name for people who theorize about them.)

The fundamental thing that frustrates Borsboom is that people will start out by designing a test in an ad hoc intuitive way, without any causal/scientific hypothesis of what actual variable they want to look for – and then they’ll take some numbers that come out of the statistics of the test results, reify them, and act like they’re causal factors, without ever doing a scientific test of whether this causal structure is actually there.  (Simple example: is “general intelligence” discrete or continuous?  You’re going to answer “continuous,” but it’s not like someone proposed and tested “g is continuous” as a scientific hypothesis – instead, if you’re being lazy, it’s easiest to treat the numbers that come out of factor analysis as continuous, so people just did that)
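As a minimal synthetic sketch of that last point (not Borsboom’s own example; the data below are simulated): the “g” score that comes out of a factor analysis is a weighted sum of subtest scores, so it is continuous by construction, whatever the underlying trait is actually like.

```python
import numpy as np

# Simulated subtest data: a continuous latent ability plus noise.
rng = np.random.default_rng(42)
n_people, n_subtests = 500, 6
latent = rng.normal(size=n_people)                 # continuous by fiat
loadings = rng.uniform(0.5, 0.9, size=n_subtests)
scores = np.outer(latent, loadings) + rng.normal(scale=0.5, size=(n_people, n_subtests))

# Extract the first factor of the correlation matrix (a crude stand-in for
# the usual factor-analytic "g"): it is just a weighted sum of the subtests.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
g_estimate = scores @ eigvecs[:, -1]               # one real number per person

# Nothing in this pipeline could ever produce a discrete score: "continuous"
# falls out of the method; it was never a hypothesis that got tested.
print(g_estimate[:3])
```

The same pipeline would emit continuous scores even if the true latent variable were binary, which is exactly the reification worry.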

There is one thing that the Classical Test Theory people do to test whether their reified numbers at least could be causal factors, and that’s Confirmatory Factor Analysis.  But passing CFA is just a necessary condition there, not a sufficient one, and they don’t actually seem to care about the results anyway; the Big Five personality traits fail CFA, and no one minds.

(Incidentally, this is something I would really like to see addressed in detail by people who like to talk about the Big Five.  @slatestarscratchpad?)

And even if you get something that passes CFA – still, “hey, I pulled a test out of my ass, and then made up a variable based on it, and the variable doesn’t fail to have some property that real causal factors have, so let’s pretend it’s a causal factor and correlate it with your ZIP code and your mom’s hair color” is still Not Science.

Make a (causal) hypothesis.  Figure out what it predicts.  Find predictions that distinguish it from other alternative hypotheses.  Test those predictions.  This isn’t that hard, guys.


Found via Borsboom: it looks like Susan Embretson has done a lot of interesting work (e.g. in this paper) on designing tests that are specifically based on models from cognitive psychology, which make predictions about how people will solve the test problems.  (If your reaction to this is “wait, isn’t that the obvious thing to do?  you mean they aren’t doing that?” … oh, you sweet summer child.)


One obvious hypothesis that comes to mind about the Similarities subtest is that it’s really testing for education level (“how long have you spent in environments that valued abstraction for its own sake?”), and that the g-loading of the subtest is a result of the fact that g causes educational achievement, without much, or any, direct effect of g on Similarities performance.  (An extreme case of this kind of thing would be a “subtest” where you just ask people “what is the highest level of education you’ve gotten to?”)

Searching for tests of this hypothesis turned up frustratingly little.  (Let me know if you’ve found anything.)  But I did find one interesting paper: Colom, Abad et al., “Education, Wechsler’s Full Scale IQ, and g” (Intelligence, 2002), which claims that the differences in IQ (as assessed by the WAIS-III) between groups of different educational levels are not reflective of differences in g.

They used Arthur Jensen’s “method of correlated vectors,” which in this case meant looking at the correlation between “how g-loaded is this subtest?” and “how much difference between educational levels was there on this subtest?”  If the educational-levels difference was due to a g difference, we’d expect these to be correlated.  But they aren’t.  (For more detail, see Tables 4 and 5 in the paper, and the surrounding text.)


Colom, Abad et al. 2002 has been cited a bunch, but most of the citing articles aren’t very relevant to my interests.  One citing article, however, is a critique of the method of correlated vectors which directly attacks Colom, Abad et al. 2002: Ashton and Lee, “Problems with the method of correlated vectors” (Intelligence, 2005).  I was only able to get this one through my university library, so I can’t link you to it, but basically they take the g-loadings and educational differences from Colom, Abad et al. and then show how, even with a substantial g difference, the near-zero correlation could be produced by supposing a certain pattern of differences in non-g variables.  These differences aren’t large, and for some subtests are near zero.  On the other hand, if you suppose the g difference is actually zero, you need to propose that there’s a large difference not due to g in every single subtest.

The basic idea here (it seems to me) is that the output of the “method of correlated vectors” (MCV) doesn’t depend at all on the mean size of the g-loadings, just on their variability around that mean.  Hence, even if you have a huge difference due almost entirely to g, the MCV can give you no correlation if tiny little differences in g-loadings are cancelled out by opposite tiny little differences in non-g factors.  Of course, this works the other way around: even with no g difference, you can get a big correlation if your tiny little differences in non-g factors happen to line up well with the tiny little differences in g-loadings.  So the MCV lacks both sensitivity and specificity.

(This is all made a bit less irrelevantly academic by the fact that Jensen invented the MCV to test something called “Spearman’s hypothesis,” which, well, look it up.)

All this MCV stuff would, I think, stop being a headache if people actually proposed hypotheses about non-g factors and tested them.  (Colom, Abad et al. don’t have a model of how education might work as a factor.  If they did, everything would be clearer.)  Which I guess gets us back to where we started, with Borsboom.

I don’t like Big Five, I grudgingly tolerate it because it’s the only game in town.

Jonah Sinick knows a lot about this and has been looking into the Big Five recently.  If you don’t already have his email, I can give it to you; he would probably know more.

Isn’t the ability of tiny things to cancel each other out kind of universal? If a medication shows no effect on mortality, we can’t be sure that it didn’t successfully treat cancer but also cause a corresponding increase in heart attack risk. Or are you talking about something more than that?

Sinick email would be cool, yeah.

The issue here isn’t just “small things can cancel out.”  I guess a contrived medical analogy might be: there’s a drug that can help with all symptoms of cancer.  But it happens to be slightly better at helping with less common symptoms of cancer than with more common ones.  (Like, idk, 85% of patients see a change in [slightly less common symptom] but only 70% see a change in [slightly more common symptom].)  And this holds in individual patients too, where it’s common for a patient to see more improvement in their less common symptoms than in their more common ones.

And now someone comes along and says “clearly this drug isn’t doing anything to cancer.  If you give someone a single number defining ‘how bad their cancer is’ [analogous to g], making their cancer better would mean decreasing symptoms in proportion to how much they’re associated with your cancer score.  More common symptoms are more common because they’re more highly correlated with your cancer score; decreasing cancer score should decrease those the most.  But this drug doesn’t follow that pattern at all.  Therefore, it doesn’t affect cancer at all, but just does some mysterious other thing(s) that happen to help with cancer symptoms.”

And then someone else (Ashton and Lee above) says “you’re focusing too much on the little differences in symptom rates [analogous to g-loadings], and not enough on the fact that this thing is making all the cancer symptoms better.  This is all consistent with the hypothesis that it helps a lot with cancer, and just has a side effect profile that happens to cancel out the little differences in symptom rates.  But the side effects, and the symptom rate differences, might be dwarfed by the overall tendency to reduce all the cancer symptoms.”

(In the IQ case we had “zero correlation” rather than “negative correlation,” but the latter was easier to describe in words and the principle is the same.  Also, in the above I’m just using “cancer” as a generic disease that we want new treatments for.)

(via slatestarscratchpad)