
dndnrsn:

nostalgebraist:

Re @veronicastraszh‘s interesting post here

I tend to imagine the origins of “tech geek reaction” lying in the “nothing is actually normal” attitude that you see in, say, hacker culture – when you encounter a system, you don’t take it as a given, you think about how it works and try to find vulnerabilities, or things that can be gamed in fun ways, or just to understand its design relative to other conceivable designs.

It’s perhaps more obvious for this to lead to tech geek progressivism, and indeed it often has.  Tech geek culture grew out of the 60s counterculture.  The first actual communist I ever knew, back in high school, was a very smart hacker.  “Why this particular design (of society)?  Is this good engineering?  Could I improve on it?”

But one version of this question you could ask is, “why have we changed our designs (of society) so much in the last few hundred years?  The old designs lasted for a really long time.  What exactly was wrong with them?”  There are standard answers to this question, but to some geeks those answers seem too complacent, too much like “I use Windows because everyone uses Windows.”

I can definitely see the family resemblance between Moldbug and Yudkowsky, Paul Graham, Hofstadter, et al.  It is definitely a cluster.  Naively, though, it seems weird to me that this would be characteristic of people from STEM backgrounds – like, notably, most of these people never write down any actual numbers, or even any mathematical or other formal models.  I look at any economics or finance paper, I see numbers and equations; I look at any Paul Graham essay, I see none.  Yudkowsky seems remarkably uninterested in the actual practice of statistical learning and decision theory.  Moldbug may be the most extreme: he refuses to look at actual statistics about the past or do any number-crunching, and indeed insists on only reading primary sources.  He sounds like a history professor from some engineering major’s nightmare (which may be the intent, for all I know).

But the “existing systems are arbitrary, think about their design and what’s wrong with it, seek out smart people with alternatives” hacker mindset is the common thread here, I think.  For each of these writers, who casually dismiss whole libraries worth of mainstream ideas in favor of their own, I think it helps greatly to make sense of things if you think of the mainstream ideas as “Windows.”  In the tech geek reactionary’s case, modernity is “Windows.”


There we get close to a paradox.  Reactionaries sometimes mention “Chesterton’s fence” as a reason to question modernity: don’t change something until you understand why it was there in the first place.  But this is kind of the opposite of the hacker mindset referred to above, which is more like “change something unless you can think of a reason it should be there.”  Can you really apply this backwards – so that you return to the past, simply because the present is arbitrary?  At what point does this become “I have all these objections to how Linux is set up, so I’m reinstalling Windows”?


(I very, very much have the “nothing is actually normal” feeling myself.  But I also do not like broad strokes.  Everything is very, very complicated, and “good design” for social structures is very elusive and very tricky.)

This jumped out at me: “He sounds like a history professor from some engineering major’s nightmare (which may be the intent, for all I know).”

He’s more like an undergraduate from a history professor’s nightmare. A lot of good historians love hard numbers, and relying too heavily on primary sources is bad – especially relying only on primary sources that support one opinion. You need lots of primary sources, statistics if you can get them, and secondary sources to see what other historians have written.

Moldbug just finds some book from 1843 saying such-and-such historical figures that high school students get taught were great were actually scoundrels, and concludes they were scoundrels, and that telling 16-year-olds they were heroes is part of a campaign of Whig brainwashing.

Whereas, legitimate historians have probably argued for decades, based on all available sources, about the hero-to-scoundrel balance of the folks in question.

Or, historian looks at early 1xth century Europe. They come up with: “parish records and tax rolls suggest a small spike in deaths and a small drop in tax revenues, and analysis of tree rings shows a very slightly lower temperature, but the diaries of nobility tend to say all was well. Probably there were some minor famines, but not serious enough to upset the nobility”. This is how real, trained historians do this. There’s more to history than memorizing what happened in the past - there’s a method for handling these things.

This is a better approach than “see, these diaries by nobles say everything was great, clearly the past was better when we had nobles, because we had nobles”, which is Moldbug’s approach.

I agree.  This probably wasn’t clear, but I was trying to say that Moldbug seems like the kind of history professor an engineering major would find especially hard to deal with (or would perhaps dream up in a literal nightmare), not that he’s like a typical history professor.

I was sort of picturing one of those dinosaurs in a classics department who doesn’t see why the rest of the department is learning about ordinary Roman life from inscriptions found in recent archeological digs when they could be re-reading Tacitus and Sallust yet again, since if you just read the “right authors” you will learn everything about ancient Rome anyone needs to know.  Moldbug isn’t quite like that, but there’s a resemblance.

(via dndnrsn)

redantsunderneath:

nostalgebraist:

brazenautomaton:

nostalgebraist:

The book Bad Monkeys was so promising for the first 2/3 and then the last third was just so … I was going to say “I don’t know how to describe how bad it was” but that’s too harsh, it’s more like it has a type of badness I never knew existed before

I can’t think of any analogy that really works, but it’s sort of like what would happen if someone tried to come up with “the coolest story ever” by writing down a list of every fiction trope they liked, and then just used the giant list of tropes as the outline for their story, without filling in anything else, so the good guys are literally named “the Good Guys” and thematic tropes like “determining someone’s true nature” become “now I will use a True Nature Detector on you”

And as the book goes on, the author includes more and more tropes at an increasing rate, ending in a sort of omega point that is almost literally this

how had I never seen this sketch before

also I went to have the book spoiled and all them goodread…ers said things about the ending falling apart but didn’t go into specifics

since I won’t read this book, if you feel like going into detail as to just w the f, I would be appreciative

Actually, having just read it, that would be kind of therapeutic

Complete spoilers under the cut, obviously


Did you read Lovecraft Country? 


Completely coincidentally, this other Matt Ruff novel was the last book I read.  I liked it, with reservations, but what I found interesting enough to decide to read Bad Monkeys next was that there were some similarities between your description and my read of LC despite the fact that the books, on a surface level, couldn’t be more different.  Your description has BM as a “director with peculiar preoccupations” (think Ryan Kelly/the Wachowskis) action movie version of California drug era paranoia fiction, a la PKD, RAW, or Pynchon.  LC seemed at first like HPL with the racial lens inverted, asking what it is like to be the one thought of as a monster and bringer of corruption under siege by powerful men full of fear.  So, cool enough on both counts, but way different in tone (LC is relatively sedate) and subject matter.

But as LC went on, more and more stuff built up and the overstuffed ambitions became more obvious. First was the realization that it had the structure of a novel-of-short-stories, where each of the 8 or so stories had a different POV character and came to a conclusion, but the characters were all in multiple stories tying them together and telling a larger story.  Then came the realization that each story was actually a different kind of story, even if all were some type that you would have found in a genre anthology when those were big (first one’s Lovecraftian, but then we get ghost story, negative zone-dimension exploration, body swap, portal to a strange world, secret order playing with larger forces, evil imp, Brigadoon of lost souls/emotional vampires), but all trying to find a “race in the civil rights era” angle.  Bradbury was a bigger influence than HPL.  

But at a smaller level, he just kept cramming stuff in there: recognizable genre obsessions (special powers run off of comatose people, alien beach of death, and on), references to specific stories (many HPL stories from Innsmouth to Reanimator, the Dark Tower books, John Carter, etc.), and heavily researched subject areas (the safe negro travel guide culture, board games of the 50s, sci fi magazines, other civil rights era stuff, comic books, period Christmas decorations, the origins of the Coke Pepsi wars…) with lots of detail.  By the end it felt like Lost – too much cultural reference to make work entirely, but not lacking in chutzpah.

But the structure lacks conviction a bit – short story novels usually tell an overarching story that is more character development and theme oriented, with some of your engagement coming from non-linear time progression (we see somebody young after we have seen them old) and containing transitions that pivot on non-time story elements (an emotional congruence, an open question, an image).  LC is time linear, ordered around an actual progressing plot with no effort to coordinate the transitions.  It’s something other than a multiple POV novel (like Game of Thrones) only because the stories have names and have individual conclusions, but not a true short story collection because it has the same pool of characters, a master plot, and overriding conceits (like the anthology template, and shared themes including “protection” as conferred by status, the ability for social power to override will, and otherness).

This winds up like a Boolean story idea (“Die Hard in the White House!”) taken to an elaborate extreme (“a 50s period genre anthology with each story, as a different subgenre, telling a linear chunk of a longer story from the shifting POV of the main characters, focusing on racial power relationships of the time from the side of the ‘other,’ containing excruciatingly researched detail and referencing a whole bunch of stuff from Stephen King to product branding”). Not a half-bad idea, but not exactly subtle.  This is similar to your description of Bad Monkeys in that BM takes a different subcultural literature (California weird drug era fiction instead of golden age fandom fiction) and ambitiously tries to cram stuff in using structure, plot, tone, and themes implied by the source.  But the failure modes are different as the failure modes of the fiction types being strip mined are different.

LC’s weaknesses are due to a kind of coldness or distance that keeps the material from being fully present enough to completely synthesize the over-leveraged elements. The endings of the substories feel too pat, leaving stuff on the table, and don’t really impart a catharsis no matter how big the events. This is consistent, to me, with what those stories being referenced feel like, but that mode makes the larger ambitious enterprise feel not “pulled together” enough to be 100% satisfying.  It seems by your description that BM’s weaknesses (I’ll update once I’ve read) are also those of the source material – the chaos and the paranoia of the material make the whole thing fly apart into incoherence.  

One other note. Lovecraft Country made me realize how much comic books and genre mags have influenced the current storytelling not only in the mass market (franchise movies, series books, television drama, as is obvious) but in more high end literature – the structure of a short story novel reminds me, upon LC related contemplation, most of the experience of getting into comics and shared universe sci fi, where you get chunks of stuff that is very different, read out of order, but are threaded together with individual connections and references into something that you have to piece together, and how you do it means the “gaps” make the whole thing come alive.  So the correct reference point for Visit from the Goon Squad might not be Cheever, but Marvel’s Bring on the Bad Guys or Larry Niven’s Tales of Known Space.  If you liked a lot of Bad Monkeys and you have any interest in 50’s anthology genre fiction, I think you’ll dig Lovecraft Country.

This is all very interesting!

I don’t think I’ll read Lovecraft Country – I think Matt Ruff is actually a skilled writer in a lot of ways, which is why I held out hope for Bad Monkeys as long as I did, but that makes it all the more frustrating to see him fail.  I’d definitely read a Matt Ruff book that was pitched to me as “he actually just got it right this time.”

I don’t think the failings of BM are quite the failings of the source material, though.  In the source genre you identify – in many PKD books, in some core threads in Illuminatus!, in Crying of Lot 49 – we start out with an almost overly bland “sensible adult” viewpoint character, who then “goes down the rabbit hole” and becomes paranoid as a result of apparent evidence (whether real or imagined) that they really are embroiled in something weird.  There’s always some incredulity about each step of the weirdness, at least until the end.  Even the PKD characters who start out with weird beliefs, like Horselover Fat, have an awareness that their beliefs are weird.

The first 25% of BM looks like it is setting this up – Jane is established as a more-or-less sensible person who’s not unusually credulous toward weird conspiracy shit, and she’s incredulous towards the very first stages of the weirdness, like the hints in crossword puzzles.  However, once the Organization reveals itself, she doesn’t question its nature or structure even though it makes no sense (the all-powerful conspiracy that hardly does anything with its power?) and never questions the simplistic good/evil morality.

Unlike in the source genre, we are told very directly that this seemingly sensible narrator is actually unreliable, and so it seems like this new credulity is going to be explained that way.  Instead, not only does that not happen, but the most unbelievable aspects of the setting are intensified more and more in the second half, with no voices of incredulity anywhere by the end.  Thus it entirely lacks the “but can all of this truly be real? I mean, seriously?” aspect that is a key part of classic paranoia fiction.

I think if PKD wrote BM, not only would there be incredulity throughout, but the unbelievable aspects would turn out to have their own explanations, if perhaps even weirder ones – say, the good/evil morality arises because we’re really witnessing a conflict between angels and demons (whose “inherent” goodness/evilness is actually quite alien to human thinking), or something.

(via redantsunderneath)

jadagul:

slatestarscratchpad:

nostalgebraist:

slatestarscratchpad:

nostalgebraist:

Replies to this post

@princess-stargirl

The questioner is just talking nonsense. He is assuming, seemingly for no reason, that you know that AC=>BC. If he does not make this false, completely unwarranted assumption, nothing strange happens at all.

(also it is, of course, wrong to assume any individual person will assign consistent probabilities to events. By “consistent” I mean consistent with what the person knows not what some aliens know. )

@unknought

I’m not really a Bayesian, but the obvious response is that learning that AC implies BC should cause you to update the probability of AC downwards and the probability of BC upwards.

Knowing that a statement is a named conjecture tells you something about how likely it is to be true. Knowing that it’s a named conjecture with a stronger form which is also a named conjecture tells you more.
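The update @unknought describes can be made concrete. A minimal sketch, with invented priors, treating AC and BC as independent to start (purely for illustration) and modeling “learning AC=>BC” as conditioning on the event “(not AC) or BC”:

```python
def update_on_implication(p, q):
    """Condition priors p = P(AC), q = P(BC) on learning 'AC implies BC'.

    Assumes AC and BC were independent beforehand -- an illustrative
    assumption, not anything claimed in the post.
    """
    # Mass consistent with the implication: everything except the
    # worlds where AC holds and BC fails.
    z = 1.0 - p * (1.0 - q)
    new_p = p * q / z   # AC can now only hold alongside BC
    new_q = q / z       # BC alone already makes the implication true
    return new_p, new_q

# With even priors of 0.5 each, P(AC) drops to 1/3 and P(BC) rises to 2/3:
new_p, new_q = update_on_implication(0.5, 0.5)
```

So the conditioning really does push P(AC) down and P(BC) up, and afterwards P(AC) <= P(BC) automatically, as the implication demands.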

The “questioner” framework was just a story.  The real situations I am thinking about here are ones in which there is no one around to tell you that AC=>BC.  Either you simply don’t know it, or you could figure it out but may not currently realize it, or know it but just don’t have it in mind.

I think I was actually clearer in my earlier post on this subject, so maybe I should have just found and linked that.  Or maybe I should rewrite that one.

I just tried to type a clear explanation of my point and failed and deleted it, which is frustrating because it’s clear in my head.  I think I’m just too tired.

But my point is something like: if you only have a hazy sense of the area you’re talking about, you’ll be tempted to give most events a sort of “reasonable, conservative” probability estimate, if someone asks you about them or you happen to ask yourself.  But since there is all this implication structure (or just non-independence) among the various events, you’re actually implying a whole bunch of nontrivial things by doing this.  You may not know about this structure, or only know about some of it, or know about a lot of it but not be able to hold it all in your head at the same time (quick, name every necessary condition for “cupcakes are still being sold in 2050″).

This seems wrong to me. Maybe I am misunderstanding it. Let me try to get my head around it and explain where I’m coming from.

Suppose that Omega comes to Alice and says: “Here is a coin that I have biased to either always come up heads or always come up tails. I will not tell you which. But I will tell you this. I know every truth about human history, and I have arranged for this coin to be biased heads if 9-11 was truly an inside job, and tails otherwise.”

Maybe Alice believes there was only a 0.1% chance that 9-11 was an inside job, so she believes 99.9% chance a flip will land tails, and 0.1% chance it will land heads.

Now Alice goes to Bob, who knows nothing about any of this, and says “What’s your probability distribution over how this coin lands when I flip it in a second?” Bob says “Fifty-fifty heads/tails, obviously.”

Alice says “Aha, so you think there’s an equal chance that 9-11 was or wasn’t an inside job? You’re pretty dumb.”

Obviously this is unfair to Bob. But equally obviously, Bob did exactly the right thing, from his position, to say that the coin flip odds were 50-50.

But I feel like you’re doing the same thing. Confronting the human for having beliefs that implied surprising facts about the relationship between the alien conjectures, makes no more sense than confronting Bob for having beliefs that imply surprising facts about the US government’s relationship with terrorism. But that means your argument proves too much - it suggests we can’t even assign a probability of 50% to a coin toss.

It seems like the example I gave was really bad for communicating what I actually wanted to communicate.

The alien conjectures are meant to be an extreme case of “a situation where we have incomplete information about the dependence relationships between different events.”  (In the example, we had no information.)  The example is meant to distill a phenomenon that happens in other, less contrived situations.

I’m too tired right now to come up with a good single example, but the prototype case I have in mind is relatively ordinary statements about the future, like “cupcakes are still being sold in 2050.”  This seems pretty likely, and at first glance I’d just give it a probability I associate with “pretty likely.”  But then you can make various more specific statements, involving people in 2050 doing things with cupcakes they’ve bought, which also seem “pretty likely,” but technically require the first statement and should have lower probability than it, unless they absolutely must happen if cupcakes are still sold.

All these statements are sort of “hidden conjunctions,” which depend on all sorts of prerequisites, many of which may not come to mind directly when thinking about the statement.  When everything’s a conjunction of things you’re very unsure about, which are themselves conjunctions of things you’re very unsure about, etc., it becomes hard to keep the probabilities ordered in a way that respects this structure.
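The hidden-conjunction arithmetic is easy to sketch with invented numbers: give the general claim the flat “pretty likely” value, then multiply in a couple of extra prerequisites that each also feel “pretty likely” (treating them as independent, just for illustration):

```python
general = 0.9              # P("cupcakes are still being sold in 2050"), felt value
extra_conditions = [0.9, 0.9]  # invented further prerequisites of a more
                               # specific cupcake claim, each also felt as 0.9

coherent_specific = general
for c in extra_conditions:
    coherent_specific *= c  # independence assumed purely for illustration

# The specific claim still *feels* "pretty likely," but coherence caps it
# near 0.73, and every further hidden prerequisite pushes it lower.
```

The felt estimate for the specific statement (0.9) and the coherent one (about 0.73) come apart after just two hidden prerequisites.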

Are you just saying most people are bad and inconsistent in their probability assignments and commit the conjunction fallacy? Such that someone might assign 20% to “Linda is a feminist bank teller” but only 10% to “Linda is a bank teller” which would be absurd? If so I agree but I think that’s an argument that humans are bad at this, not that it’s not a good theoretical framework.

I think the point is that without a fairly detailed grasp of the situation, there’s no way your credence assignments aren’t going to lead to all sorts of conjunction fallacies. Nostalgebraist is trying to give some examples where this is really clear, but that winds up making the examples less convincing.

I think a better example is the statement: “California will (still) be a US state in 2100.” Where if you make me give a probability I’ll say something like “Almost definitely! But I guess it’s possible it won’t. So I dunno, 98%?”

But if you’d asked me to rate the statement “The US will still exist in 2100”, I’d probably say something like “Almost definitely! But I guess it’s possible it won’t. So I dunno, 98%?”

And of course that precludes the possibility that the US will exist but not include California in 2100.

And for any one example you could point to this as an example of “humans being bad at this”. But the point is that if you don’t have a good sense of the list of possibilities, there’s no way you’ll avoid systematically making those sorts of errors.

Consider the following list of statements: 1) in 2100, the US will exist. 2) In 2100, the US will contain states. 3) In 2100, the US will contain states west of the Mississippi. 4) In 2100, the US will contain states west of the Rockies. 5) In 2100, the US will contain California.

In my judgment, all of those statements are “almost certainly true.” And there’s content to that, as a matter of “giving credence to propositions about the future.” But if you want me to assign “probabilities” then you want me to assign numbers to all of those statements in a way that’s consistent across all those statements. And there’s no possible way to do that unless you have a list of all the possible propositions.

Try it. And then ask what you think the probability is that in 2100, the US contains any states bordering the Pacific.
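One way to “try it” is to write down an explicit outcome space (entirely made up here, worlds and weights alike) and read every probability off the same distribution; the orderings then come for free:

```python
# Invented toy space of 2100 outcomes -- the point is the method, not the numbers.
worlds = {
    "US exists with all current states":       0.90,
    "US exists but without California":        0.03,
    "US exists, east of the Mississippi only": 0.03,
    "US no longer exists":                     0.04,
}

def prob(pred):
    """Sum the weight of every world satisfying the predicate."""
    return sum(w for name, w in worlds.items() if pred(name))

p_us_exists = prob(lambda n: "no longer" not in n)          # 0.96
p_has_ca    = prob(lambda n: n == "US exists with all current states")  # 0.90
# In this toy space only that same world keeps any Pacific coast:
p_has_pacific = p_has_ca

# Reading everything off one distribution makes the nesting automatic:
assert p_has_ca <= p_has_pacific <= p_us_exists
```

Quoting “98%” for each statement independently, by contrast, has nothing forcing those inequalities to hold.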

Thank you – this is what I meant, and your examples are better.  (When I’m less busy/tired I’ll see whether I can come up with more examples.)

(There’s another point I want to make, but I’m still not able to make it clear to my satisfaction.  To sketch it: if you try to be “clever” and avoid this by listing lots of propositions like this and inserting little probability gaps, these little gaps may add up to a non-trivial gap between the most general statement and the least general, which you have no justification for except “I wanted to avoid making this mistake.”  The only real cure is having an actual picture of the different 2100s that are possible and querying them each in turn, i.e. “having a detailed knowledge of the situation”)
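The gap-accumulation worry in that sketch is just arithmetic. With invented numbers, shaving a small unjustified gap off the credence at each nesting step:

```python
eps, k = 0.005, 10          # invented: a tiny gap, inserted at ten steps
top = 0.98                  # credence in the most general statement
bottom = top - eps * k      # credence left for the most specific one

# Ten "little" gaps of 0.005 quietly open a 0.05 gulf (0.98 -> 0.93)
# between the general and specific statements -- a gap justified by
# nothing except wanting to dodge the conjunction mistake.
```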

(via jadagul)



lisp-case-is-why-it-failed:

nostalgebraist:

michaelblume:

lisp-case-is-why-it-failed:

michaelblume:

…am I just too Bayesian to understand why this is supposed to be weird?

Yes. Imagine if you believed in the frequentist or propensity definitions of probability. Almost all of these questions are nonsensical then (although you should be able to handle the aliens question with propensities).

Alternatively: most of these questions are about the macro structure of the universe. How do you answer such questions without some kind of universal prior?

Ok but then how do you have beliefs of any kind about reality, or ever bet?

My opinions about things like this sometimes change, and are sometimes vague/confused, and I’ve rambled a lot about it

But my current opinion is something like:

I don’t have a prior with support over every conceivable outcome (using “outcome” in the broadest sense, so that things like those in the screenshot apply).  I don’t think anyone actually does.

What we have is more like a mental function we can query that outputs “how likely does this feel to me?”  We can, if we wish, try to translate these feelings into numbers in [0,1].  But calling these numbers “probabilities” is inappropriate in most cases, because the mental function isn’t consulting some underlying distribution obeying the probability axioms, except in toy problems like rolling fair dice (where the function will, so to speak, call another function that actually does the math).

In particular, the mental function generally doesn’t even use a consistent outcome space, and e.g. if A and B are both things I “have no idea about” it will also tell me that I equally “have no idea about” the event A&B.  (It makes the conjunction fallacy because it has no picture of outcome space with regions that could be labeled “A,” “B” and “A&B.”)
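That contrast can be sketched directly, with all values invented: a distribution over an actual outcome space is forced by the axioms to discount conjunctions, while the “oh god who even knows” function is not:

```python
def felt_credence(event):
    """Caricature of the mental function: the same shrug for everything,
    including conjunctions of things it shrugged at."""
    return 0.5

# A genuine joint distribution over two events A and B (invented weights):
weights = {
    (True, True): 0.2, (True, False): 0.3,
    (False, True): 0.1, (False, False): 0.4,
}
p_a  = sum(w for (a, b), w in weights.items() if a)   # 0.5
p_b  = sum(w for (a, b), w in weights.items() if b)   # 0.3
p_ab = weights[(True, True)]                          # 0.2

assert p_ab <= min(p_a, p_b)                          # axioms force the discount
assert felt_credence("A & B") == felt_credence("A")   # the shrug does not
```

Any numbers you read off an actual outcome space satisfy P(A&B) <= min(P(A), P(B)) by construction; the shrug function, having no outcome space behind it, can’t even see the constraint.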

How does this relate to betting?  Well, I’m wary of making bets by using credences which generally will not (except by coincidence) obey the probability axioms, for the usual reasons.  (I don’t exactly mean “Dutch books” because I think that issue is a bit different from how it’s usually presented, but I think that human biases make it easy to get tricked into bad bets and incoherent credences only make it easier.)

One could then object that surely I’d take some sufficiently skew bets on any given question.  Would I really turn down a bet that costs me $1 if MWI is false and pays me $1 billion if MWI is true?  And couldn’t you back out a “revealed probability” from this?   I talk about this issue here – the upshot is that while I might take such a bet, this has nothing to do with the specific concept I’m being asked about, but is simply an instance of my generic, default betting behavior in response to questions where my mental function outputs “oh god who even knows.”

Even then, I’d probably reject all such bets in real life.  Partially because I’d be suspicious about why the other side is willing to offer them, but more fundamentally, because I try to do things that are designed for actual probabilities – like expected value calculations – only when I feel like my credences come from some actual knowledge about the underlying outcome space.

That is, I won’t pay a Pascal’s Mugger, not because I “believe there is probability zero that the mugger has magical powers,” but because I don’t have any informed breakdown of how the world could be, such that some parts of it are labelled “these magical powers are possible” – specifically, a breakdown I would have been able to give you before I ever encountered the mugger.  I file “the mugger has magical powers” under “hey, anything’s possible,” rather than under “I have a number of theories of how the world might work, and under this subset of them, the mugger could have magical powers.”

To sum up, I make decisions in various ways, and use ways that approximate EV maximization when I think I’m in the sort of domain where I can construct something like a probability distribution on a well-defined outcome space.  I think it’d be actively irrational, or at least totally without rational justification, to use that sort of technique when I can’t do this even approximately.

This includes cases like “MWI is more or less correct” and “God exists.”  I do have opinions about these questions – my sense is that MWI would at least need substantial revision to be correct, and that God almost certainly doesn’t exist.  But I have nothing like a probability space associated with these questions.  (For instance, it’s conceivable that the problem of evil is correctly resolved by “it’s all God’s plan and was all a good idea for some reason beyond our current understanding,” but I don’t have a picture of all the ways in which this could be true nor any sense about “how likely” it is for “a typical universe” to be configured in any one of these ways; again, for me this falls under “hey, anything’s possible.”)  Thus, I can’t provide numbers that I could justly call “probabilities.”

ETA: I don’t personally frame this “frequentist defn. vs. Bayesian defn. of probability,” but rather as a distinction between beliefs about how to do inference correctly in real life.  It’s not that I think “degrees of belief” can’t be probabilities by definition, but rather that treating my degrees of belief like probabilities in all cases would be bad practice.

Longish response:

Keep reading

(Responding to the most recent reply, the one under a cut)

I worry this will sound arrogant or hostile, but I’ve been reading/thinking/talking about these issues for a long time, so I don’t think I’m just making some basic misunderstanding about what Bayesians mean by certain terms.  Relatedly, most of the issues you raise are things I’ve talked about on tumblr before at some point – see my Bayes tag (which I realize is long and disorganized, I just don’t want to repeat myself).

A few points (again, there is more in the tag):

I understand that “degrees of belief as coherent probabilities” is an ideal for rational agents rather than a description of human psychology.  The practical question is then “what should I do, given that I have degrees of belief that don’t work like probabilities?”  For instance, should I still do expected utility calculations (pretending my degrees of belief are probabilities)?  In all cases, or only in some?

In some cases our “failures of coherence” are just due to simple mistakes that can actually be patched in practice, with stuff like “don’t neglect the base rate.”  In other cases it has the much deeper cause that we don’t know what the outcome space looks like, so we can’t put a distribution over it, even a flat one.  (One consequence is that it is basically impossible to deal with conjunctions sensibly in these cases – I made some posts with more detail about this a while back.)

Since we are so very far from being coherent rational agents, it’s not clear that behaving more like those agents in any single, particular way will be good rather than bad for us.  In optimization terms, the ideal is far enough away that it doesn’t tell us much about the local gradient, so to speak.  I think the use of the word “probability” in things like the OP picture comes from a belief that in fact we are sufficiently close to the ideal that “moving towards the ideal” approximates “moving along the gradient,” i.e. “these aren’t probabilities, but they’re sort of close to being probabilities and we rational folks are trying to make them even closer.”

Incidentally, I think the Dutch book argument for coherence has serious problems, although there are other arguments for the same conclusion.

(via just-evo-now)

Amateur Sociology Considered Harmful →

veronicastraszh:

davidsevera:

ozymandias271:

nostalgebraist:

marcusseldon:

veronicastraszh:

Oof!

I think Ozy hit this one out of the park.

On the one hand, I’m really tempted to agree with Ozy, and to a degree I do.

On the other hand, I agree with stargirlprincess that humans are inherently political and social, and that to an extent we can’t avoid doing sociology and having ideologies. 

I guess we need to do amateur sociology, but also try not to take it too seriously either and be open to changing our minds or to particular situations differing from our sociology.

Copy/pasting the comment I posted:

I think there is a difference between “grand theories which are supposed to explain every case” and “general mechanisms which may be at work in any given” case, and I think grouping both of these together under “too meta, not epistemically careful enough” is a mistake.

I’m confronted with many, many different sorts of situations in my life. I find it invaluable to be able to apply things I have learned from previous experiences to new ones, even if the new experiences are different in various ways. How do I deal with these differences? By developing generalizations that work as a toolbox rather than a template. That is, I don’t just abstract some single idea of “how friendships work” from all of my friendships so far and expect that every future friendship will work in this way. Rather, I notice a variety of “dynamics” or “mechanisms” that happen sometimes in some friendships — without necessarily happening all the time in all friendships — and then use them as a toolbox when thinking about new friendships. I can notice when a familiar mechanism seems to be at work, while not expecting that mechanism to come up every time.

The posts you’re criticizing here may go too far in claiming that the described mechanisms apply in every case (or in most cases, or in most important cases). But what I get out of them, personally, is just: “Here’s a mechanism. To sketch why it might be worth having in your toolbox, here are some important situations in which it at least plausibly may be at work.” So I don’t buy “toxoplasma of rage” as some general theory of conflicts, but I do sometimes find myself in specific situations where I think “hmm, the ‘toxoplasma of rage’ concept would explain this very well.”

Note two things here which may seem paradoxical, but aren’t. First, when I say “this particular situation looks like ‘toxoplasma of rage’ at work,” I’m able to be much more confident about this than I would be about some general theory of how “toxoplasma of rage” is how everything works. In that sense, “particular” is better than “general.” But second, I would never have been able to have this particular insight if I hadn’t been exposed to the general concept! If I thought of it only as a model for some specific case, rather than an element of my toolbox, I would never be able to use it outside that one case.

I agree with your comment + it prompted me to start drafting a followup which addresses this issue. I think where we disagree might be that I worry more about that (legitimate) use providing inappropriate credence to illegitimate uses?

Like, there’s a really important difference between toxoplasma of rage as theory of conflict and toxoplasma of rage as thing that happens in some conversations, and IME people don’t make that distinction as carefully or cleanly as I would like. Like, I don’t have a problem with it being applied to, oh, why we keep talking about fucking STI, but I am deeply uncomfortable with it being used to describe why everyone is so interested in talking about the Michael Brown shooting. And, notably, the examples in the post are (nearly) all the latter case.

Just speaking personally, I find that the best way to internalize often conflicting insight porn-type stuff is to throw everything from everywhere together into one big intuitive gestalt, sort of like @nostalgebraist‘s toolbox, but perhaps even less formal than that. For me it’s all about getting a sense of the ins and outs of how the world works, and not in a way that will yield many useful predictions. Maybe this means that insight porn hardly needs to refer to the real world at all? I feel like - ideally - what it does is put you in a state of intellectual humility/enlightened agnosticism more than anything else, but perhaps I’m fooling myself that that’s a real benefit. (And admittedly, that’s not often the intention of people who write that stuff and they don’t usually go around framing their work as such.)

This is why I often feel that people who write big takedowns of Notable Thinker X are often, not exactly missing the point because counterarguments and approaching the truth are still important, but missing what the actual use of system building and grand theorizing is. (But again, maybe I’m imagining that there’s a real use.) You can say “Yudkowsky is wrong here, here, and here!” and that’s good, but there’s (perhaps) something else going on too.

Exactly. This is why I personally rather love “the sequences.” When I encounter anti-Yud arguments, I often say, “Yeah that part was obviously silly. I kinda skimmed it to find the good stuff.” When I see “true believers” locking horns with “passionate critics,” it kinda confuses me. I’m like, people take this that seriously?

Even with this perspective, I find it hard to get anything out of most of the Sequences, but I do feel this way about various things that sometimes get slammed for being insufficiently rigorous, like the more grandiose side of SSC.

I guess I think of these things as just being in the tradition of, well, essays.  Writing that’s not really done with academic technical care, but that’s still meant to be convincing in some way.  That’s interested in the details of its case, but also in having fun with language, or inspiring the reader, or making them angry, and all that good stuff.

I think some of the issue at hand is people object to essayistic writing that has a “STEM” aura or talks about technical topics.  Which is fair, but I like essayistic writing in these areas.

Another example is The Secret Sins of Economics, which I have read a zillion times and recommended endlessly.  It’s about a technical subject, and the technical argument it makes could be a lot firmer and clearer, and there are a lot of individual points in there I disagree with or don’t get, but my god, it’s just such a fun, joyous piece of writing, in the way a technical report on the same issue would not be.  But I’ve recommended it to multiple people who’ve told me it was too unclear or scattered or that Deirdre McCloskey is too dramatic or full of herself.  What can I say?  I love that kind of stuff.

(via starlightvero)

Reminder

lostpuntinentofalantis:

nostalgebraist:

lostpuntinentofalantis:

nostalgebraist:

ozymandias271:

fnord888:

ozymandias271:

socialjusticemunchkin:

sonatagreen:

In accordance with the schedule, as of today (Sweetmorn, the 18th of Discord), the official debate topic is now Torture vs. Dust Specks. Please proceed accordingly.

This. Nobody has the neurons to comprehend 3^^^3 properly so all arguments resting on trying to replace it with a comprehensible number are invalid by definition.

every action you take has at least a 1/3^^^3 chance of causing or preventing torture

by extension if you’re a dust specker you should be making all your decisions based on whether or not they have a vanishingly small chance of affecting someone being tortured

Hey now, Pascal’s Wager is next month.

this is NOT pascal’s wager as it is not “small chance of infinite benefit” it is “small chance of (comparatively) small benefit” and is intended to point out that 3^^^3 is REALLY BIG

also is arguing ABOUT the thought experiment torture v. dust specks technically an instance of arguing about torture v. dust specks because I think it continues to be a bad idea to use torture in thought experiments unless the thought experiment is actually about “what if the bad thing???? were justified???? in an extreme circumstance?????” + also that kind of thought experiment is tacky and I hate it

ETA: HEY WAIT next month is “social justice: has it gone too far or not far enough?” NOT pascal’s wager, pascal’s wager has to wait for utilitarianism grab bag with everyone else, I am looking forward to claiming that all instances of bad SJ are in fact instances of insufficient SJ

torture vs. dust specks is controversial because no one agrees about how to do utilitarian aggregation, but without a known aggregation rule utilitarianism has no consequences (or rather, “utilitarianism” just means choosing an ad hoc aggregation rule case-by-case with no underlying theory), so the torture vs. dust specks debate shows that utilitarianism doesn’t (currently) exist

Is there any moral theory which does any sort of aggregation? My impression is that virtue ethics, deontology etc. just explicitly disavow themselves from doing any comparison.

Why rag on utilitarianism specifically?

Aggregation is part of any ordinary definition of utilitarianism (”greatest good for the greatest number,” that sort of thing), so if we don’t know how to aggregate we can’t really be utilitarians.

“I don’t know how to do this, but I don’t have to” is preferable to “I have to do this, but I don’t know how to, so I’ll just slip it under the rug somehow.”

Now I’m especially confused.

One thing people do is that they explicitly invoke moral theories to police what other people do. This is done in the context of “what should a government do?“ or “what should I do in this dispute between other people“ or “this is the best for me, but I shouldn’t do it because of effects X Y and Z on other people.“ As far as I can tell, people use moral theories to guide themselves on what to do in those circumstances.

What is this if not just doing ad-hoc aggregation but just refusing to talk about it at all?

I think this refusal to talk about aggregation has mainly Hansonian roots, e.g. it allows people to covertly defect in situations unfavorable to themselves when it becomes relevant, or it’s used to deflect that a lot of virtue ethics/deontology is mostly about showing off how good of a friend/ally you are. This is deeply appealing to me and I think it’s appropriate to state this unsavory view.

(I’ll note that my moral intuitions probably come from some weird half mutated form of Confusionism Confucianism, which I guess is sort of like utilitarianism made by virtue ethicists)

I’m not saying that aggregation (in the sense of “taking multiple people into consideration”) is bad.

My point is that if you say you have a theory, you need to actually have a theory.

In practice, I probably make decisions in roughly the same way that self-described utilitarians do, using the same sorts of ad hoc aggregation.  I just think calling this “utilitarianism” is misleading.  I have no idea how to determine “utilities,” or how to combine them across multiple people if I knew them (much less how to include the utilities of hypothetical people who could exist but don’t).  If you give any standard description of utilitarianism, where evaluating outcomes means “comparing utilities,” that is not the thing I am actually doing.

I don’t think this kind of ad hoc reasoning is bad – it’s what I do, after all – but it seems strange to say that it is, or is an approximation to, some formal theory that has never actually been constructed.  It’s a bit like, say, deciding that your musical taste is governed by something called the “Abstract Ideal of Musical Taste,” and then just having the same musical taste you always did, but now claiming that it’s not just your taste, it’s what the Abstract Ideal says.

(via lostpuntinentofalantis)


raginrayguns:

lambdaphagy:

nostalgebraist:

nostalgebraist:

vaniver:

nostalgebraist:

So: what’s the deal with Akaike information criterion vs. Bayesian information criterion?  “Information theory” and “Bayesianism” are both things with a lot of very devoted adherents, and here they appear superficially to give different answers.

They correspond to different priors. AIC has a bit better underlying framework (from an information theory point of view) and I believe better empirical validation.

Ah, OK.  I found this paper through Wikipedia, about AIC as Bayesian with a different (better?) prior, which looks good.

BIC has the advantage that it will converge asymptotically to the true model if the true model lies in the set of models being fitted, although it’s disputable how important this is.  And BIC can be derived using a minimum description length approach (can you get AIC this way too?).

One of the things I am wary of here is the sense that “information theory is magic” – e.g. in the paper linked above:

Their celebrated result, called Kullback-Leibler information, is a fundamental quantity in the sciences […] Clearly, the best model loses the least information relative to other models in the set […]

Using AIC, the models are then easily ranked from best to worst based on the empirical data at hand. This is a simple, compelling concept, based on deep theoretical foundations (i.e., entropy, K-L information, and likelihood theory).

Maybe I just don’t understand information theory, but I’m confused why I should care that the K-L divergence is “deep” and “fundamental,” here.  The question at hand is how to select a model based on some sort of estimate of how the model will generalize from the training set.  In practice I hear people justify using things like AIC by saying “well, obviously, you want the most information,” where “most information” is just a verbal tag we’ve associated with the K-L divergence and I’m not sure what mathematical weight I should give to it.  If AIC does well, and this is because it is based on information theory, I would like to understand this in a nonverbal way – what property of K-L divergence made it a good choice here, ignoring suggestive words like “information”?
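
For what it’s worth, the criteria themselves are simple to compute. Here’s a minimal sketch on invented toy data, using the textbook definitions AIC = 2k - 2 ln L and BIC = k ln n - 2 ln L, with Gaussian errors and sigma counted as a parameter:

```python
import math

def aic(log_lik, k):
    # AIC = 2k - 2 ln(L_max)
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    # BIC = k ln(n) - 2 ln(L_max)
    return k * math.log(n) - 2 * log_lik

def gaussian_max_loglik(residuals):
    # Max log-likelihood under Gaussian errors, with sigma^2 at its MLE (RSS/n).
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n
    return -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)

# Invented, roughly-linear toy data.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
ys = [0.1, 2.2, 3.9, 6.1, 8.0, 9.8, 12.1, 14.2]
n = len(xs)

# Model 1: constant mean.  Params: mu, sigma -> k = 2.
mu = sum(ys) / n
ll1 = gaussian_max_loglik([y - mu for y in ys])

# Model 2: least-squares line.  Params: intercept, slope, sigma -> k = 3.
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar
ll2 = gaussian_max_loglik([y - (intercept + slope * x) for x, y in zip(xs, ys)])

# Lower is better for both criteria; BIC just penalizes the extra parameter harder.
print("AIC:", aic(ll1, 2), aic(ll2, 3))
print("BIC:", bic(ll1, 2, n), bic(ll2, 3, n))
```

None of this answers the “why K-L?” question, of course – it just shows that once you accept the formulas, applying them is mechanical.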

Reblogging because I’m really curious about this – I’ve been aware of information theory for a long time but I’ve never been sure how it justified choices like this, and I feel like I must be just missing something major / “obvious.”

@su3su2u1, @lambdaphagy, @raginrayguns, et al.?

Oops, didn’t have a chance to get to this earlier.  Others have already chimed in with sensible responses, but here’s another way to think about it non-verbally, especially if you want to ask “why KL divergence in the first place?” rather than “why AIC?”

KL divergence arises naturally when you ask the question “what does it mean for two distributions in a parametric family to be ‘close’ to one another?”  Take univariate Gaussians parametrized by mu and sigma, and consider each measure as a point in a 2-D parameter space.  Consider some plausible things we’d like to say about this space.  First, for any two measures (mu1, sigma1) and (mu2, sigma2), the distance between them should vary with the difference between mu1 and mu2: the further apart the means are, the “further apart” the distributions are.  But secondly, as sigma1 and sigma2 grow larger, the difference in the means should matter less.  As sigma goes to infinity, the value of the mean washes out and there is really only one Gaussian distribution left, with its density smeared out over the entire real line.
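
Both desiderata can be checked directly against the standard closed form for the KL divergence between univariate Gaussians (the numbers below are invented for illustration):

```python
import math

def kl_gauss(mu1, s1, mu2, s2):
    # Closed-form KL( N(mu1, s1^2) || N(mu2, s2^2) ).
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

# Desideratum 1: with the sigmas fixed, the divergence grows with |mu1 - mu2|.
print(kl_gauss(0, 1, 1, 1))   # 0.5
print(kl_gauss(0, 1, 3, 1))   # 4.5

# Desideratum 2: the same mean gap matters less and less as sigma grows
# (for equal sigmas the formula reduces to (mu1 - mu2)^2 / (2 sigma^2)).
for s in (1, 10, 100):
    print(s, kl_gauss(0, s, 1, s))
```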

If we think about what this means for the geometry of the parameter space, we realize that it’s not Euclidean.  In fact it’s hyperbolic: we’ve got a half-plane that collapses toward a single point as sigma goes to infinity.  This motivates us to ask what the appropriate metric tensor is.  It turns out (and here you must imagine my hands waving hard enough to achieve lift-off) that if you take the Hessian of the KL divergence with respect to the parameters, you get the Fisher information matrix, and that does the job quite nicely.  The KL divergence is then, roughly, measuring our surprisal about the samples coming off of our distribution of interest as we move through parameter space.

(This is backwards from the usual presentation, and I’m not sure what you’d get if you went through this exercise with some other notion of distance between distributions, like L1, L2 or TV.  KL divergence has so many other useful properties that I would expect the Fisher-Rao metric to be canonical in some sense, but I don’t know which.)

Okay, I tried to think this through a bit with L2 distance, and I think I’m dropping several levels in HabitRPG as a consequence, I really need to be writing a fellowship application, anyway….

so here’s the formula I got for L2 distance between two normals

[image: formula for the L2 distance between two normals]

So, as for the properties you described.

  1. Increases with |mu1 - mu2|. Yes it does.
  2. Rate of increase with |mu1-mu2| is lower with higher sigmas. I set sigma1=sigma2 and plotted it, and yup, the plot is less steep when sigma1=sigma2 is higher.
  3. Is zero when the sigmas are infinity. Yup.

So…. I guess the same argument… applies? You lost me at hyperbolic geometry ‘cause idk what that is. But definitely L2 fits the picture you painted as well as KL.

There’s a difference though, which is that KL distance between two normals is a convex function of |mu1-mu2|, right? The bigger the difference already is, the more each further increase counts. L2 distance, on the other hand, is not. So, like, if we set mu1 to 0, and consider positive mu2, then d/dmu2 KL is an increasing function of mu2. But d/dmu2 L2 looks like this:

[image: plot of d/dmu2 of the L2 distance]

so, what’s that all mean? idk.
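
A quick numerical version of the comparison (a sketch: I’m using the squared L2 distance, whose equal-variance closed form I derived separately from the Gaussian product integral, so treat it as my own working rather than anything from the thread; the qualitative shapes shouldn’t depend on the squaring):

```python
import math

def kl2(dmu, s=1.0):
    # Equal-variance KL: dmu^2 / (2 s^2), convex in dmu.
    return dmu ** 2 / (2 * s ** 2)

def l2sq(dmu, s=1.0):
    # Squared L2 distance between N(0, s^2) and N(dmu, s^2):
    # (1 - exp(-dmu^2 / (4 s^2))) / (s * sqrt(pi)).
    return (1 - math.exp(-dmu ** 2 / (4 * s ** 2))) / (s * math.sqrt(math.pi))

def deriv(f, x, h=1e-6):
    # Central finite difference.
    return (f(x + h) - f(x - h)) / (2 * h)

# KL's slope in dmu keeps growing; L2's slope rises, peaks, then decays,
# because L2 saturates once the two densities barely overlap.
for dmu in (0.5, 1.5, 3.0, 6.0):
    print(dmu, deriv(kl2, dmu), deriv(l2sq, dmu))
```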

This is all very interesting.  Another property you’d want is invariance under general changes of variables, which L2 doesn’t have, but K-L has (the scaling cancels in the fraction, and outside the fraction it gets cancelled by dx).
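
The invariance claim is easy to spot-check numerically: under a rescaling y = c·x, N(mu, s^2) becomes N(c·mu, (c·s)^2). Using the standard equal-variance closed form for KL, plus a squared-L2 formula I derived myself (so, a sketch), KL comes out unchanged while L2 shrinks:

```python
import math

def kl_equal_var(dmu, s):
    # KL between N(0, s^2) and N(dmu, s^2): dmu^2 / (2 s^2).
    return dmu ** 2 / (2 * s ** 2)

def l2sq_equal_var(dmu, s):
    # Squared L2 distance between the same pair of densities.
    return (1 - math.exp(-dmu ** 2 / (4 * s ** 2))) / (s * math.sqrt(math.pi))

# Start from N(0, 1) vs N(1, 1), then rescale the variable by c.
for c in (1.0, 2.0, 10.0):
    print(c, kl_equal_var(c * 1.0, c * 1.0), l2sq_equal_var(c * 1.0, c * 1.0))
# KL is 0.5 at every scale; the L2 distance shrinks like 1/c.
```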

(via raginrayguns)

fierceawakening:

nostalgebraist:

lovestwell:

nostalgebraist:

Ill-advisedly, I started reading about the Michael Bailey controversy again (after being reminded of it by controversy over Alice Dreger’s recent popular-audience book)

I last read about it back in mid-2010 and remember it being frustrating, because no one involved in the controversy came off well – I don’t mean “they’re all equally bad” or “the truth is in the middle,” I mean I couldn’t find anyone who made me think “okay, I trust this person to tell me this story.”

Anyway, this time I have found Julia Serano’s writing on the subject, and it fills that gap.  I have not read any of Serano’s books but this bolsters the positive impression I had of her from some of you.

(I did also read the relevant sections in Dreger’s recent book, which gave me the usual “I don’t trust this person” vibe)

FWIW, my impression was very different. I didn’t read Dreger’s recent book, but I did read, a few years ago, her very long academic article about the Bailey affair, as well as *all* of the responses and rejoinders to responses that all together took up a whole issue of the journal (it still amazes me that I invested so much time into it). My impression then was that Dreger came off quite well. While it was clear that she sympathized with Bailey’s theory, she was careful to keep that implicit and well-separated from her main thesis, that Bailey was viciously and falsely attacked, including via administrative academic channels, and what that says both about academic research and popular writing in this area, etc. etc.

Most responses, though not all, didn’t even try to connect with this main thesis of Dreger’s article, basically saying “Bailey’s book is highly transphobic” in various ways. I don’t recall reading a response that convinced me or even made it seem likely that Dreger mischaracterized or omitted important evidence w.r.t her main thesis. 

Serano’s responses (then and now - she quoted her response from 2008 and I remembered it) were better than most in the sense of not being nakedly politicized attacks. But they still suffer from the same flaw, compounded in this case by Serano repeatedly insisting that Dreger ought to have written a different article about a different issue than the one she wrote. Dreger focuses almost exclusively on particular attacks by three activists that she feels were outrageous. That *is* her story. Serano accuses her of not writing a lot about what other trans activists thought about Bailey’s book and why they hated it, the ones who didn’t resort to outrageous actions. That’s *not* her story (nor is “autogynephilia’s” status as a scientific theory, by the way). It’s bizarre to accuse her of not writing it, and of creating an impression that the response by the trans community to Bailey’s book was *all* of it outrageous.

To me, Serano reads very much as “interesting arguments, but very clearly lots of highly motivated reasoning, and I do not trust this person to tell me this story”.

Keep reading

From what I remember looking into this some months ago, so I’m rusty:

I didn’t have a problem with Dreger, because I didn’t get the sense that she was trying to defend Bailey’s theory. (That she thinks people are misrepresenting it, at least for purposes of exaggerating its badness, is clear, but I didn’t get the sense that she was sure it was or should be settled science.) I got the sense that she was alarmed by what she saw as personal attacks, and focused on those attacks and how bad they are. Basically, saying “this callout became a witch hunt.”

Through that lens, while I find what Serano has to say valuable and important, I think it’s off point. Dreger is saying “these people escalated this to the point they made sexual, disgusting comments about a man’s children. What the fuck?” and Serano is going “why do you care what those weird people did? Have some science!”

Which, cool. Science! But the whole point Dreger was making was “look at what these weird people did, and how anyone looking into this finds what these weird people have to say. Viral callouts are bad, guys.”

I liked Dreger, not because I agreed with her on everything, but because I do think that politics can get toxic, and it does sound to me like THE WAY SOME ACTIVISTS MADE THIS PERSONAL is a pretty clear case of toxic “my identity means I can do anything to you and it will be morally acceptable!” progressivism.

(NOTA BENE: I of course do not agree with Bailey at all and think he was relying on junk science, and even that may be generous. I think his book deserved to be panned and it made sense that activists would sound an alarm about it misrepresenting trans women.

ALSO NOTE: if my personal views are relevant at all, I honestly don’t care if autogynephilia exists or not. I am radically pro-body modification and would not be in any way morally opposed to someone altering their body who said “I am a cis man. I identify as a guy. But ever since I was a young teenager I’ve had this intense fantasy about having a pair of great tits. It’s not about my gender, I just want boobs! I realize that my sexual drive will likely wane as I age, and I realize that I will be judged and ridiculed and quite possibly even attacked if I do this. But I want to anyway! My body, my choice.”

My response to such a person is “you go, strange tit boy.”

So maybe that makes me weird somehow in this discussion. But I don’t THINK so. I think I can both hold “if autogynephiles or even people who identify as cis men but want ‘female’ features exist, I can think it’s morally permissible for them to modify their bodies for reasons no more ‘deep ’ than this” AND “I do not actually think autogynephiles exist, or if they do, that they are anywhere near common enough to represent a notable subset of trans women.”)

My impression of Dreger is a bit different.  Her writing on the subject – both in Galileo’s Middle Finger and in the parts of her paper I’ve read – are focused on the kind of material that would concern a detective or a lawyer: here’s who did what to whom, here’s the evidence who shows it, here’s who was guilty of what in which event, etc.

She does believe that it’s a story about toxic activism, but she doesn’t really say anything about why she thinks the activists behaved in this way in this case, nor does she give any coherent impression of just why they thought the Bailey book was so bad (which would be necessary in order to make constructive suggestions – “if you are faced with something bad in this way, don’t act like Conway et al., instead do this”).  She seems to think that they attacked Bailey for daring to say that the “woman in a man’s body” story is sometimes inaccurate, but she doesn’t tell us why they cared so much about this distinction.  She repeats this in Galileo’s Middle Finger.

In particular, note that the person who got attacked was not Blanchard, for devising the theory, but Bailey, for popularizing it.  Why not attack Blanchard this way, who has a long history of actual clinical work with trans women, rather than Bailey, who just talked to some trans women and wrote a book?  Dreger’s explanation does not give a satisfactory answer to this question.

I guess my take on the issue is that – well, Conway and James did and said some really nasty stuff.  (Just in the course of poking around a bit while writing the OP, I came upon James speculating about whether Anne Lawrence is autistic, as though that would discredit her.)  But generally people don’t act like that for no reason.

It’s important to understand what they were so angry about, not just for the sake of understanding the issue, but also because – even if Dreger is correct and Bailey did (basically) nothing wrong – how are future Baileys going to know when they will piss people off to this extent?  “Sometimes, if you write a book, activists will make sexualized, disgusting comments about your children” is not, in itself, a very helpful warning.  And if you want to make a case to activists that they shouldn’t act this way, you won’t get very far if you don’t seem to understand, or even care much about, their motivations.

Re: your last note, this is also what Dreger thinks, and what she says Bailey thinks – that autogynephiles should get access to hormones and surgery, that there is nothing wrong with autogynephilia, but that it should be known about, if it’s a thing, for general scientific interest.  Dreger also thinks that Conway, James, and McCloskey are all autogynephiles, and in fact she dredges up a letter from 1998 (?) in which James describes herself using that word.  So she thinks that autogynephiles are attached to a theory that says they don’t exist?  That they somehow missed (??) Bailey’s accepting attitude?  Again, she just doesn’t seem to know what made these people angry.

(via fierceawakening)