
lambdaphagy:

loki-zen:

ursaeinsilviscacant:

wolffyluna:

ursaeinsilviscacant:

Like, I can see why e.g. saying the rosary would be a good anxiolytic and/or virtue increaser for humans who aren’t me. Or how having a religious community is comforting and helpful. I just don’t get how the afterlife stuff doesn’t massively offset it. 

But I genuinely don’t get how anyone’s mental health is improved by a religion with a Hell in it. Everything else, sure, I see how that helps. But “eternal torture exists and you deserve it” is like… this isn’t Taking Ideas Seriously or Competing Access Needs. No one is going to deal well with the risk of eternal extreme misery. It is horrible by definition.

I know it’s bad to psychologise your opponents, but I can’t see how Christians believe in Hell if they’re mentally healthy. Maybe the sola fide ones are okay with other people being tortured and secure in their own salvation, but “they are okay with other people being tortured” is a horribly mean thing to say. And yet the alternative “they are lying about their religion” is also horribly mean.
What is going on here? What am I not getting? What is there that’s missing? “I am crazy” is 500x more plausible than “everyone else is crazy” but NOTHING. ABOUT. THIS. BEHAVIOUR. MAKES. ANY. SENSE. AT. ALL.

Explain it to me. My inability to understand is maddening. Maybe there’s some piece I’m missing and if I see it Jesus is real and the eternal torture is somehow okay. But fuck I hope that’s not true.

In my (limited) experience, people seem to completely forget about the concept of Hell, or believe it could never happen to them/those that they love*.

But plain old forgetting/forgetting the implications seems most common.

*Esp in some Protestant systems (not sure if correct word), where it is relatively easy to ‘qualify’ for heaven. Though it varies wildly depending on who you ask.

*nods* Yeah, this is similar to David’s point.

I just don’t see how you can forget about something which is literally the worst thing possible.

Lots of happy-seeming religious Christians, and also every Muslim I’ve asked, have told me that they don’t view Hell as a place of eternal torture.

The Christians pointed to something Jesus said about ‘cleansing’ and say that ‘eternal’ is hyperbole; what they believe is that bad people go to a place that is pretty unpleasant until they have learned their lesson.

The Muslims told me either that Hell is a process of cleansing those who are not yet fit for paradise, or that before you go to Hell you get undeniable proof that the Islamic God exists (he’s basically sat there in front of you being all God and such) and only go to Hell if you still refuse to become a Muslim. And that at any point after you go to Hell you can change your mind and he’ll forgive you.

so idk, data points?

Lewis’s theology of Hell is pretty interesting.  Just as Heaven may be “an acquired taste”, the damned in Hell may not even wish to leave: 

“The damned are, in one sense, successful, rebels to the end; the doors of hell are locked on the inside.  I do not mean that the ghosts may not wish to come out of Hell, in the vague fashion wherein an envious man ‘wishes’ to be happy: but they certainly do not will even the first preliminary stages of that self-abandonment through which alone the soul can reach any good. They enjoy forever the horrible freedom they have demanded, and are therefore self-enslaved: just as the blessed, forever submitting to obedience, become through all eternity more and more free.”

The Problem of Pain

In any case, our choices will echo at least until heat death whether we like it or not. Given that inevitability, should we adopt a metaphysics that rubs it in our faces or one that allows us to push the thought to the back of our minds?  Are we playing for keeps or not?  And if not, why not? 

(Cut for length – and sorry for writing so many words in response to so few)


(via lambdaphagy)

lovestwell:

nostalgebraist:

waystatus:

nostalgebraist:

Oh, while I’m on this sort of thing – does anyone know about AI models of concept formation?  Like, have they been developing along with the rest of AI?  Are there current ones doing cool things the way Deep Dream and neural-style are doing cool visual perception things?

I ask partially because a while ago I claimed rashly that the current enthusiasm for deep learning would plateau eventually, because deep learning was just learning to mimic the sort of sequential abstraction that goes on in relatively early sensory perception (except without top-down input from more abstract/conceptual parts of the brain!).  So it would produce results that seemed very human-like in the sensory realms, without ever getting to the point of being able to abstract beyond “whorls like in Starry Night” or (in the case of that automated NSFW flagger) “things that look like butts” to the kinds of higher- and higher-tier concepts we deal in.

Concept formation in particular seems to me like it’s somehow “especially unsupervised” (I’m sorry if this is naive or doesn’t make sense, I am very tired rn).  Ask a neural network to look for dogs and it will gain an impressively high-level and robust sense of the visual appearance of dogs.  But no human comes into the world knowing that “dogs” are a thing they are looking for.  They just get the blooming, buzzing confusion, and are able to somehow infer that it is useful to lump certain roiling patterns of visual data together with certain auditory and tactile patterns (and higher-level induced patterns of animal behavior) into this new concept, “dog,” without ever being pointed in that direction.  Meanwhile they do not invent, or do not maintain for long, “non-natural” classification systems like the ones in Borges’ Emporium.  I have never seen an AI that can do something like this.

In the bullshit amateur neuroscience district of my mind, it feels like there is a connection between: the “especially unsupervised” nature of concept formation, the lack of success (?) in designing artificial neural networks for concept formation, and the way in which (I think?) neural processes after early sensory processing get more and more interconnected and less separable into discrete stages (point 5 in this old post).  The amazing “style representations” that let you make anything look like van Gogh or Kandinsky (you may begin to notice how shallow my reading has been, how the reference points repeat) were obtained through a certain kind of averaging over the things “seen” by a visual classification network – and that network got supervised training on labelled data, and did as well as it could by being “very deep” (19 layers), i.e. (?) it got a lot of chances to form more and more abstract generalizations on its way to a preset goal.  Each representation feeding neatly into the next, nothing emerging that wasn’t told to emerge at the end.

Of course that would all break down if there are similarly “amazing” results in AI concept formation that I just haven’t heard about.  Which there may be.

(Apologies if the style of this post is too grandiose or otherwise off, blame sleep deprivation and long flights across the ocean and massive amounts of caffeine to compensate, I should go to bed very soon)

My answer to this depends on what you mean by “concept formation”.

If you mean “given a bunch of data about dogs and cats, and instructed that there are such things as dogs and cats, can an AI figure out which data points are dogs and which are cats?”, the answer is “yes, AIs have done well on this task for quite a while now”. We’re at the point with this stuff that computers often outperform humans, particularly with large datasets: it’s often worth it to feed a large dataset to a computer so it can find things no human would notice, and even if it doesn’t it will certainly sort the data more quickly than a human would.
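To make the supervised case concrete, here is a toy sketch (the features, numbers, and function names are all invented for illustration): a nearest-centroid classifier that is told up front which labelled points are dogs and which are cats, and assigns new points to the nearest class average.

```python
# Toy supervised classification. Features are hypothetical
# (say, height in cm and ear length in cm); labels are given.
def nearest_centroid_fit(examples):
    """examples: list of (features, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for feats, label in examples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, x in enumerate(feats):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def nearest_centroid_predict(centroids, feats):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], feats))

train = [([40.0, 10.0], "dog"), ([55.0, 12.0], "dog"),
         ([24.0, 6.0], "cat"), ([26.0, 7.0], "cat")]
centroids = nearest_centroid_fit(train)
print(nearest_centroid_predict(centroids, [50.0, 11.0]))  # -> dog
```

Real systems use far richer models, but the shape of the task is the same: the classes are handed to the algorithm, and it only has to learn the boundary between them.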

If you mean “given a bunch of data about dogs and cats, and no instructions that there are such things as dogs and cats or how many groups to look for, can an AI figure out that there are two groups and separate them?”, the answer is still yes but it’s a more qualified yes. The algorithms for doing this are still very good but not as good as the supervised algorithms, and are usually used only when the human isn’t sure whether there are patterns in the data. They are already practically useful and capable of finding patterns that humans can’t, though, so they’re still quite good.
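The unsupervised version can be sketched the same way. Below is a crude single-linkage clustering (data and threshold invented): points closer than a distance threshold get linked, and the number of groups simply falls out of how many connected components remain — nobody tells the algorithm there are two.

```python
# Toy unsupervised grouping: no labels, no preset group count.
# Link any two points closer than `threshold`, then count components.
def cluster_by_threshold(points, threshold):
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    for i in range(n):
        for j in range(i + 1, n):
            if dist2(points[i], points[j]) < threshold ** 2:
                parent[find(i)] = find(j)   # union the two components
    raw = [find(i) for i in range(n)]
    remap = {}                              # relabel as 0, 1, 2, ...
    return [remap.setdefault(lab, len(remap)) for lab in raw]

data = [(40, 10), (55, 12), (50, 11),   # one blob
        (24, 6), (26, 7), (25, 6)]      # another blob
labels = cluster_by_threshold(data, threshold=12)
print(labels)  # two groups emerge without being told there are two
```

The threshold choice is doing quiet work here — which is one concrete face of the “more qualified yes”: with no labels, some human judgment about scale or cluster count usually sneaks back in.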

If you mean “given a bunch of data about dogs and cats, and no instructions that there are such things as dogs and cats or how many groups to look for, can an AI give you a definition for ‘dog’ and a definition for ‘cat’?”, the answer is “sort of, but it’s probably going to give what looks to a human like a very strange answer”. I don’t think it’s entirely fair to blame the AI for this, though. If everything you knew about dogs was in terms of height, weight, and ear length, and all relative to cats, you would probably give a very strange definition of a dog too.
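As a cartoon of why the extracted “definitions” look strange, the sketch below (invented data again) searches for the single feature threshold that best separates the two groups. The resulting rule can be perfectly accurate while reading oddly as a definition — something like “a dog is anything taller than 33 cm”.

```python
# Toy "definition" extraction: find the one-feature threshold rule
# (a decision stump) that best separates two labelled groups.
def best_stump(examples):
    """examples: list of (features, label), with exactly two labels."""
    labels = sorted({lab for _, lab in examples})
    best = None
    for f in range(len(examples[0][0])):
        values = sorted({feats[f] for feats, _ in examples})
        for lo, hi in zip(values, values[1:]):
            thr = (lo + hi) / 2
            # Rule: above the threshold -> second label, else first.
            acc = sum(
                1 for feats, lab in examples
                if (labels[1] if feats[f] > thr else labels[0]) == lab
            ) / len(examples)
            if best is None or acc > best[0]:
                best = (acc, f, thr)
    return best  # (accuracy, feature index, threshold)

train = [([40.0, 10.0], "dog"), ([55.0, 12.0], "dog"),
         ([24.0, 6.0], "cat"), ([26.0, 7.0], "cat")]
acc, feature, thr = best_stump(train)
print(acc, feature, thr)  # a perfectly accurate but odd "definition"
```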

(Also, sidenote: it’s not like human learning is entirely unsupervised either. You were taught, at some point, that dogs were a thing distinct from cats, and that blue was a color distinct from green. You didn’t figure that out purely from experience. The reason I bring up that second example is that in some languages, blue is not a color distinct from green. It’s not like English babies go around wondering why adults are making these arbitrary distinctions between some grue things and other grue things, or like Welsh babies are confused about why people keep calling green things grue.)

(This was originally meant as a quick response to your post but turned into a longer, much more explicit statement of what I was trying to say in the OP)

By “concept formation” I don’t mean the same thing as “binary classification” (or even multiclass classification with a specified number of classes).  There are at least two related differences I have in mind:

(1) Human learning is, yes, not entirely unsupervised, but it is also able to take place in a continually changing perceptual world where one never gets some circumscribed “data set” known to contain some number of useful classes if analyzed properly.  If one had infinite storage space and computation time, one could just run a classifier algorithm on the data set consisting of all the sense data one’s ever seen, but clearly that’s impractical.  It’s possible that this can be done by just continually running classifier algorithms on recent sense data and maintaining a set of “cluster boundaries thought to be useful” which are gradually updated as new data comes in.  (I’m sure there is work like this out there, I just don’t know about it.)
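The “gradually updated cluster boundaries” idea has a simple classical form, streaming (sequential) k-means: each incoming data point nudges its nearest centroid and is then discarded, so nothing but the centroids themselves needs to be stored. A minimal sketch with invented data:

```python
# Streaming cluster maintenance: one point at a time, no data set kept.
def stream_update(centroids, counts, point):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    k = min(range(len(centroids)), key=lambda i: dist2(centroids[i], point))
    counts[k] += 1
    lr = 1.0 / counts[k]  # step size shrinks as a cluster accumulates points
    centroids[k] = [c + lr * (x - c) for c, x in zip(centroids[k], point)]
    return k

centroids = [[0.0, 0.0], [10.0, 10.0]]  # crude initial guesses
counts = [1, 1]
stream = [(0.5, 0.2), (9.5, 10.1), (0.1, 0.4), (10.2, 9.8)]
for p in stream:
    stream_update(centroids, counts, p)
print(centroids)  # each centroid has drifted toward its own blob
```

This handles the “no circumscribed data set” constraint, though not the harder part of the post’s question — it still assumes a fixed number of clusters rather than proposing new ones.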

(2) Humans learn hierarchies rather than flat categories, and are able (?) to generate new hierarchies on the fly.  Most work on hierarchical categorization, I think, gives the classifier a pre-built hierarchy which it uses as background knowledge to help with classification.  Some work on learning hierarchies starts with an initial hierarchy and evolves it, while other methods need no initial seed (“adaptive” and “generative” approaches as defined in section 2.2 here, respectively).  The latter seems like what we want.  But even in this work, one usually (?) starts with a known set of flat classes – much more abstract than the data itself – and then the algorithm learns how to group them together (as in this paper or this one).

By contrast, in human learning, it seems that the set of flat categories we begin with is very primitive – not quite the raw sense data, but just the output of standard sensory processing applied to that data.  (E.g. not “retinal activation patterns,” but “shapes, patterns of motion, etc.”)  For instance, if we sometimes encounter dogs, our sensory apparatus might give us information like “visually furry,” “moves without an external force being applied,” and some sort of processed version of a barking sound (at the outset we do not have the concept of a “bark” yet).  In the high-dimensional space of these sensory features, there is a density peak around the conjunction of these three things (because dogs exist), but finding it by doing classification over the whole flat space of such sensory features is surely computationally intractable.

Instead, I imagine something like this is going on: we notice the immediate conjunction of some sensory features, generate a bespoke hypothetical category (“things that are furry, move on their own, make this sort of sound, etc.”), and then quickly try out that category on new sense data in the short term.  If we keep seeing the same conjunction (where there is one dog, there are often others!), we keep the category and put some work into refining it, so that perhaps we start to notice that dogs have various higher-level features we would never have defined at the outset.  (Even if standard auditory processing doesn’t give us “bark” as a primitive, once we have started to think that dogs are a thing, we may start to pay attention to higher-level features of their sounds and define “a dog’s bark” in terms of the auditory primitives.  More generally, once we have “grabbed onto” a “dog”-like concept, we can start noticing many higher-level features of dogs – that they are common pets, say – which would be far too abstract to be found in a brute-force examination of the primitives.)
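A toy version of that propose/retain/discard loop (names and the promotion threshold are made up): treat each observation as a conjunction of crude sensory features, propose any conjunction the first time it appears, and promote it to a kept concept only if it recurs often enough.

```python
# Toy "propose a bespoke category, keep it only if it recurs":
# observations are frozensets of primitive sensory features.
from collections import Counter

def form_concepts(observations, promote_after=3):
    sightings = Counter()
    kept = set()
    for obs in observations:
        sightings[obs] += 1          # propose / reinforce the conjunction
        if sightings[obs] >= promote_after:
            kept.add(obs)            # recurred often enough: retain it
    return kept

dog = frozenset({"furry", "self-moving", "makes-that-sound"})
odd = frozenset({"furry", "stationary"})   # a one-off conjunction
concepts = form_concepts([dog, odd, dog, dog, dog])
print(len(concepts))  # only the recurring conjunction survives
```

This omits the interesting parts — noticing near-matches rather than exact repeats, and refining a kept concept with new higher-level features — but it shows the basic economy: candidate concepts are cheap to propose and only the ones that keep paying rent get retained.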


@lambdaphagy linked to a very interesting paper in which a very large network (though still 10^6 times smaller than the human visual cortex) was able to learn the (visual) concept of “face” from a training set of unlabelled YouTube frames, less than 3% of which contained faces, just by optimizing for accurate compression of input images.  (“Compression” because some of the intermediate layers were much smaller than the input layer or the output layer [which was the same size as the input and meant to ideally match the input image].)  It also learned about cats and human bodies (which, as the authors drily note, “are quite common in YouTube”).
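The paper’s actual network is a very large multi-layer sparse autoencoder; as a cartoon of the underlying “compression” objective, here is a minimal linear autoencoder with a one-unit bottleneck (all sizes and data invented). Because the hidden layer is smaller than the input, the network can only reconstruct well by finding the structure in the data — here, that the 3-D points all lie on one line.

```python
import numpy as np

# Minimal linear autoencoder: 3 inputs -> 1 hidden unit -> 3 outputs,
# trained by gradient descent to reconstruct its own input.
rng = np.random.default_rng(0)
direction = np.array([1.0, 2.0, -1.0])
X = rng.normal(size=(200, 1)) * direction   # 200 points on a line in 3-D

W_enc = rng.normal(scale=0.5, size=(3, 1))  # encoder weights
W_dec = rng.normal(scale=0.5, size=(1, 3))  # decoder weights

def loss(X, W_enc, W_dec):
    """Mean squared reconstruction error through the bottleneck."""
    err = X @ W_enc @ W_dec - X
    return float(np.mean(err ** 2))

before = loss(X, W_enc, W_dec)
lr = 0.05
for _ in range(300):
    H = X @ W_enc                # compressed one-dimensional codes
    E = H @ W_dec - X            # reconstruction error
    W_dec -= lr * (H.T @ E) / len(X)
    W_enc -= lr * (X.T @ (E @ W_dec.T)) / len(X)
after = loss(X, W_enc, W_dec)
print(before, after)             # error should drop sharply
```

The “face neuron” result is this same pressure at vastly larger scale: with millions of frames and many layers, accurate compression forces the network to discover recurring high-level structure (faces, cats, bodies) that nobody labelled.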

This is exciting, but it’s a result about how well existing architectures can do when you use a big enough network and run a single round of optimization on a huge computing cluster for 3 days.  There may be tricks that work specifically for unsupervised learning of hierarchical features from low-level primitives – along the lines of what I discussed two paragraphs ago – which could work much more efficiently than the standard approach used here.

Also, the kind of “propose concepts, retain/refine some and discard others” approach I described may be especially valuable for an organism or machine that must react continually to incoming stimuli – online learning by necessity – and doesn’t get to passively absorb a large training set and then, all at once, do a possibly costly optimization over it.  (Although perhaps one could do this while sleeping!)  For building robots or other kinds of artificial minds, it seems especially important to make an algorithm that can (say) pick up a rough version of a concept like “face” pretty quickly, perform information-seeking behavior to help it evaluate the usefulness of this candidate concept, and refine the concept while continuing to interact with the world.


Finally, my interests here are slanted toward the basic science approach “let us explore concept formation, perhaps with the result of efficient algorithms years down the line” and away from applied science approaches like “there is interest in knowing when there’s a dog in a YouTube video, what’s the best we can do in [short, market-driven timeframe].”  In other words, I’m thinking more in the spirit of the earlier AI researchers who liked to talk philosophically about whether their algorithms represented the “right approach” to the problem domain, as opposed to just the most efficient approach available – do perceptrons really get at “what concept formation is”? do Hopfield nets really get at “what memory is”?

As usual (?) in basic science, it may be better to try many ideas on very simple testbeds rather than to jump straight to the kind of real problem that humans solve.  “Discovering ‘dogs’ the way we do” is probably way too much to hope for at the outset.  But maybe we could experiment with similar discovery methods in simpler toy worlds.

Like, consider a computer program designed to play Super Mario Bros.  After some suitable (built-in?) pre-processing of the pure sense data (pixels, possibly sound), could it form a hierarchy of concepts including “enemy” and “block I can hit to get something,” where “enemy” has subtypes like “goomba” and “koopa”?  Could it sometimes sacrifice short-term performance (as predicted by its current, imperfect knowledge) to refine its concepts, say by testing whether a given object is indeed an enemy?  Could it do this all on the fly without needing offline optimization sessions?

I know there are people working on AI for video games, so maybe this thing I’m thinking of already exists?  Would love a pointer to it if so.

A relevant recent paper is 

Building Machines That Learn and Think Like People

I recently summarized it in a talk at a local rationality meeting, as a way of motivating myself to read it carefully. It’s very readable and particularly useful in that it takes into account plenty of very recent research (2015-2016). 

The authors’ main point is that “human-like learning” uses model-building crucially, where by model-building they mean much the same or exactly the same thing as you mean by concept formation. They look at specific examples of classifying handwritten characters and playing old video games, two areas where deep learning systems have been very successful recently, but their success still seems to be contained within a sharply limited functional area, and the authors believe that to get out of that area, machine learning needs to try and incorporate model-building. I particularly liked the way they explained and illustrated the amazing flexibility of models compared with trained neural nets, in that a model remains useful even if you change the goal function sharply. E.g. if you learn to play an Atari video game, you immediately get “for free” such skills as: explaining it to a friend; playing to lose; playing to reach a particular score rather than maximize a score; playing in parallel with a friend aiming to finish the level with identical scores; etc. etc. Each of these is a separate goal for a deep learning system, and all the training it did to learn to play the level really well – even better than you – is just useless in reaching these separate goals. There’s no “learning transfer”. This is just one example – they list several more important aspects in which human learning outperforms the best of deep learning systems in ways that seem to require fundamentally new ideas – but it’s the one which resonated with me the most.
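The goal-flexibility point can be made concrete with a toy sketch (everything below is invented for illustration, not from the paper): given a transition model of a tiny world, any new goal is just a different predicate handed to the same generic planner, with no retraining.

```python
# A model transfers across goals; a trained policy does not.
from collections import deque

# Hypothetical 1-D world: positions 0..6, actions move left/right.
def model(state, action):
    return max(0, min(6, state + (1 if action == "right" else -1)))

def plan(start, goal_test):
    """Breadth-first search through the model for any goal predicate."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for a in ("left", "right"):
            nxt = model(state, a)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [a]))
    return None

# Same model, two unrelated goals, zero retraining:
print(plan(3, lambda s: s == 6))  # "play to win"
print(plan(3, lambda s: s == 2))  # "reach a particular score"
```

A policy trained end-to-end to reach position 6 would encode only that one mapping from states to actions; the model-plus-planner decomposition is what makes “playing to lose” free.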

The authors have some specific ideas on how to try and augment current best-of-breed learning systems with model-building-related features, but at least my impression was that these do not reach anything like the goal (if that is to be a goal) of formalizing a general concept of a “model” and explicitly managing such “models” in code. They suggest augmenting AI systems with features that resemble certain seemingly important building blocks of human learning. For example, human infants, from very early on, have a good understanding of how the physical world works (the behavior of solids, liquids, gravity, objects being pushed, pulled, etc.), and we seem to use this understanding to make very efficient shortcuts in e.g. learning to play a video game. They say that an AI system which learns to play an Atari video game, and is already very good at classifying various objects and classes of objects that appear in the game, might be able to make significant shortcuts by using a relatively simple physics model of the kind current physics simulators already use. [the rest of the paragraph is my opinion] While this may well be a promising approach (hard for me to judge), and while it may direct attention in the “right” (?) direction of model-building, it doesn’t seem to be nearly general-purpose enough. If I play a video game in which my intuitive physical understanding helps me, and then suddenly the laws of physics in-game change (e.g. gravity reverses direction), I very quickly learn this by using the same general-purpose learning mechanism as I use for the game itself, and adapt. That is, “physics in this game” is for me a model as rich and as amenable to change as “how to play this game”, while the authors mostly discuss using “physics in this game” as a lower-level building block aiding the deep learning system, a building block that doesn’t undergo learning itself.

But the reason that is so, as I understand it, is simply that nobody knows how to formalize explicitly, or reach implicitly with some sort of learning, models that are as rich and feature-full as human learning seems to use. 

Whoa, this is exactly the kind of thing I’m looking for!  Once I’m over my refractory period for thinking about this stuff (I’m kinda spent after the previous two posts) I’ll definitely check out that paper.

The point about models generalizing to different goals is really cool.  One of those “seems obviously important in retrospect but I’d never thought about it” things.

(via lovestwell)

@lovestwell

There are a lot of little details here that we could go back and forth on forever.  I don’t really want to continue arguing over these details.

Your account of Dreger’s perspective, although internally coherent, feels like it’s reading a lot into the book I just read that wasn’t actually there.  The line you’re drawing is (I take it) between scientists “doing their thing” with potentially harmful results down the line, and scientists using actively unethical methods.  But I don’t remember Dreger ever drawing that distinction explicitly.  This is not me being coy or “perversely charitable” or something; I just don’t remember that being the thrust of the book I read.

(It is also not something I would naturally read in, because it doesn’t fit the facts as I see them.  Michael Bailey is in fact a scientific researcher, but the campaign to ruin his reputation was in response to a popular book he wrote which meant to illustrate a theory he didn’t himself develop – and the theory itself was developed by Ray Blanchard in a clinic [the Clarke Institute, or “Jurassic Clarke”] that has a reputation for clinical horror stories.  So what Bailey actually did is sort of analogous to some colleague of Maria New writing a popular book in which they interview some cherry-picked children who received prenatal dex talking about how great the results are.  Would the author of that book be “just a scientist doing their thing”?)

But in particular I want to reply to your concluding paragraph, because it seems to get at some core friction here:

(and if you do believe that - if you do think that someone writing an article in support of autogynephilia, for instance, is “causing indirect harm”, and thereby qualifies as “those few cases where nastiness and extremism might be okay”, to quote @socialjusticemunchkin - then your repeated insistence on discussing the object level, the actual truth of autogynephilia and other such theories - remains that much mind-boggling to me).

I don’t understand this, so I apologize if I’m getting it wrong.  What I think you are saying is “you and Promethea believe that mere discussions among scientists of whether certain theories are wrong or right can be inherently harmful and deserve public shaming and nastiness, in which case you must be willing to give up the whole endeavor of scientifically adjudicating the truth or falsehood of those theories.”

I certainly don’t believe that.  I believe (like Dreger) that activism, and society in general, needs the free discussion of scientific ideas.  But I also think that not every statement by someone with a scientific professorship counts as a defense-worthy part of this free discussion.  At a certain point – as when someone writes a book for a general audience containing no new scientific content – they are acting simply as citizens, not as participants in the protected sphere of scientific discourse.  No idea should be inherently anathema in the academy, but no one spends all their time in the academy.

If a chemistry professor (after work) tells someone (not a colleague) that they should mix bleach and ammonia when they get home to make a super-great cleaning product – “trust me, I’m a chemistry professor” – they are not advancing an unorthodox scientific hypothesis in some way we ought to protect and celebrate.

(via lovestwell)

socialjusticemunchkin:

brazenautomaton:

ursaeinsilviscacant:

nostalgebraist:

Another post on Galileo’s Middle Finger, having finally finished the book.  (Previous posts: Maria New and prenatal dex, also various posts in the tag #michael bailey cw?)

Galileo’s Middle Finger (hereafter GMF) is a strange book.  On one level, the book’s content is pretty easy to make sense of: Alice Dreger has been involved in a number of dramatic academic controversies over the course of her career, and she figured (sensibly enough) that people might enjoy reading a book that retells these stories.  To some extent, she just presents the book as “a memoir of the controversies I’ve been involved in.”

However, she also claims that these stories are connected by an overarching theme, which is something like this:

“Scientists and activists often find themselves at odds, on opposite sides of angry battles.  But everyone should recognize that truth and justice are intimately connected: you can’t help the victims of injustice if you don’t care about the facts of the situation, and if you’re in a unique position to explore facts (such as an academic job), you ought to steer your investigations toward the social good – not by sacrificing the truth, but by looking for the truths that can help.  Activists need to be more concerned with truth, and scientists need to be more concerned with justice.  And if both sides followed this advice, they would be at odds far less often.”

All of this sounds very agreeable to me; I think I already more-or-less agreed with it before I read Dreger’s book.  But do Dreger’s accounts of various controversies actually serve as useful examples of this stuff?  Not always.  And Dreger’s attempts to link everything back to her theme produce some awkward results.


Besides a few minor subplots, there are three controversies narrated in GMF.  First, she narrates the controversy over Michael Bailey’s book The Man Who Would Be Queen.  Second, the controversy over Patrick Tierney’s book Darkness in El Dorado, which accused anthropologist Napoleon Chagnon of genocide as well as various other wrongdoings.  Third, Dreger’s investigations into Maria New and her struggles to get her criticisms recognized by government bodies and the public.

Of these three, it’s the Tierney/Chagnon case that most directly fits Dreger’s theme.  Tierney’s book was a work of shoddy hack journalism which made spectacular allegations that have been uniformly refuted by later investigators.  (N.B. Tierney made many allegations, and some of the more minor ones have been less clearly refuted, but those weren’t the ones that made headlines.)  Nonetheless, shortly after the book came out, the American Anthropological Association endorsed it – the academic equivalent of reflexively believing a callout post without checking the sources.

Reading this in terms of Dreger’s theme seems straightforward: in its concern for justice, the AAA neglected the value of truth, and thus failed to even serve justice.

Even here, though, the theme strains a bit.  The Tierney debacle was not exactly a conflict between “activists” and “academics”; the people under-valuing truth in the service of justice were the academics of the AAA.  (Tierney could arguably be called an “activist,” but Dreger treats him – rightly, it seems to me – as a hack journalist from whom more concern for the truth cannot be expected.)

The Maria New story also lacks a clear instance of an activist failing to sufficiently value truth.  In that story, Dreger is the activist, raising ethical concerns from the outside about an established academic, and her activism is directly grounded in science that she believes that academic is ignoring.  She may intend this as an example of “activism done right” (about which more later), and/or as a case of an academic caring too little about justice.  But it’s not as though New is ignoring justice because truth is her only value; as Dreger notes, her prenatal dex work has produced little in the way of academic knowledge.  So again, it’s hard to see this as an illustration of the theme.

So far, it looks like Dreger has failed to exhibit an example of activists behaving badly, although this is crucial to her theme.  The third story (well, first as presented in the book), about Michael Bailey, is her main (and only) example of this.  But of the three stories, it’s that one that fits the theme least well.

Dreger’s account of the Bailey controversy shares a quality with her account of the Chagnon controversy: both are told as stories of lovable and humane, if out-of-touch, researchers being persecuted by ignorant people who don’t understand them.  Dreger spends a great deal of text talking about how much she personally likes Bailey and Chagnon – Bailey is a personal friend, Chagnon she met while investigating that controversy.  As “characters” in the book, they downright glow.  They’re funny, they’re good company, they both have cute and harmonious marriages.

It makes sense to write stuff like this in order to humanize people who have been demonized by others.  But one has to note here that none of this bears on the “truth” side of things.  It’s certainly possible for someone to have committed genocide and still be a warm and sparkling conversationalist at the dinner table; it’s possible for Michael Bailey to be a great guy if you know him personally, and nonetheless to have been wrong about trans women.

With Chagnon, this tension never becomes relevant, because as a matter of simple fact, Chagnon was exonerated by multiple serious investigators.  With Bailey, the tension is glaringly relevant, because the issue of whether Bailey is actually right never gets fully addressed in Dreger’s treatment.  Indeed, she treats it almost as an irrelevant side issue.  Where is the value of truth here?

To be fair, Dreger does put her beliefs on the table about the issue.  But these beliefs seem to reveal little serious interest in the questions involved.  She seems to have uncritically bought the Blanchard-Bailey line – possibly because she only cares about these issues insofar as they affect her good friend Michael Bailey? – and to have done little investigation into academic work on transgender beyond this.

Astonishingly, for instance, the phrase “gender dysphoria” never appears in GMF at all.  (A word-search for “dysphoria” turns up only one result, in the title of a Blanchard paper cited in the endnotes.)  When Dreger presents her account of trans women, she talks about (for instance) transitioning as a choice made by feminine gay men in order to better fit into homophobic social environments, stressing that these people might not have transitioned if feminine gay identities were more accepted in their local environments.  I’m willing to believe this happens sometimes – but Dreger seems to actually not know that gender dysphoria is a thing.  This is in a book published in 2015.  One wonders if she’s ever even looked up the condition in the DSM (which changed the name from “Gender Identity Disorder” to “Gender Dysphoria” in the 2013 DSM-V, but even before that had included dysphoria as one of the two major diagnostic criteria).

Dreger has a page on her website, written after GMF was published, in which she responds to questions about “autogynephilia” and states her current positions.  Again, she never mentions gender dysphoria.  Of Blanchard’s androphilic/autogynephilic typology, she says that “I think what I’ve seen from the scientific clinical literature and socioculturally suggests this division makes sense.”  She does not provide any citations, and does not address critiques (see here) that the data show a continuum which does not separate well into two clusters.

I belabor all of this because Dreger’s indifference to the truth here simply makes GMF fundamentally incoherent.  I agree with Dreger’s theme; I have no clue how she thinks the Bailey story illustrates it.


But wait – Dreger’s claim is that activists value truth too little in their quests for justice.  Does this hold true for the activists who attacked Michael Bailey?

Again, Dreger seems to not much care.  She devotes a lot of space to the claims made by these activists, but mostly to express confusion over them.  Noting that some of them display what look to her like signs of autogynephilia, she scratches her head: why are they angry at a book for talking about autogynephilia?  One would think that someone in Dreger’s position – someone interested in getting to the bottom of situations where truth and justice appear to conflict – ought to answer a question like this.  Dreger doesn’t.  Her attitude is basically: “who are these weird people attacking my friend Michael?  What do they want?  They’re so confusing!  Michael is a scientist, so maybe they don’t like science?  Jeez, who knows!!!”

What she substitutes for consideration of these issues – and let me be clear, this is not nothing – is a detailed, blow-by-blow account of the nasty, dishonest ways in which the activists tried to ruin Michael Bailey’s reputation.  They were, in fact, really nasty.  But people don’t just do things like that for no reason.  What about the larger questions of truth and justice here?  Why do these activists believe Michael Bailey is so harmful?  Could it be the case that Bailey is harmful, to the point that defaming him is a net good?

Dreger never mentions this sort of idea, but it hangs uncomfortably over her whole book.  She bemoans the fact that her work on Maria New – which is generally polite and non-nasty, if very harsh on New – has failed to make appreciable waves in the world, beyond loading the first page of Google results with dex-critical pages.  On the other hand, Bailey’s book is now solely known as the subject of a stormy controversy, which received huge amounts of media discussion.  What if nasty activism is sometimes necessary to get the job done?  What if simply having both truth and justice on your side isn’t enough?  And, putting it the other way around, how can Dreger assume that the anti-Bailey crowd didn’t have truth and justice on its side, just because they were nasty and vicious to her friend?

In Dreger’s telling, Andrea James is a scary asshole who sends her possibly-physical threats via email, and Michael Bailey and Napoleon Chagnon are precious cinnamon rolls.  But fighting for truth-and-justice is not the same as identifying the Nice People and the Mean People.  These may in fact be (I hate to say it) largely unrelated endeavors.

A serious book about activism, science, truth and justice would begin with these disquieting possibilities, and then explore from there.  (One example that book might look at: Dreger’s earlier non-nasty activism for intersex people has gotten stuff done.)  Dreger’s book instead stays in an overly cozy universe, where “fighting for good” and “defending her lovable buddies against the bad meanies” can never be conflicting goals.

“What if nasty activism is sometimes necessary to get the job done?  What if simply having both truth and justice on your side isn’t enough?”

I really want a more serious treatment of this question from someone sensible. Obviously I really hope the answer is no, and I am tired of discourse from people who seem like they would actively prefer the answer to be yes (although maybe it’s only my bias making them seem that way). But yeah, does anyone know of any decent book-length discussions of the issue which look at real-life situations?

The answer is no because the moment you decide that nasty activism is necessary to get the job done you completely lose the capacity to distinguish “cases when nasty activism is a distasteful necessity” from “cases where nasty activism can be used to punish people for saying things that make me upset, or just for the crime of being unpopular and perceptible to me.”

This proves too much. If one contrasts nasty activism with violence, one could say the exact same thing, and to some degree it’s quite true (PoliceMob being a very good example of insufficiently restrained violence; contrast with BadSJMob), but the actually correct answer would probably be “it’s possible to use it usefully, but most of the time it’s a bad idea, and completely abstaining from it is way less likely to be harmful than using it indiscriminately”.

TL;DR and a fucking massive disclaimer to not get this misunderstood and misrepresented by everyone: I think most nastiness is excessive and unwarranted, but consider it at least possibly excusable in some situations where people are reacting to sufficiently shitty things, and Bailey is up there in the list of “those few cases where nastiness and extremism might be okay”, along with the likes of Judge Rotenberg Centre etc.; and it’s really shitty that if I say “Bailey is terrible, scorn him”, some asshole somewhere will take it as endorsement of heaping abuse on some kid whose only crime was not being up to date with their shibboleths.

(descriptions of dirty tricks, nasty sj, and other dark underbelly-of-the-world things below the cut)


(via oktavia-von-gwwcendorff-deactiv)

lovestwell:

nostalgebraist:

Another post on Galileo’s Middle Finger, having finally finished the book.  (Previous posts: Maria New and prenatal dex, also various posts in the tag #michael bailey cw?)

Galileo’s Middle Finger (hereafter GMF) is a strange book.  On one level, the book’s content is pretty easy to make sense of: Alice Dreger has been involved in a number of dramatic academic controversies over the course of her career, and she figured (sensibly enough) that people might enjoy reading a book that retells these stories.  To some extent, she just presents the book as “a memoir of the controversies I’ve been involved in.”

However, she also claims that these stories are connected by an overarching theme, which is something like this:

“Scientists and activists often find themselves at odds, on opposite sides of angry battles.  But everyone should recognize that truth and justice are intimately connected: you can’t help the victims of injustice if you don’t care about the facts of the situation, and if you’re in a unique position to explore facts (such as an academic job), you ought to steer your investigations toward the social good – not by sacrificing the truth, but by looking for the truths that can help.  Activists need to be more concerned with truth, and scientists need to be more concerned with justice.  And if both sides followed this advice, they would be at odds far less often.”

All of this sounds very agreeable to me; I think I already more-or-less agreed with it before I read Dreger’s book.  But do Dreger’s accounts of various controversies actually serve as useful examples of this stuff?  Not always.  And Dreger’s attempts to link everything back to her theme produce some awkward results.


Besides a few minor subplots, there are three controversies narrated in GMF.  First, she narrates the controversy over Michael Bailey’s book The Man Who Would Be Queen.  Second, the controversy over Patrick Tierney’s book Darkness in El Dorado, which accused anthropologist Napoleon Chagnon of genocide as well as various other wrongdoings.  Third, Dreger’s investigations into Maria New and her struggles to get her criticisms recognized by government bodies and the public.

Of these three, it’s the Tierney/Chagnon case that most directly fits Dreger’s theme.  Tierney’s book was a work of shoddy hack journalism which made spectacular allegations that have been uniformly refuted by later investigators.  (N.B. Tierney made many allegations, and some of the more minor ones have been less clearly refuted, but those weren’t the ones that made headlines.)  Nonetheless, shortly after the book came out, the American Anthropological Association quickly endorsed Tierney’s book – the academic equivalent of reflexively believing a callout post without checking the sources.

Reading this in terms of Dreger’s theme seems straightforward: in its concern for justice, the AAA neglected the value of truth, and thus failed to even serve justice.

Even here, though, the theme strains a bit.  The Tierney debacle was not exactly a conflict between “activists” and “academics”; the people under-valuing truth in the service of justice were the academics of the AAA.  (Tierney could arguably be called an “activist,” but Dreger treats him – rightly, it seems to me – as a hack journalist from whom more concern for the truth cannot be expected.)

The Maria New story also lacks a clear instance of an activist failing to sufficiently value truth.  In that story, Dreger is the activist, raising ethical concerns from the outside about an established academic, and her activism is directly grounded in science that she believes that academic is ignoring.  She may intend this as an example of “activism done right” (about which more later), and/or as a case of an academic caring too little about justice.  But it’s not as though New is ignoring justice because truth is her only value; as Dreger notes, her prenatal dex work has produced little in the way of academic knowledge.  So again, it’s hard to see this as an illustration of the theme.

So far, it looks like Dreger has failed to exhibit an example of activists behaving badly, although this is crucial to her theme.  The third story (well, first as presented in the book), about Michael Bailey, is her main (and only) example of this.  But of the three stories, it’s that one that fits the theme least well.

Dreger’s account of the Bailey controversy shares a quality with her account of the Chagnon controversy: both are told as stories of lovable and humane, if out-of-touch, researchers being persecuted by ignorant people who don’t understand them.  Dreger spends a great deal of text talking about how much she personally likes Bailey and Chagnon – Bailey is a personal friend, Chagnon she met while investigating that controversy.  As “characters” in the book, they downright glow.  They’re funny, they’re good company, they both have cute and harmonious marriages.

It makes sense to write stuff like this in order to humanize people who have been demonized by others.  But one has to note here that none of this bears on the “truth” side of the things.  It’s certainly possible for someone to have committed genocide and still be a warm and sparkling conversationalist at the dinner table; it’s possible for Michael Bailey to be a great guy if you know him personally, and nonetheless to have been wrong about trans women.

With Chagnon, this tension never becomes relevant, because as a matter of simple fact, Chagnon was exonerated by multiple serious investigators.  With Bailey, the tension is glaringly relevant, because the issue of whether Bailey is actually right never gets fully addressed in Dreger’s treatment.  Indeed, she treats it almost as an irrelevant side issue.  Where is the value of truth here?

To be fair, Dreger does put her beliefs on the table about the issue.  But these beliefs seem to reveal little serious interest in the questions involved.  She seems to have uncritically bought the Blanchard-Bailey line – possibly because she only cares about these issues insofar as they affect her good friend Michael Bailey? – and to have done little investigation into academic work on transgender beyond this.

Astonishingly, for instance, the phrase “gender dysphoria” never appears in GMF at all.  (A word-search for “dysphoria” turns up only one result, in the title of a Blanchard paper cited in the endnotes.)  When Dreger presents her account of trans women, she talks about (for instance) transitioning as a choice made by feminine gay men in order to better fit into homophobic social environments, stressing that these people might not have transitioned if feminine gay identities were more accepted in their local environments.  I’m willing to believe this happens sometimes – but Dreger seems to actually not know that gender dysphoria is a thing.  This is in a book published in 2015.  One wonders if she’s ever even looked up the condition in the DSM (which changed the name from “Gender Identity Disorder” to “Gender Dysphoria” in the 2013 DSM-5, but even before that had included dysphoria as one of the two major diagnostic criteria).

Dreger has a page on her website, written after GMF was published, in which she responds to questions about “autogynephilia” and states her current positions.  Again, she never mentions gender dysphoria.  Of Blanchard’s androphilic/autogynephilic typology, she says that “I think what I’ve seen from the scientific clinical literature and socioculturally suggests this division makes sense.”  She does not provide any citations, and does not address critiques (see here) that the data show a continuum which does not separate well into two clusters.

I belabor all of this because Dreger’s indifference to the truth here simply makes GMF fundamentally incoherent.  I agree with Dreger’s theme; I have no clue how she thinks the Bailey story illustrates it.


But wait – Dreger’s claim is that activists value truth too little in their quests for justice.  Does this hold true for the activists who attacked Michael Bailey?

Again, Dreger seems to not much care.  She devotes a lot of space to the claims made by these activists, but mostly to express confusion over them.  Noting that some of them display what look to her like signs of autogynephilia, she scratches her head: why are they angry at a book for talking about autogynephilia?  One would think that someone in Dreger’s position – someone interested in getting to the bottom of situations where truth and justice appear to conflict – ought to answer a question like this.  Dreger doesn’t.  Her attitude is basically: “who are these weird people attacking my friend Michael?  What do they want?  They’re so confusing!  Michael is a scientist, so maybe they don’t like science?  Jeez, who knows!!!”

What she substitutes for consideration of these issues – and let me be clear, this is not nothing – is a detailed, blow-by-blow account of the nasty, dishonest ways in which the activists tried to ruin Michael Bailey’s reputation.  They were, in fact, really nasty.  But people don’t just do things like that for no reason.  What about the larger questions of truth and justice here?  Why do these activists believe Michael Bailey is so harmful?  Could it be the case that Bailey is harmful, to the point that defaming him is a net good?

Dreger never mentions this sort of idea, but it hangs uncomfortably over her whole book.  She bemoans the fact that her work on Maria New – which is generally polite and non-nasty, if very harsh on New – has failed to make appreciable waves in the world, beyond loading the first page of Google results with dex-critical pages.  On the other hand, Bailey’s book is now solely known as the subject of a stormy controversy, which received huge amounts of media discussion.  What if nasty activism is sometimes necessary to get the job done?  What if simply having both truth and justice on your side isn’t enough?  And, putting it the other way around, how can Dreger assume that the anti-Bailey crowd didn’t have truth and justice on its side, just because they were nasty and vicious to her friend?

In Dreger’s telling, Andrea James is a scary asshole who sends her possibly-physical threats via email, and Michael Bailey and Napoleon Chagnon are precious cinnamon rolls.  But fighting for truth-and-justice is not the same as identifying the Nice People and the Mean People.  These may in fact be (I hate to say it) largely unrelated endeavors.

A serious book about activism, science, truth and justice would begin with these disquieting possibilities, and then explore from there.  (One example that book might look at: Dreger’s earlier non-nasty activism for intersex people has gotten stuff done.)  Dreger’s book instead stays in an overly cozy universe, where “fighting for good” and “defending her lovable buddies against the bad meanies” can never be conflicting goals.

>how can Dreger assume that the anti-Bailey crowd didn’t have truth and justice on its side, just because they were nasty and vicious to her friend?

Because they sought to maliciously interfere with the scientific process, and to censure a researcher for arguing a particular theory. Dreger is pro-science. Dreger is anti-muzzling. Andrea James et al. unmistakably tried to muzzle Bailey and ruin his career because they didn’t like the theory he was proposing. Therefore they’re bad activists. Dreger, when she was an activist, didn’t try to stop Maria New because of articles New was publishing; she was trying to stop New’s unethical (in her view) actions towards patients. Therefore she was a good activist.

This view may be naive, but I struggle to understand how you see it as confused, or why you continue to charge Dreger with ignoring the object-level truth. Dreger believes that science is our best way of getting to the truth, and she’s not shy to repeat it again and again and again. She likes people who insist on doing science, including and especially science perceived as “problematic” by various activists. She dislikes activists who seek to destroy the careers of aforementioned people, and offers her own history of activism, which according to her did nothing of the kind, as a better alternative. This you dismiss derisively as “defending her lovable buddies against the bad meanies”, thereby missing the point. You make it seem as if she met Bailey, really liked him as a person, and continued to twist the facts on the ground into this lovable-Bailey theory, ignoring things that wouldn’t fit. The causality arrow goes in the other direction: those people became her lovable buddies *because* she perceived them as not compromising on science.

OK, but what if the meanies are correct on the object level, you keep asking. What if autogynephilia is bunk? Doesn’t Dreger have a duty to address this possibility? Suppose a Fermat’s Last Theorem crackpot beat up several number theorists because they wouldn’t address his latest proof, and someone wrote about this, and you asked: but how come they don’t address this latest proof? Isn’t it conceivable that it might be correct? Yes, it’s conceivable, but also beside the point, the point being: don’t beat up mathematicians. Did I load the dice too much in this analogy? I certainly did, so let’s make it a maverick algebraic geometer who beats up several established mathematicians because they gave her articles bad reviews and she can’t get published. Is it still maybe reasonable to say: don’t beat up other scientists, and whether or not your theory is correct is largely beside the point? Maybe it is, but how can we tell, if you keep threatening to thrash anyone who disagrees? Sorry, but in that climate, references to published articles that entirely support your theory aren’t very convincing. It’s silly to try to talk about the object level while studiously ignoring the fact that the meta level is thoroughly poisoned.

P.S. You also complain at length that Dreger never tries to explain why the “meanies” seek to muzzle the “lovable buddies”, and what their motivation could be. She does, in fact, talk about it repeatedly; in the Bailey case, her view is that autogynephilia threatens the popular metaphor/theory of “a woman’s brain in a man’s body” etc. In her view, (some) trans activists see this theory as indispensable to promoting trans rights, and of course also as the truth; therefore autogynephilia is a vicious lie seeking to deny the reality of the trans individual’s true gender. It’s a fair criticism that Dreger doesn’t try very hard to see things from trans activists’ perspective, and that her explanations of their motives may be overly simplistic (in particular, that she prefers not to mention dysphoria at all is suspicious). But it isn’t a fair criticism that she doesn’t try at all.

Andrea James et al. unmistakably tried to muzzle Bailey and ruin his career because they didn’t like the theory he was proposing. Therefore they’re bad activists. Dreger, when she was an activist, didn’t try to stop Maria New because of articles New was publishing; she was trying to stop New’s unethical (in her view) actions towards patients. Therefore she was a good activist.

I don’t see a clear-cut division between these two cases.  In both cases, the critics have an ethical case and a theoretical/academic case, and they would be vindicated if these cases add up to the conclusion that the researchers are not generating enough useful knowledge to justify the harm they are doing.

James et al. didn’t just object to the Blanchard theory out of abstract disagreement; they believed that this theory was harming patients (insofar as it was believed by clinicians) and worsening the day-to-day lives of trans women (insofar as it was believed by the public).  Likewise, Dreger doesn’t just object to New’s methods; she also believes that New is not generating useful knowledge (her papers are few and methodologically poor).

In both cases, it is hard to separate “this theory is wrong” from “this theory is harmful,” since the two are deeply intertwined (as, per Dreger, the two tend to be).  Dreger is not just concerned for New’s own patients, but for the patients of other clinicians who have bought New’s take on the issue.  For instance, in GMF she writes

After pulling all the published information I could find and looking for evidence of proper ethics and scientific oversight, what I was seeing just seemed to confirm our worst fears. Besides promoting the intervention via the support group for CAH and her own foundation Web site, when writing about CAH for various textbooks, Dr. New had made a point of plugging prenatal dex to other doctors, writing as if it simply was the standard of care among clinicians in regular practice. As a result, all over the country, obstetricians and genetic counselors were using prenatal dex believing it to be safe and effective.

Note that in some respects it would be easier to make a case that Dreger is trying to “muzzle” New because of her theories than to make the analogous case for James and Bailey – because at least New is doing real, unique research.  (If only because everyone else has held back on doing the same research for ethical reasons!)  Bailey, by contrast, was merely writing a popular book providing narrative illustrations of a scientific theory he already believed was well-established.  Dreger in fact mentions this in the course of defending Bailey:

For the purposes of his book, Bailey wasn’t engaging in novel scientific research of this type [i.e. the type that would require IRB approval]; he was just picking and choosing stories from real-life people he met to illustrate scientific theories he believed were already firmly established. One might try to claim (as complaints against him hinted) that in choosing whom to write about in his book, Bailey was engaging in psychological research to test Blanchard’s theory. But that would attribute to Bailey a more open mind than he in fact had about male-to-female transsexualism. The truth was that he had become a convert to Blanchard’s taxonomy long before he wrote about it. To say Bailey had been doing novel science in his book would be like saying that if you were on a walk with an evolutionary biologist and she chose to point out to you an evolutionarily interesting behavior of some nearby birds, she was doing research to test the theory of evolution. The personal stories in Bailey’s book were really just window dressing for a store Bailey had long since bought.


I don’t want to get too far down the rabbit-hole of these two cases and all of their “object-level” details, here.  My point is that once you look at such details, it becomes difficult to judge these cases on the sole basis of a simple “pro-truth, anti-muzzling” principle.

Moreover, such a principle cannot be applied until one looks closely at the merely “object-level” details.  Questions about the wrongness of muzzling a researcher depend on the details of what that researcher is doing and whether it is really contributing to the common store of knowledge.  Questions about research ethics are inextricable from questions about academic theories, because often the ethics will look different depending on which theory you believe.  (Dreger has to cite studies in order to convince people that prenatal dex is unsafe; I imagine Maria New would have some research-based argument for why it is safe.  This stuff is not orthogonal to the ethical questions!)

(via lovestwell)

@rendakuenthusiast

I’m confused by your post – you say there is an emotional dividing line for you between harm that violates obligations and harm that doesn’t, and you provide an example of something you consider “harm that doesn’t violate obligations.”  In order to see what your dividing line looks/feels like, I would need to also have an example of something that you consider “harm that violates obligations,” and a description of how the two feel different.

Or maybe I’ve misunderstood, and the line you’re drawing is that (as you say) you feel that failing to do a “good but not obligatory” thing is “not morally [your] problem.”  It seems to me that there is more going on here, though, than the original distinction – there is also the concept that people have the absolute right to choose certain things.

The concept of an absolute right to choose one’s romantic partners makes sense to me, but clearly (?) not all actions that seem “good but not obligatory” involve choices to which this sort of absolute right is attached.  For instance, if you donate some fraction (perhaps zero) of your income to charity, you may have various arguments for why you do not donate more, and one common such argument is that donating more is supererogatory rather than obligatory – but it doesn’t seem to me that one has some sort of “absolute right to decide how to spend one’s money,” such that it can never be obligatory to pay money for anything at any time.

So is your dividing line drawn at the boundary between obligatory and supererogatory, or at the boundary between “involves my absolute rights” and “doesn’t involve my absolute rights”?

(via rendakuenthusiast)

resisting the badbrains when you know it’s badbrains is actively moral, not just permissible

@nostalgebraist

(via ursaeinsilviscacant)

[snip]

well the distinction is that with things that are obligatory, if you don’t do them you’re doing something morally bad; with supererogatory things, if you don’t do them you’re not doing something morally bad. Would you say that not fighting against badbrains is morally wrong/immoral?

(via cromulentenough)

This distinction only makes sense in theories that distinguish actions with the qualities

“it’s morally good if you do them”

“it’s NOT morally bad if you don’t do them”

from other actions with the qualities

“it’s morally good if you do them”

“it’s morally bad if you don’t do them”

As I outlined in my previous response, I just don’t actually believe in categorizing actions in this way.  If you ask me which category “good action X” is in, I may hem and haw over which of the two best captures how I feel about good actions in general, but this tells you nothing about my opinions of X in particular.

Or: it’s no more or less obligatory than any other good action.  And no more or less supererogatory than any other good action.

(via cromulentenough)

resisting the badbrains when you know it’s badbrains is actively moral, not just permissible

@nostalgebraist

(via ursaeinsilviscacant)

When you say actively moral, do you mean obligatory or supererogatory? (I think those are the right words?)

(via cromulentenough)

I’m not sure I actually believe in that distinction, except when I’m in the throes of badbrains, so it’s a bit subtle.

The “permissible” part is about the ethics of the anxiety/depression/whatever, which can make it feel like you’re obligated to agree with it and thus that it isn’t “permissible” to resist it.  Then the point of the quote was that up here in “real world ethics” – as opposed to the pathological ethical standards one feels in the badbrains state – it’s actually good, and not merely neutral or “not very bad,” to resist the badbrains.  (I can’t say if this is obligatory or supererogatory because those concepts, along with “permissible,” are ones that definitely exist in my badbrains-ethics but not necessarily in my actual beliefs about ethics.)

so your non-badbrains ethics doesn’t have a difference between ‘must do this’ and ‘good to do this but not a must’? which of the two don’t exist? maybe it’s just my background, since Islam separates things into 5 categories and i’ve never really questioned it, but to me stuff is one of: ‘must do’, ‘good to do but not must’, ‘neutral’, ‘bad to do but not forbidden’, ‘forbidden’. where everything but forbidden is permissible, and the first two i would consider ‘actively moral’ i guess. by supererogatory i mean the second category, with the first obviously being obligatory.

(via cromulentenough)

I don’t (at the moment) feel like I believe in “obligatory” or “forbidden.”  I think some things are just really bad and some really good, but there’s no non-arbitrary place to draw the line between “really bad” and “so bad it’s forbidden” (or likewise for “very good” / “obligatory”).

As a matter of practical ethics, I think it can be good to make rules that say that certain actions are forbidden or obligatory, in cases where (say) a bad action might be tempting in the moment but would have disastrous and very long-term consequences.  (For instance, “drinking alcohol is forbidden” seems to be an advisable rule for many recovering alcoholics.)

The crucial thing here is that what makes an action suitable for these sorts of obligatory/forbidden rules is the way that action interacts with human psychology (temptation, nonzero time preference, temporarily impaired judgment for various reasons, etc.) rather than any inherent quality of actions as actions.  Trying to use these concepts in “non-practical” ethics (by which I mean judging the value of actions in the abstract, without considering the psychology involved) tends to confuse things – it’s tempting to say that if there are any obligatory actions, they must be especially good ones, but then this leads to strange “overly demanding” systems that say it is obligatory to be a perfect moral saint.

(In practice, it should be the other way around: the obligatory actions should be relatively achievable, and the supererogatory ones should be those that are difficult but “good if you can manage it.”  But “achievable vs. difficult” is a psychological thing, not an inherent property of actions themselves.)

(via cromulentenough)

aprilwitching:

ursaeinsilviscacant:

raggedjackscarlet:

ursaeinsilviscacant:

raggedjackscarlet:

Oh, and, as a follow up,

The Cute People believe you can build a utopia by purging all the Rough People.

After all, no one will be mistreated in a society devoid of people who are bad at garnering sympathy.

I literally don’t understand what you’re trying to convey and I’m worried you’re calling me evil.
And I can’t defend myself because I don’t understand your point enough.

I’m sorry, honestly.

It was a follow up to this post, in which I was vagueblogging at the ””toxic masculinity”” discourse.

So, first of all, let me assure you that this has nothing to do with neoteny.

What I meant by “Cute People” is…

people who have almost never been treated callously

people whose pain almost always fits The Narrative™

“people walking around with neon signs that say ‘EARN STATUS BY PITYING ME’.”

a Cute Person is someone who doesn’t understand that not everyone has access to a hugbox, and therefore believes that the toughness required to live without a hugbox is nothing but empty chest-beating.

a Rough Person, by contrast, has never had access to a hugbox. a Rough Person’s trauma doesn’t fit The Narrative™ , and so when a Rough Person opens up, reactions range from bemusement to disgust. So they don’t open up. And then the Cute People say “Your refusal to open up shows that you are fundamentally bad. Your presence ruins our infinite circlejerk of pity. go away.”

As for the original post up there, well… consider it a response to every SJ thinkpiece that could be summarized as “If you want me to be less ableist, you aspie freaks should be less contemptible”

Thank you for the explanation. And don’t feel bad that I initially didn’t understand. It isn’t possible to write in a way that nobody will ever misunderstand.

This is a totally valid point.

so it is certainly possible to become traumatized by way of situations that don’t involve interpersonal cruelty, brutality, or callousness– for example, by surviving a natural disaster or a terrible car accident, or by witnessing the death of a loved one– but i suspect what’s being talked about here is more in the line of abuse, rape, bullying, etc. victims whose experiences fit into a particular narrative and whose expressions of pain are considered appropriate and legitimate within a certain subcultural context vs. those whose experiences don’t, and whose expressions of pain aren’t.

i wouldn’t say in that case that “cute” people (i hate this terminology, btw) are generally those who have “rarely experienced callousness”, nor are they generally people who are intentionally trying to idk manipulate or bamboozle others into pitying them on purpose. i’d say they are damaged people who have been burned a lot by life, who have finally found some group or clique that actually listens to them, validates them, gives them a coherent narrative to shove their pain into, in many cases, and treats them nicely. that is pretty heady stuff, and i think it sometimes renders such people myopic and insensitive when it comes to other people’s issues with trauma– it can be difficult to realize that what worked well and was maybe even lifesaving for oneself might be incredibly detrimental (or just completely inaccessible) to another person. there is also the fact that if the group is heavily focused on promoting a specific trauma narrative, the members do often end up having difficulty with the idea of valid traumas that exist outside that pattern, or that actively go against it in certain ways. (but this happens with a lot of different groups/subcultures and types of narrative– i don’t think it’s exclusive to “sj” and i think a lot of “sj”-dominant trauma narratives are prominent/emphasized precisely because they still tend to be invalidated or invisible in the culture at large. which is not to say that people whose traumas fit NO established or widely recognized narrative don’t have it even worse, in terms of what support they can access, whether anyone is supporting or helping them at all, etc.)

anyway besides being quite belittling, i sorta object to “cute people” vs. “rough people” simply bc i think it obscures the actual point you’re trying to make– a lot of people who are prominent in internet “sj” circles and who tend to be mean to people outside those circles are pretty vitriolic and blunt and sarcastic and overtly angry. since they fit the narrative + have clout within their social group for other reasons, their over-the-top anger-spillage is either considered righteous and valid, or tolerated as a sympathetic, justifiable response to all the ways they’ve been burned in life. meanwhile, a lot of people who would probably fit into your “rough” category are closed off and shy or incredibly soft-spoken and doormat-like– or they deal with their trauma by being very jokey and dismissive and “lol like that would ever bother me” about it– or they deal with their trauma by putting on a really lighthearted, sweet, or fun! persona and just never actually talking about/addressing the trauma directly– or they deal with their trauma by sincerely embracing cheesy inspirational quotes and a “bootstraps! :)” mentality and, idk, probably also collecting those hideous little precious moments figurines. i absolutely agree that people who have certain types of trauma and who respond to their trauma by becoming aggressive, cynical, bad-tempered, and/or completely unwilling to display emotional vulnerability in any context are misunderstood and denied support. but they are not the only kind of person whose reactions help ensure that they’ll be left in the cold.

I’m reblogging this bc I thought RJS’s first post on this subject was really interesting at least as I interpreted it, but now I’m not sure if I read it as intended.  So I want to spell out my original interpretation

Which was that “cute people” and “rough people” are not categories that actually exist, but part of the structure of a viewpoint I see expressed sometimes (most often on tumblr).

Specifically, the viewpoint of people who are generally in favor of emotional expression and asking for emotional help, but who tend to mock male emotional expression (“manpain”) or men asking for emotional help (“demands for emotional labor”)

This isn’t self-contradictory or anything.  But it does have the odd, usually unstated implication that men should become stoic, i.e. should learn to deal with their pain through their own psychological resources alone, without the help of others and even without showing that they’re suffering.  This is, of course, a traditional gender role.  I do wonder whether the people who implicitly say this have thought through the fact that they are advocating for a traditional gender role.  (Maybe some of them have.)

(This is something about the “emotional labor” conversation that has always confused me: people tell me that women traditionally do a disproportionate amount of emotional labor in relationships, but isn’t the traditional cliche relationship one in which the man never needs emotional help but provides a steady, firm shoulder for the woman to cry on when she’s being emotional?)

Anyway, I took RJS’s “rough people” and “cute people” as very gender-loaded terms that nonetheless weren’t quite equivalent to “men” and “women” (or “men” and “non-men,” which is how I tend to see this stuff broken down).  The idea being that if this viewpoint were “put into practice,” it wouldn’t necessarily create precisely the old gender role.  In particular, rather than stoicism being seen as a positive manly quality (and its absence, in men, as a failure), stoicism would just be largely invisible, because it’d be widely believed that expressing one’s emotions was good and stoicism was pointless – and thus stoicism would be re-invented by people who couldn’t seek sympathy in a socially acceptable way, and practiced silently without being spoken of.  The stoics here would not all be men, by any means, but the whole state of affairs would come about for gender-loaded reasons.

So, uh, that’s how I took the original post and it seemed interesting to me?

(via aprilwitching-deactivated201808)

On the monster at the end of Neoreaction: A Basilisk

@philsandifer

The broad form of the problem you’re identifying here is a pretty common one when doing any sort of cultural or media criticism in which gender comes up. For the most part, my approach tends to be to be more interested in dealing with existing culture than to try to deal with an imaginary utopian culture, and to accept that the centuries of connection between femininity and purity are not simply going to be wished away, so one has to engage with this sort of thing, mess and all.

OP’s discussion of trans women seems particularly apropos here - as she notes, trans women subvert gender. And yet they’re fundamentally invested in its reality. To be a trans woman is not only to say that “women” and “men” are real and meaningful categories, it’s to say that there exist things (like yourself) that decisively do not fit within one - an absolute denial of the idea that you could possibly be a man.

I don’t think “I want to avoid it, as much as possible, when I’m thinking about actual flesh-and-blood people” is quite the right disclaimer, therefore. Something more like “I want to remain mindful of its problematic elements and always think about what harm using this culturally prevalent concept might cause.”

But when, as with NAB, I’m dealing pretty directly with white patriarchy, I tend to think that a straightforward embrace of the feminine in its sense of “that which is consciously excluded and marginalized by patriarchy” is fairly safe and doesn’t need too-constant disclaiming.

I disagree with the last paragraph.

The reason is that I don’t think the senses of femininity (empathy, receptiveness, purity) mentioned upthread are anything like “that which is consciously excluded and marginalized by patriarchy.”  They are in fact ideals constructed by patriarchy.  Patriarchy has room for women as long as they’re like that.  The ones who get excluded are (among others) the ones who aren’t.

Victorian patriarchy had plenty of room for femininity, if in the form of the Angel in the House.  She was empathetic, receptive, and “above all – I need not say it – she was pure” (as Virginia Woolf put it).  If we lived in Victorian times and you proposed to embrace those “feminine” qualities, you wouldn’t be embracing something excluded.  You’d be embracing an existing ideal, the kind of thing people would laud in book-length poems.

Or back in ancient Greece, we have Penelope, who retains her “purity” for ten years while her husband is away, and who doesn’t mind that he was sleeping around the whole time.  Somewhat different ideal, same purity and domesticity.  The ideal Odysseus (and his fellow Achaean soldiers) embodied was pretty violent and awful, but I don’t think the solution is to demand that everyone be Penelope.  Many, perhaps most people are unable to tolerate being Penelope.  Or the Angel in the House.  Or the yamato nadeshiko.  Etc.

Patriarchy doesn’t just devalue femininity, it constructs its own ideals of femininity; as a woman, you’re still lower even if you do femininity “right,” but if you do it wrong, hoo boy.  It seems to me that “embracing” a patriarchal ideal of femininity will largely comfort the comfortable and afflict the afflicted.  After all, that ideal has already been embraced, and the women who can’t or won’t meet it are getting the short end of the shortest stick.  (Some people are very tired of hearing “be pure, be empathetic, be yielding.”  Some people can’t, and are punished for it.)

(See also @polyaletheia‘s point #3, about how white supremacism doesn’t exclude femininity from its notion of whiteness, but specifically includes it as a thing that must be preserved from defilement.)

(via eruditorumpress)