A few days ago I finished reading Lanark by Alasdair Gray.  It’s a book I’d been meaning to read for a long time, since people kept recommending it to me, and I really like another book by the same author (Poor Things).

It was … well, it was really good, in a lot of different ways: structured in a cool and unique way, darkly and drily funny, emotionally involving (sometimes painfully so), often bizarre and dreamlike to the point of being profoundly unpredictable and yet without descending into mere randomness.

But it’s put me in this odd conflicted position, because it was also one of the most distinctively depressed novels I’ve ever read – not as in “dark” or “depressing” (although it is those things), but as in “expresses the worldview of (a certain kind of) depression, and evokes the characteristic thought patterns of (that kind of) depression.”

I’ve never been clinically depressed per se, but I’ve had bad periods that I identify as depressive because they share features with full-blown depression, just with less persistence and/or intensity.  There are some features of my bad periods that overlap with things I’ve heard from some friends and writers with depression, and it’s those features I see in Lanark – i.e., I don’t think all depression has all of those features, but there’s a definite thing there.

I have to leave the house pretty soon, so I may come back later and elaborate on this brief description, but like … there will be a voice, or a character, in my head.  Not a distinct persisting entity, more of a gestalt that will attach itself one day to some cynical blogger I’m reading, and then the next day will take the form of some vague nonspecific person I’m talking to in my internal monologue, etc.  The main characteristics of this voice are that it is

(1) very cynical and very critical of everything, with extremely high standards for all people / things, standards that are an apparently seamless mixture of ethical, intellectual, and aesthetic

(2) very impressive, in a way that makes me feel like it has the “right” to make each criticism it makes; when the voice makes intellectual criticisms it always seems frighteningly smart, when it makes aesthetic criticisms it seems to have impeccably refined taste, and so on

(3) specifically, it feels so perceptive and so synoptic in its understanding of things that its “insights” feel inescapable; it has already prepared for any objection I might make to it, and there is a feeling that it is “playing any game I play at a higher level,” that it thinks what I would think if I were more honest and perceptive, that it is a sort of mentor figure I would do well to learn from

And in short, Lanark feels like a novel written by that voice.  And it’s very good – of course!  The voice is a genius!  But I would feel strange recommending it, or even praising it at all without including this caveat that it is written by a devil that squats on my shoulders and the shoulders of many others (and maybe the shoulders of Alasdair Gray? does he hear the voice in all its particulars, or am I just reading my own experiences into his book?).

So, this is going to sound way darker and edgier than I intend, but – I honestly don’t know what people mean by “forgiveness”

When someone wrongs me, it hurts more at first, and then less later on, sure.  I think of that as closely analogous to a physical wound healing, with about as much moral significance (none).  And yes, I do try to give people second chances, but that doesn’t depend on overcoming my disapproval of first offenses.  (I still disapprove of them, just not enough to preclude a second chance.)

I understand the idea that you have to stop letting anger consume you after a point, and that anger can lead you to wish truly horrible things upon a person for a temporary period before you mellow out and remember that they’re human like everyone else.  But both of these strike me as entirely internal processes, about taking proper care of your own mind.  Yet people talk about “forgiveness” as something you do to someone, or an opinion you express – like it’s some sort of open acknowledgement that the other party’s wrongdoing was different than you’d previously thought/felt, if not necessarily not wrong.

Again, I just think about these things like physical wounds.  If I burn my hand on a stove, it may hurt for a time, and in that time I may be especially wary of touching stoves.  Then, as the wound heals and I forget about it, I may (or may not) lose some of that wariness.  But there’s never any discrete point where I’m like, “UPDATE: hand-burning potential of stoves less of a big fucking issue than I was previously making it.  Stoves and I are cool now.”  I still know heated stoves can burn my hand, and that never becomes less true!  I may forget this property of stoves, occasionally, but I never forgive stoves for it, whatever that would mean!

One thing that stands out to me about my current job in the private sector, relative to my previous life in academia, is that there is a lot more implicit recognition of the importance of structure for getting things done, and a lot less reliance on any individual’s supposed ability to “just do it” through an act of pure heroic will.

Like, I know that “meetings” are not a part of corporate culture that people tend to have warm feelings about – and I’ve already tasted, a few times, the unique frustration of having so many meetings about the thing I’m supposed to be doing that I have no time left to do that thing – but they have their uses, and this fact was downright wondrous to me when I first arrived.

People would talk about a potential task or project, and then – rather than acting like it’ll just happen because we’re all supposed to be Heinlein protagonists here – they would bring in other people at the company with relevant expertise and ask them stuff, and would start blocking out an ad hoc structure for talking about the thing every week, and would ask things like “how does this stack up against the other priorities of the people who’d be working on it?”, or “what level of imposed structure is necessary to make sure this actually happens, given that people have other stuff going on?”, or “can we commit to a date when we’ll either have a minimal proof-of-concept or dismiss this as too costly for the expected benefits?”

I was like, you can just do that stuff?  You can acknowledge that some structure is necessary and helpful, and that every task trades off against other competing tasks, and then talk about the tasks as though you actually want them to get done by real humans in the real world?

Because where I came from, it wasn’t like that.  Where I came from, the only meetings that happened with any regularity were one-on-ones between a professor and a grad student or postdoc, and in those meetings the professor would talk about what they wanted to happen, and the student / postdoc would talk about what they had recently done, and the entire thing was always about the content of the project and never about process.  The process was that someone was told to do something (perhaps something specific, perhaps something vague like “look into this”), and they’d go off by themselves, and do whatever their own personal process was, and then they’d come in next time and stuff would just be done (or ought to be), and no one would talk about how this happened.

And it felt embarrassing to bring up issues of process, because it felt like an admission that you weren’t good enough, that you couldn’t “hack it,” that you hadn’t done the work (which was your own responsibility, not anyone else’s) to develop good organizational structures for your own private activity, structures so good that you could be safely approximated as this primal agent of creation, always capable of going away to your desk (or wherever you work, no one cares, out of sight out of mind) and Just Doing It.

But if you know anything about people, you know that this isn’t actually a good system for getting things done.  And you can do things a better way.  You can do things as if you really cared about the end result, and not just about your own personal virtue.

In some social settings I am a wallflower, and in others I’m a strong and noticeable presence.  Indeed, it feels like I tend to end up at one end of that spectrum or the other, with negligible likelihood of landing somewhere in the middle.

This has always been a mystery to me, and it probably always will be, at least in part.  But here is a hypothesis that does a pretty good job of explaining the pattern: I can only make a splash in a social setting if it is a setting where I can feel assured that, at least sometimes, I have the floor.  (In the public debate/meeting sense of the term, although this can extend to all behavior, not just speaking and being heard.)

I tend to be quiet and awkward at parties, and in casual hangout settings with more than 3 people or so.  Usually, when I reflect on this, I default to the (perhaps relatively flattering) explanation that I’m concerned about talking over other people, or drawing attention away from people who would put it to better use.  When there are so many people around me, what is the likelihood that I have the best (most informed, interesting, funny, entertaining, etc.) thing to say or do at any given moment?

And yet: in meetings and discussion classes, I talk frequently, confidently and at length, to the point that I have to remind myself to hold back so I don’t dominate the conversation or otherwise annoy others.  This puts the lie to the “don’t want to talk over others” explanation, since the exact same considerations ought to apply to these settings – indeed with more force, since they are closer to zero-sum.  (If a meeting or a class is on a strict schedule and can’t run over its time limit, then every extra second I talk is a second someone else can’t talk, if we ignore silences for the sake of a first approximation.)

The difference, I’m now thinking, is that in meetings and discussion classes, once I start to talk I know I have the floor.  In these settings it’s usually considered a faux pas to interrupt people, and people are also usually not allowed to get up and leave, so while I’m talking, I know everyone has to listen.  And once I know that, I’m in fact pretty confident (rightly or not) that I have things to say that are worth hearing.

What I’m not confident about, ultimately, is holding moment-to-moment attention in the face of competition.  If I know that at any time, someone could interrupt me with something more appealing – verbal or nonverbal – then I’m lost.  I don’t know how to be continuously appealing, robust to interruptions at every step.  I just know how to do things which, after they’ve been fully completed, I expect to have been appealing as entire wholes.  If I’m in a setting where no one gets to have the floor securely, this feels, inside my head, like I’m afraid of talking over other people.  And maybe there’s some truth to that – but it isn’t that I’m scared of not having good contributions, it’s that I’m scared of not having continuously appealing ones.  (There is a kind of shame associated with this, an awareness that other people have some talent I lack, and that it must be obvious that I can’t stack up to them when I try.)

I think my experiences in online venues also fit this pattern.  On forums and IRC, I’ve sometimes been a notable presence, but usually I have this awkward way of being in my own world, almost talking to myself as the conversation continues around me.  In these systems, messages are presented sequentially, but there are a whole lot of them and hardly anyone reads every single one, so you are competing for momentary attention; left to my own devices, I just ramble on and hope someone finds my messages more interesting than the ones interleaved with them.

This is also the way the tumblr dash works, but tumblr has two advantages for me here.  First, there’s the concept that everyone is creating their own “blog,” which appears on the dashes of their followers almost as an aftereffect.  This allows me to ramble on inside my own world for as long as I please (in this post, for example), and to feel like this is an expected way of engaging with the medium.  (If you don’t like reading this stuff, you don’t have to follow; if you’ve followed, apparently you do.)  Second, when I reply to someone via a reblog, it shows up in their notifications in a way that makes it harder to ignore than someone quoting you in a forum post, a way that makes them have to attend to your message (“give you the floor”), even if it isn’t continually appealing.  (At least it seems that way in my own experiences of receiving both sorts of responses.)

(I’ve been more successful on Discord recently, which is a “competing for attention” venue.  But largely, I think, due to confidence built from knowing that people there already know me from tumblr.)

One of my recent mantras has been “much that is complex is shit”

Which is primarily meant to be a caution against over-complex theorizing, but I enjoy how it works at different levels verbally – many complicated ideas are bad via overfitting (shit in the figurative sense), but also, when a correct account of something has to be complicated (high Kolmogorov complexity), it’s often because that thing was part of the meaningless variegated noise of physical reality (such as literal fecal matter, which contains a lot of bacterial diversity etc. if you happen to care) – things that aren’t compressible aren’t eo ipso important to us, in fact frequently it’s the opposite

2017 felt like a lacuna in my life.  Technically, I did a bunch of things in 2017, and even some things that are now benefitting (or otherwise affecting) me in 2018 – but none of those were new things.  Just following through on plans I had in 2016, many of which took a very long time to get anywhere.  Or losing track of plans from 2016 (like Almost Nowhere), which I am now picking up again.

It feels like in 2016 I had a direction I was going in, and then 2017 was one of those movie title cards reading “A YEAR PASSED,” and now the plot has picked up again, as I am finally far enough along on my self-appointed path for things to become interesting again.  Natural enough for the audience watching the movie, but weird for me, since a year really did pass.

Did something similar happen to anyone else?  I was reading through some old posts and chat logs and stuff, and I kept having to remind myself that things dated 2016 were over a year ago.  They don’t feel like yesterday, but the time between them and now still doesn’t feel quite real – it’s like when you wake up after eight hours of sleep and you’re aware that time has passed, but you don’t remember the previous evening as “eight hours ago” the same way you would remember midday as “eight hours ago” in the evening.  But presumably 2017 was not a fake year for most of you, and posts and chatlogs from 2016 feel a full year+ old to you.

I was just re-reading the extant parts of Almost Nowhere tonight, and thinking excitedly (as one does) about how people were going to react to the next few chapters, and then realizing that I’ve always had it in the back of my mind as a project I would get back to any day now – tomorrow! next week! soon! – and yet it’s been nearly a year since it was updated, and I’m going to have less time than ever to work on it for the foreseeable future, and those “next few chapters” will never arrive at this rate

Of course, free time has never been the limiting factor – I had plenty of other things I was supposed to be doing, back when I wrote the stuff I did finish – and if I really want to keep going, which I do, it’s always possible if I just make it happen

I have various hypotheses about why I can’t just write the way I used to, but none of them are nearly as plausible as the one that says “once you say ‘I can’t just write the way I used to,’ it becomes a self-fulfilling prophecy”

otter4hwpdumplings:

nostalgebraist:

@slatestarscratchpad​‘s new post on stimulant prescribing and ADHD is good.

One thing I’m curious about that was not addressed in the post is the role, in all of this, of computerized tests – specifically, “continuous performance tests.”

I had to take one of these – the TOVA (Test of Variables of Attention) – when I went in to get tested for ADHD in 2014.  (I was in grad school at the time, and wanted to get tested for the same reasons as the “Senior Regional Manipulators Of Tiny Numbers” Scott talks about.)  The tester said I didn’t have ADHD, and at the time I assumed my normal TOVA results weighed heavily in her decision, and (also) that this was normally how such things were decided.

But Scott’s post makes it sound like the usual procedure is a lot more of a human judgment call.  He mentions a variety of things that prescribers do to make themselves feel better about their decisions, but none of them are “administer a computerized test with no human oversight and always follow what it says (or always do so unless you can think of a really good reason not to).”  If nothing else, this would certainly reduce worries about human biases.

I say “if nothing else” there because the same thing would be true of any such test, even if it had no diagnostic value at all.  (Then your decisions would suck – but even then, not because of your biases!)  However, tests like the TOVA may indeed have a lot of diagnostic value.  That is, they may have good sensitivity and specificity in discriminating controls from people with ADHD diagnoses***.

(There are even some studies showing it can discriminate these groups from people who are “faking bad,” i.e. malingering.  This makes some sense if the distribution is light-tailed, e.g. normal, so that if you overdo your faking by just a little bit you’ll stray from a region where 5% of the population lives to a region where 0.01% of it does.)
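That tail-thinning effect is easy to see with a normal distribution.  Here is a minimal sketch (the specific z-values are hypothetical illustrations, not numbers from any of the cited studies), using only Python’s stdlib `statistics.NormalDist`:

```python
from statistics import NormalDist

def upper_tail(z: float) -> float:
    """Fraction of a normal population scoring at least z SDs above the mean."""
    return 1.0 - NormalDist().cdf(z)

# A genuinely "bad" score around 1.65 SDs out sits in a region
# where roughly 5% of the population lives...
print(round(upper_tail(1.65), 3))

# ...but overshooting to ~3.7 SDs lands in a region occupied by
# only about 0.01% of people -- a statistical red flag for faking.
print(round(upper_tail(3.7), 5))
```

The point is just that in a light-tailed distribution, a small absolute overshoot in your faked score produces an enormous drop in how plausible that score is.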


For one thing, if this is true, it means that we could just automate the whole process and get roughly the same results we were getting before, but without worries about human factors getting in the way.

Additionally, if true this is scientifically interesting, in part because of what it says about existing (non-computerized) diagnostic techniques.  Scott’s post describes a very fuzzy, human process with a lot of variation between clinicians.  But apparently this process has enough reliability to agree with a computerized test a lot of the time, which would not be a priori obvious.

Moreover, if (as Scott says) ADHD is one extreme of a continuous/unimodal distribution, then we could use the TOVA to figure out where clinicians are already implicitly setting the cutoff.  Scott writes:

We could still have a principled definition of ADHD. It would be something like “People below the 5th percentile in ability to concentrate, as measured by this test.”

We aren’t doing this, but what we are doing may be accidentally similar to it.  The Schatz et al. 2001 study, discussed further below, includes an ROC curve showing us how many false and true positives we get for various thresholds.  The thresholds are for “T scores,” which are apparently like z-scores except the mean is set to 50 and the SD to 10, so that e.g. a threshold of 65 (the recommended one) means you say everyone who’s 1.5 SDs or more above the mean of the reference population has ADHD.

If everything were normally distributed, you could get quantiles out of this, and translate clinical behavior into cutoffs separating X% of the population from (100-X)% of it.  (Well, sort of – the “reference population” here is neither the full population nor the non-ADHD population, it’s sort of a mixture determined by the selection criteria used to make the normative stats.)  Of course, as usual, the people who made the reference stats don’t say anything about whether the distribution was normal.  But this kind of analysis could be done by someone, in principle, anyway.
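For what it’s worth, that back-translation is a one-liner under a normality assumption.  A sketch, assuming the conventional T-score parameters (mean 50, SD 10; these are my assumption, not figures from the study):

```python
from statistics import NormalDist

def flagged_fraction(t_threshold: float, mean: float = 50.0, sd: float = 10.0) -> float:
    """Fraction of a normally-distributed reference population flagged
    by a given T-score threshold."""
    z = (t_threshold - mean) / sd
    return 1.0 - NormalDist().cdf(z)

# The recommended threshold of 65 is 1.5 SDs above the mean,
# so under normality it would flag roughly the top 7% of the
# reference population.
print(round(flagged_fraction(65) * 100, 1))
```

So a clinic’s de facto threshold could, in principle, be read off as an implicit percentile cutoff – modulo all the caveats above about what the reference population actually is.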


(***Caveat: the most widely cited study I could find on this was Forbes 1988, which – astonishingly – was not blinded.  That is, the TOVA was administered in the process of making the diagnostic decisions against which it was later compared, and the results were [Forbes’ words] “usually known before the final diagnosis was made.”  Forbes goes on to claim that different TOVA results would not have flipped any of the diagnoses, to which my reaction is “okay, great, so if that was true, why did you show them to the clinicians at all?”

However, there are also studies like Schatz et al. 2001 that give the TOVA to people who have already had a formal diagnosis done before the study started, and also to controls.  There are still worries like “are we sure the original diagnoses didn’t use the TOVA or a similar test?” and “given our screening procedures for controls, what base rate of undiagnosed ADHD should we expect in our control population, i.e. how sure are we that some of our control ‘false positives’ weren’t true positives?”, so I still am not impressed with the evidence quality I’ve seen.  That said, if you grant for the sake of argument that Schatz et al. did things right, they get good sensitivity/specificity results too.  Oddly, they interpret their results as bad news for the TOVA, on the basis that it does worse than a test based on parent ratings, but since the original diagnoses themselves involved parent ratings, this doesn’t seem like a fair/useful basis for comparison.)

I use executive function tests like the TOVA in my research. The idea of placing anything except a very small amount of weight on their results for the purposes of a diagnosis makes me pretty uncomfortable.

Most good executive function tasks have low between-subjects variability (like the TOVA, Go-NoGo task, Flanker task, etc), but this is also why they make pretty poor tools for establishing clear individual differences. This idea was explored quite explicitly in a recent paper (Hedge, Powell, & Sumner, 2017), where they evaluated the variance and test-retest reliability of seven commonly used response tasks.

You should honestly consider getting re-evaluated, if you believe that the TOVA was the primary diagnostic tool used to diagnose you. “Real” Adult ADHD diagnoses include parent interviews, several scales (e.g., Brown ADD scales, non-ADHD tests, etc), a fairly comprehensive assessment of your personal background, and so forth.

Also, a cursory survey of the sample sizes for these TOVA studies is pretty damning. Any individual difference study with a sample size under 100 (per group) should be thought of as only preliminary.

I also want to push back on @slatestarscratchpad​‘s apparent trivializing of the DSM for the purposes of diagnosis, although this is only done kind of facetiously (I hope, anyway). The potential for people to malinger the DSM is altogether irrelevant when your main objective is to correctly diagnose individuals who do genuinely suffer from some kind of mental illness. Symptom clusters are, at present, the best tool we have to diagnose individuals and recommend appropriate treatments. In regards to the idea that ADHD could be defined as “people below the 5th percentile in ability to concentrate, as measured by this test”, that test will probably never exist for any mental illness ever. Ever. There is not a single neuropsychological test today for any mental illness that is better or even near to being as good at diagnosing an individual–or customizing their treatment–compared to symptoms and symptom clusters. Because identical symptoms and symptom clusters emerge out of a wide and even non-overlapping range of breathtakingly complex neurocognitive abnormalities, the likelihood that we will stumble on some test that correctly diagnoses the cluster of symptoms we call OCD 99% of the time, or even 95% of the time, is low.

Granted, mere symptoms are still not good enough to help people get the right treatments, which is why there is a massive push among researchers to get clinicians and clinical researchers to consider mental illness with the kind of approach seen in the NIMH’s Research Domain Criteria (RDoC) project. Abandoning the categorical approach of the DSM (“ADHD”, “unipolar depression”, etc.) will not only do more to actually help patients treat their symptoms, but even has the potential to solve the issue of bullshitting/malingering from all the Senior Regional Manipulators Of Tiny Numbers trying to extract drugs from their exhausted psychiatrists in one fell swoop.

Oh my god I need to lie down.

A few reactions:

(1) Thanks for the link to the Hedge, Powell, & Sumner paper – looks very interesting.

(2) When I said I thought my (non-)diagnosis was largely based on the TOVA, I don’t mean that the evaluator just did a quick TOVA and sent me on my way.  She did a bunch of stuff – including an intelligence test (prorated WAIS), getting questionnaires (BAARS-IV) from me and my father and my girlfriend, some other tests, and a conversation about my personal and mental history – and sent me a 9-page report on all of it afterwards.

From my perspective, though, most of this was clearly kinda useless.  She dutifully collected a lot of different kinds of information, but on the evidence of the written report, she didn’t use it to form some sophisticated multi-dimensional view of my case.  In a way, the opposite was true: if she had spent the entire several-hour interaction looking at exactly one aspect of my case, she might have been able to drill down into subtle details, but since she broke the interaction up into many smaller bits, each bit was – of necessity – a lot shallower.

For instance, on the questionnaires, each of the three respondents (me/girlfriend/father) gave markedly different answers from the other two, but instead of diving further into this discrepancy, she just noted it and went on with her interpretations.  Likewise, she had trouble reconciling my appearance of high life satisfaction in the interview with my relatively dark answers on an emotional functioning questionnaire, but rather than explore that further, she just decided on an interpretation (roughly, “he has a lot of problems but is unusually OK with that state of affairs”) and ran with it in the report.  And so on.

Now, perhaps this was just a bad clinician, and what she gave me was still not a “real” adult ADHD test.  But everything I said above could apply just as well to an earlier neuropsych evaluation I had as a teenager (not for ADHD), and to evaluations I’ve heard about from friends.  By which I mean, even if there’s a Right Way to do this stuff, I don’t think I trust actual working clinicians to execute it reliably in the real world.  (This is not necessarily an insult; they’re busy and there are a lot of people out there to treat.)

This is all a roundabout way of saying that I had hoped her assessment was largely based on the TOVA, since the whole “holistically integrate many streams of information” thing clearly failed, as I’ve seen it do in other cases, and pretty much expect it to do in the typical real world case.  A simple computerized test, or a set of them, may be worse than an evaluation done the Right Way by an ideal practitioner – but as a patient I can only access real practitioners, not ideal ones, and I’m not sure I trust them any more than I’d trust some well-designed but completely automatic test.  (Probably less, TBH.)

(3) Relatedly – I don’t think @slatestarscratchpad is arguing against symptom clusters.  He’s talking, in part, about how the understanding of “the ADHD symptom cluster” which is actually applied in practice does not fit the science very well, which seems like the same kind of concern that motivates RDoC.

Whether or not scientifically motivated mental illness categories will ever be diagnosable via “a single test” seems largely to depend on what we count as “a single test.”  I take your point that a single neuropsychiatric test, in the sense we currently understand the phrase “neuropsychiatric test,” is not going to be fully diagnostic, because mental illnesses involve more than one dimension of neuropsychiatric function.  But that doesn’t mean it isn’t possible to take our best understanding of all the dimensions involved, distill it, and make a brief effective diagnostic tool that would fit the normal English meaning of the phrase “a single test.”  Cf. Scott’s old post “Does the Glasgow Coma Scale exist? Do comas?” (although I still disagree with him about the IQ case specifically).

(via otter4dumplings)

There are 2 great fiction books which I’ve arbitrarily decided I want to finish before the end of the year, one of which is extremely exciting and engrossing and the other of which I only have 20 pages left in (and it’s good too)

And yet crystallizing this for myself as a goal has suddenly made me feel like aimlessly browsing the internet to avoid reading, for all “responsibilities” must be avoided, this is the inviolable way of things

Deep down, my mind seems to believe that true reality is continuous and discrete ontologies are just fake plastic toys we come up with because they’re sometimes easy to think about

And this has, unconsciously, influenced my choice of focus in physics/math stuff – even though it seems like a very suspect assumption, and even though it ultimately stems from a non-rational feeling that, like, we humans should be ~too sinful~ to grasp true reality and thus it can’t involve discrete elements, since a sufficiently small set of discrete elements can “fit in the human mind all at once with nothing left out”

(and even a structure that is too large to grasp can be divided into such pieces)