journalgen:

Journal of the Commission on Mechavengeance and Anthroastrophysics

Reasons and Persons had a lot of the qualities I like about analytic philosophy and relatively few of the qualities I don’t like.

To some extent it’s just that it contains very little metaphysics, and the analytic treatment of metaphysics is probably #1 among the things I don’t like in analytic philosophy.

But also, there is something about Parfit’s way of investigating a question that is refreshing to me.  He does a lot of the usual analytic things, like judging general principles on the basis of extreme/contrived counterexamples.  But he has this quality of not stopping after he’s made a technical contribution.

After poking a hole in some principle, he thinks about how the principle might be fixed; he regularly mentions potential objections and concedes they have some force, or admits that he finds both the objection and its negation “reasonable”; again and again he notes the existence of people who have intuitions contrary to his own, or admits that his conclusions are hard to swallow and tries to make them go down easier for the reader, instead of just saying “QED, bitches.”

He doesn’t talk about it much in this book – though apparently he does in his final book – but he sees himself as trying to resolve disagreement, which is not quite the same as winning arguments or constructing proofs.  He doesn’t find it satisfying if others merely accept his conclusions grudgingly, out of an abstract allegiance to reason, with the attitude of the exasperated “I guess” meme guy.  He wants to find what can, upon reflection, be happily assented to by all.

neurodivergent-karen:

Last night I dreamed that I was an office worker who got commissioned to create this image for a tiny country run by amphibians. In the dream this image also got 4k notes for some reason?

So come on tumblr, we can’t have a tiny nation run by amphibians but there’s another part of my dream we can make a reality.

(via funereal-disease)

type12error:

nostalgebraist:

nostalgebraist:

I know next to nothing about bioinformatics, and have a very basic question about it:

I keep seeing those studies that try to identify genetic factors for traits or diseases in big data sets, and end up with results like “we identified 300 SNPs that collectively explain 5% of the variance.”  And this happens with things that are thought to be very heritable based on other evidence, so people talk like there’s something missing.

There are at least three missing pieces that could cause such a gap:

(1) factors not present in the SNP data at all (this is sort of my catch-all category)

(2) SNP-SNP interactions (nonlinearity)

(3) linear effects of so many SNPs that we can’t get significance for most of them with our sample size

Do we have any sense of which is the main factor?  I’m particularly interested in (2) – I know people use random forests and stuff sometimes, and it’d be interesting if we could get good cross-validation performance out of a nonlinear model, even if it wasn’t interpretable or didn’t have a well-defined hypothesis testing framework.

(@raginrayguns may know?)

Now that I think about it, (3) is poorly phrased, since whether or not you get significance depends on how you correct for multiple comparisons.  Since some of these studies use conservative corrections like Bonferroni, they have deliberately low power, focusing on identifying SNPs that are really associated with the trait and having few false positives.  If we expect something to be caused by very many SNPs, it’s not surprising that this will find relatively few of them.

If that’s the case, I guess I’m wondering why I only see these studies and not their complement, the studies that just try to predict outcomes.  Perhaps those studies exist, but if so it’s strange that (reputable) people do prediction on the basis of the conservative studies.  E.g. https://dna.land/ lets you upload genetic data and see a “Trait Prediction Report,” but their trait predictions seem to rely on studies like those I described in the OP, which identify a few SNPs with confidence but have little predictive value.
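For a concrete sense of how conservative the Bonferroni correction is at genome-wide scale: it holds the family-wise error rate at α by requiring each individual test to clear α/m. A toy calculation, with made-up but typical figures:

```python
# Bonferroni keeps the family-wise error rate at alpha by requiring
# each individual test to clear alpha / m. With about a million SNPs,
# the per-SNP threshold becomes tiny, so modest real effects rarely
# reach significance. (Illustrative numbers, not from any study.)

alpha = 0.05
m = 1_000_000                 # rough number of SNPs tested genome-wide
per_snp_threshold = alpha / m
print(per_snp_threshold)      # about 5e-8, the familiar genome-wide cutoff
```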

The “just try to predict outcomes” thing is called a polygenic score. They’re not uncommon.
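For concreteness, a polygenic score is just a weighted sum: each SNP’s allele count times its estimated effect size from a GWAS. A minimal sketch, with invented SNP IDs and effect sizes:

```python
# Minimal polygenic-score sketch. Real pipelines add clumping,
# p-value thresholding, etc.; this is only the core arithmetic.
# All SNP IDs and effect sizes below are invented for illustration.

def polygenic_score(dosages, effect_sizes):
    """dosages: snp_id -> allele count (0, 1, or 2);
    effect_sizes: snp_id -> estimated per-allele effect (beta)."""
    return sum(beta * dosages.get(snp, 0)
               for snp, beta in effect_sizes.items())

betas = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}   # hypothetical
person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}            # hypothetical
print(polygenic_score(person, betas))  # 2*0.12 + 1*(-0.05) + 0*0.30 ≈ 0.19
```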

(via type12error)

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think.

The rise of 'pseudo-AI': how tech firms quietly use humans to do bots' work →

argumate:

In 2016, Bloomberg highlighted the plight of the humans spending 12 hours a day pretending to be chatbots for calendar scheduling services such as X.ai and Clara. The job was so mind-numbing that human employees said they were looking forward to being replaced by bots.

fuckin’ hilarious

The privacy and transparency issues raised here are real and important, but I don’t like the conflation between various ways of involving humans in bot services, some of which would make the services either impossible to build or dramatically worse if removed.

The worst instance is this line:

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

The linked article talks about how “M” is going through a small, closed testing phase in which humans are doing a lot of the work, with the goal of generating training data that’s representative of the real problem domain.  The author of the Guardian article apparently does not realize that this is what “investment in AI” looks like.  For better or for worse, current “AI” systems depend on supervised learning, and there is no such thing as having invested enough that targeted training data is valueless.

But I also want to talk about this:

In 2008, Spinvox, a company that converted voicemails into text messages, was accused of using humans in overseas call centres rather than machines to do its work.

The Spinvox case sounds complicated, and again the transparency issues are real, but on a technical level it sounds like the company is doing something good and appropriate:

She added that with almost 100 million customers worldwide it would also be utterly impractical for people to transcribe even a relatively small percentage of the messages. […]

The company maintains that it uses call centre staff to ensure quality of service. The technology gives every text a “confidence” score and those texts with a very low score are pushed through to one of its five call centres across the world. They analyse the call, check the resultant text, and if new words appear that the database does not know, they will include them. The agents themselves have no knowledge of customer, individual or even from which market the voicemail has come. In Argentina, for instance, where SpinVox has 10 million customers through a deal with mobile phone network Telefónica, it has fewer than 70 call centre staff.

I’ve argued previously that the dream of “no human involvement” actually holds automation back, because it refuses to decompose a task into the parts that machines are good at (and can save us lots of time with) and the parts they are still bad at, making the latter the limiting factor in every application.  From this perspective, the hype over “bots” has been a very bad thing.
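The confidence-based routing described in the quoted Spinvox account can be sketched in a few lines. To be clear, the threshold value and both transcriber functions here are hypothetical stand-ins, not Spinvox’s actual system:

```python
# Sketch of confidence-threshold routing: the machine keeps what it
# is confident about; only low-confidence items go to a human agent.
# The threshold value and both transcriber functions are made up.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff; a real system would tune this

def transcribe_with_fallback(audio, machine_transcribe, human_transcribe):
    """machine_transcribe(audio) -> (text, confidence in [0, 1])."""
    text, confidence = machine_transcribe(audio)
    if confidence >= CONFIDENCE_THRESHOLD:
        return text, "machine"
    return human_transcribe(audio), "human"  # low confidence: human takes over

# Toy stand-ins for the two paths
machine = lambda a: ("call me back", 0.95) if a == "clear" else ("???", 0.3)
human = lambda a: "recovered text"

print(transcribe_with_fallback("clear", machine, human))  # ('call me back', 'machine')
print(transcribe_with_fallback("noisy", machine, human))  # ('recovered text', 'human')
```

Even in the toy version, the point of the decomposition is visible: the human only ever sees the fraction of traffic the machine flags, which is how fewer than 70 agents can serve 10 million customers.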

It’s fair for this article’s author to express disappointment over the gap between hype and reality, but the answer is not “companies should make good on their hype,” it’s “what they promised could only have ever been bad, so we shouldn’t demand it.”

tsutsifrutsi:

nostalgebraist:

I did finally finish the second season of Legion and … hoo boy

I was wondering where they were going with the mental illness theme, and uh, they definitely went somewhere with it, that’s for sure!  I kind of wish they hadn’t, now!

On the upside, the last few episodes were emotionally involving, had moments that felt real and raw, and made a (last-minute) attempt to move the show beyond mere stylish randomness.  On the downside, they were a complete mess that felt like two or more distinct storylines jammed together inconsistently and executed too fast, and – more egregiously – contained the most weirdly, brazenly incoherent and unreal portrayal of mental illness I’ve seen in mainstream “serious” fiction in a long time.

Honestly, I’m less angry about it than just plain confused how this thing got into the world in the first place.  Like, do the writers actually expect the audience to share their strange (and factually inaccurate) assumptions?  Are they knowingly straying from reality in favor of a stereotypical cartoon notion of “insanity,” and if so, how (and why) do they expect this to sync up with all the parts of the show that appear to be about real (albeit stylized) things happening to real humans?  (I am a bit angry that the social justice flavored critiques of the ending have taken this stuff completely in stride, but I guess that’s par for the course)

Specifically, the ending involves a long, elaborate set of conflations/confusions between:

1. Common, if awful, personality flaws that people can have without being mentally ill (and many do)

2. Psychopathy

3. Schizophrenia

For every pair of these (1+2, 2+3, 3+1), there are one or more moments where the two are implied to be the same thing, or to be connected by some deep link too obvious to spell out, or the like.  More on this under a cut because spoilers

(Stop me if I’m way off; I haven’t actually watched the show.)

That actually sounds kind of sensible? Like, he is a relatively normal “bad person”; but he is being gaslighted by a group of abusive “friends” into believing that he is a crazy bad person; and this is extremely traumatic, enough to cause a weird dissociative fugue in pretty much anyone, A Clockwork Orange style—but this person likely does have specific traumas that are being dredged up here, making this an even more triggering event. So he ends up painting an impressionistic portrait of a moment of feeling like a world-killing Evil Overlord, (a Van Gogh mania in negative emotional valence—something usually more directly reacted to with self-harm, like with Van Gogh himself, or as described in, uh, that Nine Inch Nails album. You know the one. [All of them.])

Oh, and the beliefs of his “friends” are, I would guess, a statement about how society inescapably views mental illness and rotten character as two faces of the same coin. (See: everyone with NPD on Tumblr, who has to go around explaining all day that narcissism doesn’t somehow force you to do bad things to people, and nobody ever believes them; people continue to think NPD by itself is a sufficient explanation for e.g. abusive parenting.)

Is this maybe a Poe’s Law thing? Is the story hitting you over the head insufficiently hard with the degree to which it’s implying that this is a societal satire: a portrait of a society that tries to both tell bad people they’re really just broken (medicalizing personality flaws) and broken people that they’re really just bad (moralizing mental illness)?

I like this from a pure “free play of interpretations” angle, but it is pretty inconsistent with the show’s moment-by-moment emotional cues (music, shot framing, etc.), and also inconsistent with stated authorial intent (in the interviews I link here).

I guess it’s conceivable that Noah Hawley is straight-up lying in interviews to maintain the sanctity of a planned, later twist – and this would have to be revealed as a twist, since if it’s “true” it has flown over the heads of every critic out there – but it seems implausible.

OTOH, now I’m thinking back to the end of the first season, when @disconcision predicted the show would eventually reveal the superhero stuff was all in his head the whole time.  Back then I was like “nah,” but after Season 2 I’m starting to think that’s the sort of “shocking” cliche this show would embrace, and it would provide room for Hawley’s statements to be technically true, correctly describing the current trajectory of the superhero narrative and implicitly silent about the mundane-reality narrative – or at least only applicable to it when translated across a bridge of metaphor.

(via tsutsifrutsi)