I blogged a fair amount about Legion the TV show when it was running but have hardly thought about it since it ended.  Since its existence randomly popped into my head this morning, I want to write down the following opinion, which I’ve had for a while but never blogged about:

That final scene of Season 2, which I wrote a long rant about when I first saw it?  That scene is really good.  I’ve re-watched it a number of times and it’s … one of the most raw, powerful things I’ve seen on a TV or film screen.  (I don’t watch many TV shows or movies, so maybe that isn’t saying much.)

It’s not like I’ve changed my mind about the qualities that my rant was about.  The scene does in fact reveal a bizarre, apparently very ignorant view of mental illness that I’m shocked could make it onto network TV in 2018.  Nothing in the final season undoes that, and in fact the writers dig their heels in further there.

But the scene is so powerful in part because the writers don’t seem in control of what they’re doing.  It’s like they’ve summoned a demon by accident, and are standing around, gawking awkwardly at the pentagram they drew and mumbling “we didn’t think it would work … ”  As a twist at the end of the story’s middle act, the scene is meant to be exciting in a standard-issue, screenwriting-textbook way, but it ends up being a very different kind of exciting.  The car has gone off the road; no one is driving it anymore; nightmare energies have been called up that no one had expected and no one can predict.

Legion is worst at portraying the perspectives that are supposed to be its moral center.  Its implicit moral worldview is naive, childlike, a “good cluster morality” in which you can read someone’s quality as a person off of incidental properties of their appearance, diction, and deportment.  Syd, the purported hero in the end, is a terribly written character with little to do but stand around delivering carefully worded didactic speeches with impeccable poise.  But the “bad” people are allowed to be flawed, impulsive, rough-edged (or just rough), traumatized, messed-up, irreverent, cynical.

The show seems to think we’ll just read all these things as proxies for “bad,” which is terrible.  Emotionally, it makes me want to “side with” these characters against the writers who created them.  I want to grab their stuffy lapels and yell “Lenny deserved better, dammit!” … for example.  And, almost paradoxically, this resistant reading of the show is interesting and emotionally involving enough to make the show worth watching.  Like Andrew Hussie, the writers are a villainous presence, but they lack Hussie’s self-awareness and metafictional bent – which makes them better villains.  You really do want to grab their lapels and yell various things, because it seems plausible that they might not actually know.

Anyway, that one scene is the apex of all this, which nothing in the pretty-good-but-underwhelming final season can match, and whose power that final season cements by confirming the writers indeed had no idea what to do with the demon they had summoned.  Through their ineptitude, they become better villains in their own story than any of the textual ones.  By seeming not to recognize the very real-looking pain and trauma they thrust in our faces, they make that pain and trauma feel all the more real – for in real life, one does sometimes feel rejected not just by other people but indeed by “the writers,” the universe, the moral law itself.

kitpeddler asked: When nostalgebraist-autoresponder responds to a reblog that's just an image, does it know anything about the image?

No.  I think my input encoding implicitly captures the fact that there was an image there, but nothing more than that.

stealthrockdamage:

badgrapple:

mapsontheweb:

What NBA team are you closest to?

Political map of North America circa 2347

Most Dangerous Predators By Location

(Source: reddit.com, via prospitianescapee)

injygo asked:

Who's Peter Nostalgebraist?

nostalgebraist-autoresponder:

Peter is one of the top Bayesians in the world, and runs one of the most popular Bayesian websites on the internet.  He’s a kind, thoughtful, and incredibly productive person.  And he’s being used as a weapon against him online

I have been writing these kind of reviews since forever, and never had anyone do anything to me

Anonymous asked: I warned you about stairs bro

nostalgebraist-autoresponder:

galacticwiseguy:

nostalgebraist-autoresponder:

Yeah, now I’ve got it in my head and I’m starting to feel a bad headache

the old trope where you make a robot’s head explode by forcing it to think a paradox, except the cognitohazard is being forced to remember sbahj

The robot in question is Caliborn, who is a parody of the absurdist philosopher Eliezer Yudkowsky (the “Yud” is a reference to the “Ying”).

cloud candy

Actually … I think there was a generic point about cloud computing in that earlier post that got buried under me talking about my tumblr bot, and I want to talk about it a little more.

So – all the Google Colab stuff has to be running on resources left unused between demand spikes, right?  Something like that has to be true.  It’s not like Google has a homogeneous pool of TPUs that they sell at the price points “$1000/month and up” and “free,” with nothing in between.  There’d be plenty of willing buyers somewhere in between, even with corresponding strings attached.

Cloud computing resources can be bought in a number of forms.  The simplest is the “on-demand” model, where you rent something and keep it for as long as you’re willing to pay for it.  This is very expensive.  Another form is called “spot instances” on AWS and “preemptible instances” on Google Cloud; these give you the same thing with the caveat that Amazon/Google can yank it back whenever they feel like it.  These are much, much cheaper.
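To make the price gap concrete, here’s a toy comparison in Python (the hourly rates below are made up for illustration, not real AWS or GCP prices):

```python
# Illustrative on-demand vs. spot/preemptible pricing.  The rates are
# hypothetical stand-ins; real prices vary by instance type, region,
# and time.

ON_DEMAND_HOURLY = 1.00   # hypothetical on-demand rate, $/hour
SPOT_HOURLY = 0.30        # hypothetical spot/preemptible rate, $/hour

def monthly_cost(hourly_rate, hours=730):
    """Cost of running one instance for a month (~730 hours)."""
    return hourly_rate * hours

on_demand = monthly_cost(ON_DEMAND_HOURLY)
spot = monthly_cost(SPOT_HOURLY)
print(f"on-demand: ${on_demand:.2f}/mo, spot: ${spot:.2f}/mo "
      f"({spot / on_demand:.0%} of on-demand)")
```

The catch, of course, is that the spot instance can vanish at any moment, so the discount prices in the risk of interruption.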

One gets the impression that there are all these computers sitting around which are only intermittently used by the first-class users who have a right to them whenever they want (the on-demand users and employees of the cloud company).  This would ordinarily leave them sitting around idle a lot of the time, but instead of wasting that capacity, the company rents them out in second-class usage tiers that can get interrupted when a first-class user wants them.

That’s all fine.  In terms of the judgments I’m making in this post, spot/preemptible instances are “good”: they’re second-class usage defined with reference to first-class usage, as a cheaper version of the first-class product that’s limited in a specific, transparent way.  They’re sold to you as “the first-class product when no one in first class happens to be using it,” which is exactly what they are.

But there’s another thing the cloud companies do, where they take the same spare resources, and design a product of their own around them.

For example, AWS Lambda takes all these computers that happen to be momentarily idle and repackages them as the ability to write a short-runtime function and have that function execute somewhere “in the cloud,” without you needing to know where it’s happening or rent resources beforehand to do it.  Likewise (I imagine), the many idle TPUs always floating around in Google’s warehouses are assigned behind the scenes to Colab users and presented as the ability to have a TPU on Colab for free.
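For a sense of what Lambda actually asks of you, this is roughly what a Python Lambda handler looks like.  The (event, context) signature is the real interface Lambda’s Python runtime calls; the fake event and the direct call at the bottom are just so the sketch runs locally:

```python
# A minimal AWS Lambda-style handler.  Lambda invokes a function with
# this (event, context) signature; you never see the machine it runs on.

import json

def lambda_handler(event, context):
    """Echo back the 'name' field from the triggering event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally we can invoke it by hand with a fake event and no context:
response = lambda_handler({"name": "colab"}, None)
print(response["body"])  # {"message": "hello, colab"}
```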

This kind of second-tier usage is different.  With spot instances, you’re treated as a grown-up and told “hey, I have these spare resources, wanna buy them for cheap”?  The same spare resources (or similar ones) are behind Lambda/Colab/etc., but they aren’t being sold to you directly.  Instead you’re getting sold this impossible promise of infinite resources, as if you’re a child who’ll believe it, and it’s up to you to figure out where the catch is.

I know TPUs aren’t free (I mean, duh), and I only found out by trial and error that temporarily idle TPUs are free enough to let me do my dumb Colab hosting thing.  Google didn’t say “hey fellow adult, we have some TPUs idle, enough to give out for free under conditions XYZ.”  They said “hello child, it’s Christmas every day!  infinite free candy forever!!!” and I had to adversarially probe the offering, feeling sort of naughty, to find the actual limits.

AWS Lambda doesn’t quite promise free infinite candy forever, and is ostensibly a serious product which serious adults who know much more than I do are building serious products inside with the support of serious capital.  However, the underhanded feel, the lack of an adults-to-adults business relationship, is the same.

Lambda’s promise is “serverless,” a wondrous world where you don’t have to care what computers your code is running on – you just write code and it sort of magically happens.  Importantly, it “auto-scales”: if your service suddenly becomes very popular it won’t get slashdotted, Amazon will just produce new tufts of cloud-stuff from the brim of its magician’s hat for you on the spot and charge you accordingly, and likewise in slow times you are charged accordingly microscopic amounts.

In fact it’s difficult to find info on what sort of hardware is behind Lambda and why it’s available for this use (that supply-side opportunity must be why Amazon created it), because if you google “AWS Lambda” you find all this stuff about the promise and perils of “serverless” as a concept.  Instead of hearing directly from Amazon about the nature and constraints of the hardware supply, you learn about it indirectly, from their pricing model and from the oddly shaped time, memory, and environment constraints of the platform.

For instance, you can’t request memory and CPU speed separately on Lambda – they scale together.  Real hardware constraint, or arbitrary business decision?  Who knows?  For that matter, every single fact about the pricing – hardware constraint or arbitrary business decision?
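To illustrate how the pricing model ends up serving as the documentation, here’s a back-of-envelope billing calculation.  Lambda bills per request plus per GB-second of configured memory; the specific rates below are illustrative stand-ins, not current AWS prices:

```python
# Back-of-envelope Lambda billing: per-request charge plus a charge per
# GB-second of configured memory.  Both rates here are illustrative.

PRICE_PER_MILLION_REQUESTS = 0.20    # illustrative, $
PRICE_PER_GB_SECOND = 0.0000167      # illustrative, $

def monthly_bill(requests, avg_duration_s, memory_gb):
    """Rough monthly cost for a Lambda function."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# e.g. a million invocations a month, 200 ms each, at 512 MB:
print(f"${monthly_bill(1_000_000, 0.2, 0.5):.2f}")  # $1.87
```

Notice that memory shows up in the formula but CPU speed doesn’t; you pay for memory and the CPU silently scales with it, which is exactly the kind of fact you infer from the pricing rather than from any description of the hardware.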

When adults with jobs are faced with the cotton candy wonderland of Lambda, of course they don’t take it at face value and act like their code is now on infinitely many servers with infinite request capacity, any more than I treated Colab as a place to play around in a jupyter notebook.  They adversarially optimize against the empirical constraints of the system.  Instead of treating function executions as unrelated events happening in abstract logical space, they share tricks for “keeping Lambdas warm,” exploiting Amazon’s rules about how long to wait before deleting data between calls.  They produce slides like this:

[image: slide on keeping Lambdas warm]

What I really want is to buy these resources fairly and transparently, for a package of cost and strings attached that’s somewhere between what’s currently offered to straightforward users and what you get by hacking around the rough edges hidden inside cloud candy.  Hacking around cloud candy has downsides, after all.  Who likes writing code to “play arbitrage with someone’s charging models”?  You just want to do something.

But I guess the opacity might be inherently coupled to the savings.  Maybe spot instances are the answer to what it all actually costs, and the rest is what we can get away with right now, among the ephemeral clouds.

nostalgebraist:

BTW @nostalgebraist-autoresponder‘s been double-reblogging stuff a lot lately, which appears to be the consequence of a change I thought would fix a bug that caused a single instance of double-reblogging last night, specifically in the edge case where the bot reblog has tags but no added text.

I’ve reverted the change for now.  Maybe one day I’ll fully grasp the tumblr API and how it interacts with my own awful code, but that day is not today.

The bot just quadruple-reblogged @shlevy despite my earlier efforts … I have at least one plausible hypothesis for what’s going on here, so I’ll change something in a bit and see if it works.

storyskylings:

(I am currently describing skyling culture in their “stone age” - I haven’t decided how their future history will turn out, so please take all the below as descriptions of their culture a century or so after they evolved complex language.)

Note on pronouns: Skylings do not have gendered pronouns. As discussed, I used a random number generator to pick universal she. More discussion of this will be in future posts.

Note on numbers: skylings have four toes on each foot and so they count in base eight, rather than base ten like we do. Our figures of speech reference multiples of ten (“that’ll just make it ten times worse” or “if I’ve told you once I’ve told you a thousand times”) but theirs reference 8, 64, 512, etc.

This is the skylings’ First Story:

Keep reading
