togglesbloggle asked: I wasn't going to say anything independently, but if you're sorta soliciting fannish externalities... I tend to describe you as one of my three favorite living authors, alongside Ada Palmer and Gene Wolfe, and I'd be happy to see you start writing more frequently. I'm particularly enamored of the ways that you look to modes of being that are unusual to see in fiction but very concrete for people like mathematicians, scientists, and moral realists. Horrors from Hilbert Space, so to speak.

This is extremely flattering, thank you!!  And also I really like that description.

falloffablog asked: Just to let you know I'm waiting with bated breath for the next instalment of Almost Nowhere! I love your anomalinglect. I am beginning to follow their sign-trains which begin with signs and lurch towards violence.

I’m glad you like it!  Unfortunately, if past experience is any guide, you will have to wait for a long time :(

It’s funny – I was going to say “I really want to write more of it soon,” but then I thought about how I could have written that sentence at any point in the last (I dunno) year, and yet over that interval I’ve rarely done as much as set aside dedicated time for writing, which wouldn’t be too hard if I really cared enough.  I guess that means I don’t really care enough – not in the revealed-preference sense of “care,” anyway, if maybe in some others.

My life outside of writing is a lot better and fuller than it used to be, so there’s more of a sense of writing time actually competing with other valuable things, rather than being “work I actually like” that I do almost to rebel against the work I don’t.  And I’ve never developed a way to write fiction that doesn’t feel self-indulgent and unhealthy relative to the habits I try to cultivate in other work – whenever I’ve been successful at it, it’s been through the “lock myself in a room with coffee and alcohol for 12 hours and come out the other end feeling utterly drained” approach, and it feels very weird to explicitly carve out that kind of block on a schedule rather than suddenly doing it on a whim.  (“Sunday – reserved for bohemian writer time.  Monday – discuss logistics of blah with so-and-so, 9:30 AM.”)  But, again putting the revealed-preferences goggles on, I ought to be able to overcome something merely feeling weird if I really care about the end goal.  So, maybe I really don’t care – but maybe I can make myself care again, like I used to, and people like you who express interest are a stronger impetus to that than my own internal need for self-expression, which can eat its fill at other troughs these days.

I’m not really a writer anymore, but I could make myself into one again.

definitely me not my wife etc

i am SO CUTE

The ingredients for pancakes can be seen to symbolise four points of significance at this time of year:
Eggs ~ Creation
Flour ~ The staff of life
Salt ~ Wholesomeness
Milk ~ Purity

psybersecurity:

@nostalgebraist was in my dream just now. The part involving him wasn’t super duper interesting, but he seems to enjoy hearing about this sort of thing, so why not describe it

I was in some sort of hippy-ish hostel, where there was a male side and a female side. My female friend invited me over to the female side in order to drink some beverage made of coffee, milk, and peas (?). In this hostel, all the beds had names for some reason. On the male side, all the beds had vaguely masculine, stately names like “Dennett” or “Hartworth”, but on the female side they all had obnoxious grrl-power names like “Sparkle Fairy” and “Cupcake Bitch”, and mixed in were also a few to prove that you like, don’t need to be a girl to be on the girl side, like “Gender Fuck” and “Armpit Hair”. Needless to say I found this humorously counterproductive

While I was waiting for the water to boil, I sat down on the couch and looked at a variety of incense sticks they had laid out still in their original packaging which, based on the text on the bag, seemed to be from a company that was trying to market flower-girl type products to a cynical conspiracy-bro sort of demographic, like “GMOs in the food and fluoride in the water got you feeling all jacked up? Girlfriend is bitching at you again? Why not light up some incense and relax, man?” There was one variety of incense called “Marlboro Reds” which had packaging text arguing to the effect that cigarettes were unfairly maligned by the government because they were “spiritually” good for you.

There was also a variety called “Nostalgebraist” which had the familiar green-horse picture underneath the name on the packaging. I realized that NB had based his internet presence off of the marketing for this stupid obscure brand of bro-incense and not told anyone, waiting for people to discover it, which I thought was hilarious. Furthermore I realized that the green horse picture was not actually a green horse, but a field of corn which inadvertently looked like a horse from afar via pareidolia. I thought it was hilarious that NB had noticed this weird effect and realized that if he made it an avatar-sized image no one would even notice that it had been a field of corn in the first place, again letting people discover it.

Anyway I went back to my room and a few hours later noticed that NB had uploaded a 12 minute video to his blog sitting on a couch ranting about this brand of incense - I guess the secret was out. He became very upset about the bizarre marketing in a tumblr-y identity politics way like “corporations are telling me that just because my body is a blah blah blah”, eventually screaming at the camera like “you know what? this shit really PISSES ME THE FUCK OFF”. But it seemed like after recording this rant, he had realized the absurdity of his unhinged anger, so he chose to humorously exaggerate it in order to project an ironic-self-awareness. Therefore the ending of this video was full of YouTube-poop style edits that distorted his face and voice gradually transforming him into some sort of terrifying entity in what was a pretty impressive work of lo-fi digital psychedelica. 

A few minutes after watching this video, I realized that the couch he had been sitting on looked very similar to the couch I had just been sitting on looking at the incense, and hey, something needed to call to mind this incense and prompt this rant, so maybe he was by coincidence staying at this same hostel right now, and I should go over and say hello? But then I realized that there was no reason he would be staying in the female side, so that wouldn’t make any sense. 

I suddenly remembered this again and had to explain to Esther why I was laughing hysterically out of nowhere

(via nostalgebraist)

There’s a lot of interesting stuff in here, and the way it’s written is a breath of fresh air to me. I’m so used to political opinion pieces full of nonsense arguments like “this is good because it’s Wisely Moderate / this is good because it’s Courageously Extreme / this is good because it has so many downsides and There’s No Such Thing as a Free Lunch” that it’s startling to read something that derives so much of its rhetorical force from the urgent need to get the job done, instead of the urgent need to hold positions with the correct, like, vibe

(That’s not to say the author is right about everything — I don’t know enough to say — but even if he’s wrong about everything I would still prefer that people write like this more often)

(h/t @chroniclesofrettek)

To the various people in reblogs talking about this post and its similarity to the Chinese Room argument, here’s my own 2 cents on the relation between the two.

The thought experiments themselves are indeed very similar.  Their argumentative goals are also similar in some respects – both arguments are (at least sort of) about intentionality, i.e. the quality of “about-ness” or “being inherently about certain things” that mental phenomena like beliefs and desires possess.

There are two big differences.  One is that the entity in the Chinese Room hypothetical can carry on a conversation, while my hypothetical was largely about the nature of beings that aren’t designed to have conversations but still can use language.

The second difference is much more fundamental.  I was trying to give the reader more intuition for a certain sort of system.  Because of the specific, limited way that this type of system is structured, it (straightforwardly and uncontroversially) doesn’t have any way to “link up” its linguistic concepts with real-world referents.  It might use the word “dog” in a very capable way, reflective of much implicit knowledge of what dogs are like and what role they play in the world – but there is absolutely no way for it to ever represent or learn about the fact that there is a special group of entities in a non-linguistic realm (dogs in physical reality) from which all of that linguistic structure is distinct, but to which it makes reference.

This is something like a claim that these systems can’t have intentionality.  But I didn’t make this point to cast doubt on the intentionality of some much broader set of systems (say “AIs,” or in Searle’s case “things that don’t have the relevant ‘causal powers,’ whatever exactly those are”).

I left the possibility open that systems with different input/output setups could have intentionality, while Searle explicitly denies this in his rebuttal to the “Robot Reply,” a common reaction to the Chinese Room which goes “but if you hooked the Room up to robot eyes, robot arms, etc., wouldn’t it pick up intentionality from interacting with the real world?”  Searle says, no, just imagine the same guy in the Room applying the rules in this case – he can’t tell if there’s a robot hooked up or not anyway, and he has just as little understanding of what the Chinese terms are referring to.

Searle’s notion of intentionality is really weird and I get the sense it’s not very intuitive to most people.  He’s very exclusive about intentionality – or at least “intrinsic intentionality,” the special kind that characterizes minds – appearing to think that it’s this very special thing that requires more physical preconditions than most people realize.

To be a little crude, the whole study of conditions for Magical Really Truly Mindlike Intentionality™ in philosophy strikes me as a bunch of cosmic database nonsense, and I don’t have much interest in making arguments that try to block this or that system from having mindlike intentionality.  I was just talking about a system which, as it happens, lacks a certain kind of intentionality.  That lack is itself uncontroversial, I think, but it has some interesting consequences.  For Searle, on the other hand, establishing the lack is the whole point, and he wants to establish it as a move in a larger game he’s playing with other philosophers of mind who consider “precisely what has intentionality in this one [weird, perhaps ill-defined] sense?” a live and important question.

collapsedsquid:

collapsedsquid:

collapsedsquid:

The extent of GPT-2’s power seems to be that it can sometimes figure out what a noun is and replace it with another noun. Other than that it just seems like it’s overtrained.

nostalgebraist: GPT-2 has not been publicly released. If you’re using the small model they released alongside the announcement, it’s not GPT-2, it’s (basically) GPT-1 trained on their new data set.

I don’t know exactly what nomenclature they are using but the program I downloaded is called “GPT-2” and so I’ve been sticking with calling it that, or at least the “publicly available version” or whatever.

nostalgebraist: Partly just semantics, but I am annoyed with the way “the new research accomplishment” and “that thing I can play with” are getting blurred together in the public conversation, despite the very explicit and noteworthy decision *not* to release the new thing

Yeah I’ll be clear that this is not their final version but also that I’m not sure how much better the final version is, the recorded results that you can freely download look better but there’s a lot of shit there too. I think it looked overtrained as well.

The quality stuff is a judgment call and I’m not sure I disagree.

But like, (nearly) the whole substance of their new paper is that they made a way bigger model than GPT (the one you are using) and that had all these noteworthy effects.  Badmouthing “GPT-2” based on experiences with GPT just seems wrong, since GPT is one of the reference points relative to which GPT-2 is being deemed an advance.

(Admittedly OpenAI is partly responsible for these nomenclature problems)

Following up on my earlier post about GPT-2: an important thing for laypeople to keep in mind about language models (which applies mutatis mutandis to various other model types) is that they don’t attempt to use or understand the communicative purposes of language.

They view a string of words as a thing to make forecasts about, like the weather.  You cannot talk to them, or tell them to do things – not because they “aren’t smart enough for that,” just because that isn’t how these models are intended to view their inputs.
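To make the “forecasting” framing concrete, here’s a toy sketch – nothing to do with GPT’s actual architecture, just a bigram counter I’m making up for illustration – of a model that can only predict what word tends to come next, with no channel for receiving commands at all:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it forecasts the next word given the
# previous one, the way a weather model forecasts tomorrow from today.
corpus = "the dog barked . the dog ran . the cat ran .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return counts[word].most_common(1)[0][0]

# There is no "command" channel: all we can do is hand the model some
# text and receive a forecast of what tends to come next.
print(predict_next("the"))  # "dog" (seen twice, vs. "cat" once)
```

Telling this thing “write me a poem” just hands it more text to forecast from; the imperative mood means nothing to it.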

An analogy:

Imagine a science fiction story in which it turns out that the weather is being manipulated in subtle ways by advanced aliens who are trying awkwardly to communicate with us.  As a result, every weather pattern we see is actually expressing a thought, in some alien “language of weather” – just as expressive and nuanced as a human language, let’s say, but also just as opaque to an outsider unaware of the phonology and syntax (etc.) and even the fact that these properties exist.

Suppose that, at the time this story takes place, humans have not yet “cracked the code” and learned the alien language.  In fact, let’s say the humans don’t even know about the aliens yet, and have no idea that weather might be communicating anything.

However, suppose also that our weather models have gotten pretty sophisticated and accurate, to the point that they can forecast the weather almost exactly up to a horizon of a few days.  In the quest for better and better weather prediction, we’ve ended up building computer models that don’t just reproduce the phenomena of real fluid mechanics – they also reproduce the patterns in the alien messages (“syntax,” continuity of topic and style, etc.), as they manifest themselves in actual day-by-day, region-by-region weather.  Perhaps we mistake these for emergent consequences of some unresolved small-scale behavior.

Anyway, suppose we’ve gotten to that point.  From the aliens’ perspective, our weather forecasts might look just the way that impressive language model output looks to us. They might say things like, or analogous to, “wow, that sounds like something one of us would really say!”, or “they stayed on topic for a long time there,” or “perhaps they understand more of [advanced alien physics] than we realize, given that they are able to create accurate and coherent imitations of the [advanced alien physics] primers we’ve been trying to send them.”

Nonetheless, the humans and their simulations wouldn’t actually understand anything the aliens were saying.  At least not in the sense of “understand” that means “can make the intended associations between these words and non-linguistic experience.”  If the aliens were to encode the message “there is treasure at such-and-such location, go there!”, we wouldn’t be able to obey whether we wanted to or not.  We wouldn’t know they were talking about that location, or about treasure, or anything.

If the aliens really wanted us to go to that place, they could try to “drive us around” by (say) stirring up weather that portends future disaster somewhere so everyone evacuates, and then doing the same for the place they evacuated from, until their evacuation path eventually puts them in the right place.  The resulting series of weather phenomena would not say anything resembling “there is treasure at such-and-such location, go there!” – it would almost certainly say something totally different – but only this specific unrelated message would make us follow the injunction.  (This is analogous to prompting zero-shot results from a language model: you don’t tell it “write a summary!”, you write something that is often followed by a summary and let it predict the weather from there, so to speak.)
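The prompting trick described in that parenthetical can be illustrated with a minimal sketch, where a hard-coded stand-in (`predict_continuation` is hypothetical, not any real model’s API) plays the role of the forecaster:

```python
def predict_continuation(text):
    """Stand-in for a language model's forecast of what comes next.

    A real model would predict likely next words from its training
    distribution; we hard-code one association to show the mechanism.
    """
    if text.rstrip().endswith("TL;DR:"):
        return " a one-sentence summary of the article."
    return " ..."

article = "Long article text goes here."

# We never *tell* the model to summarize.  We append a string that,
# in its training data, is statistically followed by summaries --
# then simply ask for its forecast of what tends to come next.
prompt = article + "\n\nTL;DR:"
print(predict_continuation(prompt))
```

Like the aliens steering us with evacuation-inducing weather, we steer the model not by issuing instructions but by arranging conditions whose predicted continuation happens to be the thing we want.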

Importantly, our complete inability to communicate in the alien language is unrelated to our level of (implicit) understanding of the aliens’ world, including stuff like [advanced alien physics] that we do not know except in this implicit way.  Our weather models might well include – in some opaque way, threaded subtly through the details of their various approximate parameterizations for clouds, radiation, turbulence, etc. – a rich knowledge of the alien world, its customs, its ideas, and its perhaps foreign physical constraints and lack thereof.  But, despite having a working knowledge of how all these many things work in relation to one another, we wouldn’t know how to relate a single one of them to its appropriate referent in the physical world.  If they get into dire straits and send some desperate plea, we’ll be able to continue on its theme in a way that reflects a sensitive appreciation of their plight, but we will have no clue that our sensitive appreciation is an appreciation of some extra thing out there (them) of which the weather is just a mirror.  We’d just think we knew a whole lot about the weather.

Meanwhile, we would have no versatility in producing the alien language: we can say what we’d guess a typical alien message-writer would say, but we can’t say anything else.

None of this is to say that communication is some especially steep hurdle for AI, relative to understanding language on other levels.  (Maybe it is, maybe it isn’t.)  My point is just – the limitation of these models is not that they “only know language, not the world.”  In principle they might know everything you can know about the world from speech acts alone, which might be quite a lot, and is surely more than nothing.  Their limitation (which is a part of their design and not some “AI failure”) is that they learn about both language and the world for the sake of predicting the linguistic behavior of humans whom they cannot influence (and vice versa), not for the sake of participating in any of the conventional uses of linguistic behavior.

etirabys:

Brain noised “go document yourself” at me as a potential insult (after reading the string “self-documenting code” one too many times today)