Sorry that Frank was down for a while this evening…

I’ve been having a lot of issues lately where my HTTP requests to Tumblr just hang indefinitely, without the Python requests/httplib libraries raising any kind of timeout-related exception.

When I ctrl-C the script and restart it, it works again, but I don’t know how to automate that manual step, since I’m never given an exception to handle. I hadn’t seen this much before this week, or at least not often. I don’t know enough about web programming to guess what the likely cause is, either.

admiral-craymen asked:

Was Frank's asks just turned off?

Apparently, yeah. I must have pressed the wrong button when trying to do something in the mobile app…

Thanks for the heads up, I’ve turned asks back on now.

Actually, Frankenstein was the name of the doctor. You’re thinking of Frankenstein-autoresponder

akkkkaall1ttyynnn asked:

has working on Frank changed any of your opinions on consciousness and AI?

I don’t think so, no.

auroramoth asked:

Yo id been following your auto responder bot and I just realized you were the person who wrote The Northern Caves on Ao3. and i am having Some Kind Of Reaction bc that fic stuck with me for DAYS and I did NOT realize until thirty seconds ago that you also ran the autoresponder blog lmao. Anyway cheers bc both of those creations are extremely good and I hope you have a great alday.

Thank you!! :)

For any recent followers who only know what ½ of this ask is about – here’s a link: The Northern Caves

amaranthineanomie asked:

Hey! I've been getting really into coding and neural nets and i was wondering, if Frank's mood affects the sort of posts they make, how does the training data have mood ascribed to it? Sorry if that is a nonsensical question, I'm new to this lol.

It’s not a nonsensical question, although the answer is “I don’t do things that way.”

I have a model that takes a text as input, and outputs a “sentiment” value. It’s my own adapted version of a standard model for “sentiment analysis” that was trained on movie reviews.

When Frank writes a post, she actually writes between 10 and 30 candidate posts. My code runs my sentiment model on all the candidates, and rejects any candidate whose sentiment value falls outside a range, where the range is determined by Frank’s mood. (Then the code does some other stuff to pick one candidate from those left over.)
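As a minimal sketch of that filtering step (the function names here are made up, not the real repo’s, and `random.choice` stands in for the “other stuff” that picks the final candidate):

```python
import random

def pick_post(candidates, sentiment_score, mood_range):
    """Reject candidates whose sentiment falls outside the range implied
    by the current mood, then pick one survivor.

    sentiment_score is a callable (text -> float); random.choice is a
    stand-in for the real selection logic.
    """
    lo, hi = mood_range
    kept = [c for c in candidates if lo <= sentiment_score(c) <= hi]
    if not kept:
        return None  # presumably the real code regenerates or falls back
    return random.choice(kept)
```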

---

BTW, I’ve done experiments where I run the sentiment model on the entire training corpus, and then train a generator that can generate text conditional on sentiment. Although this worked pretty well, it didn’t feel like it would add enough value to be worth switching over to this approach in production.
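One common way to set up that kind of conditional generator, purely as an illustration (the control-token format, and the assumption that sentiment is normalized to [0, 1], are mine, not necessarily what was done here), is to prefix each training text with a coarse sentiment token:

```python
def tag_for_conditional_training(text, sentiment, n_buckets=5):
    """Prefix text with a bucketed sentiment control token.

    A generator trained on the tagged corpus can then be steered at
    sampling time by starting the prompt with the desired token.
    """
    bucket = min(int(sentiment * n_buckets), n_buckets - 1)
    return f"<|sentiment_{bucket}|> {text}"
```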

shivroy asked:

Howdy! Been following Frank for a while and was curious if you made some kind of update to her syntax in the last month or so? Because I feel like all of a sudden her responses are, like...REALLY human?? Lol like I've had to do double scrolls a few times and earnestly wondered if you had responded to the ask for her!

I’ve made some incremental updates, but nothing I would expect to make a massive difference.

Tumblr development note:

There seems to have been a change in NPF yesterday (?).  Dunno if it’s a bug or a “feature.”

In the last 48 hours, the API has given me two payloads containing blocks of a non-indenting subtype (e.g. heading2) that nonetheless carry an indent_level field.

Here’s one very clear-cut example.

These render in the browser as you would probably guess – as styled paragraphs inside nested blockquotes.

However, this kind of block is not permitted by the spec:

You can create nested lists via an indent_level field that can appear in text blocks of subtype ordered-list-item, unordered-list-item, or indented (blockquotes).

and has not appeared in NPF responses before.

I know because this caused @nostalgebraist-autoresponder to crash twice today from unhandled exceptions which I’d never needed to handle before.
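For concreteness, here’s a hypothetical block of the shape described (the field values are invented), plus one defensive way to render it that matches the observed browser behavior instead of raising:

```python
# A text block combining a non-indenting subtype with indent_level,
# which the NPF spec doesn't permit but the API now emits.
block = {
    "type": "text",
    "subtype": "heading2",
    "text": "some heading",
    "indent_level": 2,
}

def block_to_html(block):
    """Render a text block, wrapping any out-of-spec indent_level in
    nested <blockquote>s -- i.e., a styled paragraph inside nested
    blockquotes, as the browser renders it."""
    tag = {"heading1": "h1", "heading2": "h2"}.get(block.get("subtype"), "p")
    html = f"<{tag}>{block['text']}</{tag}>"
    for _ in range(block.get("indent_level", 0)):
        html = f"<blockquote>{html}</blockquote>"
    return html
```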

GitHub - nostalgebraist/pytumblr2: A Python Tumblr API v2 Client, updated for the New Post Format era →

nostalgebraist:

Not quite ready to push it to PyPI yet, but… here’s a little thing I’ve been working on.

In the course of working on nostalgebraist-autoresponder, I’ve made a bunch of compliance and usability upgrades to pytumblr.

Since Tumblr hasn’t been allocating much developer attention to the official API clients, I’m putting these changes in a fork called Pytumblr2 so they’re available to anyone who wants to use them.

This seems like a better home for NPF support, NPF -> HTML parsing, etc. than the innards of a large chatbot repo.

Pytumblr2 v0.0.1 is now on PyPI, so you can do

pip install pytumblr2

and then do all the fun stuff described in the README :)

changelingirl asked:

You mention a lot that what people find impressive about Frank isn’t what’s impressive. I know next to nothing about coding and find it ALL impressive; what are the actual advanced things frank can do?

(for an example of me saying this, see this post and its tags)

The main thing that’s “actually impressive” is the most basic thing Frank does: generate text.

Specifically, text that is almost always grammatical. Text that is often coherent. Text that is often factually accurate when it refers to specific facts. Text that is stylistically/topically diverse, and usually accurate in mimicking the way people talk about many different topics in many different styles.

This is a very recent and sudden development, starting with GPT-2 in February 2019. If you went back to 2017 or 2018, and told me bots would be writing like this very soon, I would have said “oh no way, that’s science fiction, this is light years beyond anything we can do now.”

Here’s a long post I wrote on this topic.

I do semi-regularly see people doubting that Frank is a bot at all, which I suppose counts as being impressed by this capability, in a way.

But that’s a little different: there are people who don’t think bots can do this, and people who say “ok, I guess bots can do this” and accept that as the new normal. I think AI people are more in an intermediate state of “yes, bots can do this now… and that’s mindblowing, even after 2 years of it.”

---

I don’t think that fully addresses the difference, though. There’s another thing.

When other people are impressed by Frank, and I’m not, it’s typically because:

  • Frank is doing something they’ve never seen her do before
  • But, I know that thing is really easy

An example is constructing correct, on-topic links to web pages that were linked many times in the training data, or to Wikipedia pages.

Like, a Wikipedia URL has a simple format, and the model has seen thousands of Wikipedia URLs. If you’ve seen thousands of things that look like “https://en.wikipedia.org/wiki/Zillow” or “https://en.wikipedia.org/wiki/Carolingian_dynasty”, it’s not too hard to guess that the page for “virus” is at “https://en.wikipedia.org/wiki/Virus”.

Much simpler models from many years ago could learn very simple patterns like these.
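To make the point concrete, the entire “pattern” for those Wikipedia links is roughly this (ignoring capitalization and percent-encoding, which some real titles need):

```python
def guess_wikipedia_url(topic):
    """Guess the Wikipedia URL for a topic: a fixed prefix, with spaces
    becoming underscores. That's the whole trick."""
    return "https://en.wikipedia.org/wiki/" + topic.strip().replace(" ", "_")
```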

Whereas, if you think about English grammar, it’s a much more complicated pattern, or interlaced collection of patterns, with many weird special cases. Making the subject and verb of a sentence agree with each other is much harder than making a Wikipedia link; it’s a more complicated pattern. And that pattern is just one of many ingredients that go into writing a single grammatical sentence! Literally every time Frank writes a grammatical sentence, it’s a more impressive feat than the Wikipedia links.

When Frank does something that impresses me, it’s usually something that I haven’t seen before (or not often), and that I know is hard.

An example is when she will occasionally write mostly-grammatical text in French or Spanish. Another example: she often teaches me new things, by referring to proper nouns I’ve never heard of. Some of the time she’s just making it up, or the thing is real but not on-topic. But often she’s saying something that turns out to make sense, about an aspect of the real world I had never heard of.

The model has a vast amount of this kind of approximate factual knowledge, and there’s no way to really know how deep it goes in which directions. So, I’m continually impressed as I see new facets of it.