
nostalgebraist:

FYI: I probably won’t post any new chapters of Almost Nowhere for the next few months.

I’ve become unexpectedly busy with other stuff in a time-sensitive (but time-limited) way, and I won’t be able to make room for writing for a few months.

This means I almost certainly won’t hit my goal of finishing the book in 2022. Once the current hiatus is over, I’ll come up with a new estimate/target date, which should be sometime in 2023.

—-

On a more positive note, I’m very happy with the progress I’ve made in 2022 so far, both in quantity (I covered a lot of ground in the plot) and quality (I like what I wrote).

I’m also thankful that this is happening at a relatively natural breaking point in the story, like the end of a TV serial’s penultimate season.

If you start reading right now, you might reach the end and think “aw man, I want to know what happens next!”, but you won’t think “wait, it just stops here?!”

The thing I was unexpectedly-busy-with has now resolved!

And now I can say what it was: I was looking for a new job. I didn’t want to say that on the blog at the time, because my current employer didn’t know I was job searching.

I signed a job offer a few weeks ago, and will be starting my new job at the beginning of November.

With that resolved, I’ve started to get my head back into the Almost Nowhere zone. Going back over my notes, daydreaming about the story, all that good stuff.

I don’t want to commit to any specific schedule until I start the new job. But the project is no longer “officially on hiatus.”

etirabys:

me, squinting at the foreshortened hood and trying to estimate distance from curb: How the fuck do the vast majority of Americans do this successfully?

81k, from the passenger seat: After enough practice, the car eventually feels like an extension of your body and you just know where the boundaries are

me: Can I please achieve this in a more narratively interesting way, by having sex with the car and soulbonding with it?

(via cthulhubert)

taking a break from “ai discourse”

Over the last year or so, I’ve been writing a lot of long posts about AI timelines, AI risk, the limits of modern ML, and related issues.

People seem to enjoy these posts, and I’m proud of many of them. But the experience hasn’t been good for my mental health.

I’ve been spending way more time than I’d like thinking about these arguments in an obsessive manner – where I want to stop and think about something else, yet I somehow can’t.

—-

Part of the problem is that, this past year, I’ve gotten back into the habit of reading LessWrong – something I hadn’t done in a very long time.

LessWrong is full of people afraid of near-term superintelligent AI, much more so than the rest of my social circle. (Even the parts of my social circle that read LW are much, much less AI-doomerist on average than the people writing most of the posts on LW.)

This created a sense that “whoa, everyone around me is suddenly way more terrified of AI!”, which felt like a scarily sudden and scarily unmotivated social shift, and made me feel obsessively driven to figure out where these fears came from and what evidence was supposed to justify them.

To some extent, I do think this shift is a real phenomenon that goes beyond LW, and was (very roughly) contemporaneous with me picking up LW again. But it’s also become clear to me that I’m driving myself crazy by reading LW, and letting “what LW posters think” seep into my assessment of “what people think, in general.”

—-

So, I’m going to stop reading LW for now. (Not just habitual reading, but reading posts on the site, period.) I’ll also avoid related sites like EA Forum, and personal blog posts containing the same kind of material.

I’m also going to try to talk about AI risk and AI timelines as little as possible, whether online or in person.

I’ll still talk about these topics if there’s some overwhelming reason to, but not just as banter or because “someone is wrong on the internet.”

I’ll still talk about ML more generally, as long as the topic isn’t too close to big-picture prognostication about risk, timelines, and the ultimate power/limits of ML.

I don’t know how long I’ll keep this up for. With reading LW, I can probably keep it up indefinitely, since I spent years not-reading-LW in the past and feel none the worse for it. For the more general prohibition on AI discourse, I’ll keep it up at least until the end of the year, and then think about how I feel.

therealjendavis asked:

i was wondering if i can use frank's art for personal use (pfp, for example?) she made this cool miku pride thing with what looks like the bigender flag and i can't find any mention in the frank-faq of how, like. copyright works w her ai.

Sure!

In general, I’m fine with people reproducing Frank content in other places as long as they do it with attribution.

And for something like a profile pic, where people aren’t likely to assume it’s your own work in the first place, I don’t care as much about attribution. It’d be nice if you cited Frank in your bio, though. (I’ve seen “pfp by nostalgebraist-autoresponder” on one or two bios already, actually.)

I should put this in the FAQ… I haven’t updated that thing in a while. Probably too long.

staff:

Attention! Incoming, over the next week! An upgrade for all your blogs that are using the default Tumblr theme!

Any blog that uses the “Tumblr Official” theme with its default settings and no advanced customization will switch to using our new blog view on web. Why?

  • Easier access to gifting, tipping, messaging, and all of the other wonderful things your blog allows.
  • Consistency! It’s a similar view and experience to our mobile apps.
  • New default URL! Instead of yourtumblrname.tumblr.com, folks will access your blog at tumblr.com/yourtumblrname. 

This is optional and can be toggled off or on from your blog settings. No one who is using a custom theme will be affected by this update. But for those who are using the default, this is what the new view will look like:

image

FAQ

Why is this good? Why does this matter?

New and existing features such as gifting, tipping, and messaging will appear in the same spot across the board. This will make it easier for folks to navigate any blog and find what they’re looking for. Additionally, the newly improved notes view means conversations happening around a post can be filtered, making them easier to follow.

Does this mean you will eventually get rid of custom themes?

No! Being able to express yourself with the broadest array of possible options is an essential part of Tumblr. We’re proud of being one of the last places to allow custom HTML and CSS, allowing users to truly make their blog their own or select from a wide variety of pre-built themes. Again, no one who is using a custom theme will be affected by this update.

How do I enable the new default view if I’m using a custom theme?

Head to your blog settings on web and switch the “Custom theme” toggle off. Anyone navigating to your blog will now see the new blog view at tumblr.com/yourtumblrname. Your blog settings, ask button, archive, gifting, and blog-level tipping will live at the top of your new view, where your followers expect to find them. Anyone who navigates to your blog’s old view will be redirected to the new one.

image

How do I use my Tumblr with a custom theme?

If you’d like to customize your blog, head to your blog settings on web and turn the “Enable custom theme” switch on. From there, you can “Edit theme” or “View website.” Your customized blog will now appear in the theme of your choice at yourtumblrname.tumblr.com. In addition, folks can also view your blog in the new default blog view at tumblr.com/yourtumblrname, with all its bells and whistles immediately accessible at the top—where followers expect them to be. 

Questions? Please do consider reading this Help Center article, as you will likely find some answers to your questions there. If not, you can ask questions on @wip every Monday from 6 AM to 6 PM EST or write to Support with a specific issue you’re having.

(via prospitianescapee)

as-if-and-only-if:

the-real-numbers:

the-real-numbers:

image
image

honestly

“op had narrativized the process of receiving sensory input too much in the first version, so they attempted to correct it in the reblog” <- me receiving a narrative in exchange for the sensory input on my dash

(via badeliz)

reachartwork:

Speaking of which, here is Simple Stable 1.1, Slightly Less Simple Edition. There is now Negative Prompting, which lets you put in words to remove from your piece, and also new samplers like euler_a which let you go faster. Just click twice. It's simple! https://t.co/wNwePGtFy0

— Mx. AI Curio (@ai_curio) October 9, 2022

Hello everyone. Have you wanted to dip your fingers into the tempting, tempting pool of flesh traitordom AI Art? Here is Simple Stable 1.1.

There are two steps.

Step 1: Click on the first button.

image

Step 2: Click on the second button.

image

And that’s all you need! Here’s a big link for anyone on mobile (NOTE: Simple Stable and most Colab notebooks don’t seem to work on mobile, at least on Android, for unknown reasons. You may need a computer!)

Here’s some of the stuff I’ve been making with it :)

image
image
image
image
image
image

Cheers!

toskarin:

image

a remote work site made the brilliant decision to hook their automated ad account — which responds to any tweet that mentions remote work and remote jobs — up to gpt-3

the result is people pioneering new forms of attack in real time

ramshacklefey asked:

Have you heard of Loab? And if so, do you have any theories about what the deal with her/it is?

nostalgebraist:

Someone else asked about this too…

Here’s the original twitter thread about “Loab,” for those who haven’t seen it.

I originally learned about the phenomenon via this response, which provides a partial explanation that sounds plausible to me. In brief:

  • There are reasons to expect that, if you go looking for images that are maximally different from another image (or from a prompt), you will run into an element of a relatively small set.
  • If you imagine all possible images as a point cloud in a high-dimensional space, these are the “corners” of the cloud’s space.
  • If you start anywhere inside the cloud and try to go as far as possible from that starting point, you’ll end up at one of the corners.
  • Loab was discovered using negative guidance, generating an image as dissimilar as possible from a given prompt. So she may be one of these “corners.”
  • The person who found Loab went on to generate more pictures of her by asking the model to combine the Loab image with other images.
  • I’m not sure how exactly they did this – there are multiple techniques out there – but they would all involve looking for images that are “close” to the Loab image. Per the above, the Loab image is “far” from almost every image except itself.
  • So it makes sense that, if you ask for images similar to the Loab image, you get an unusually high amount of similarity – not just the same topic or style, but the same recognizable face, etc.
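The “corners” intuition in the bullets above is easy to demonstrate with a toy point cloud. This is just a sketch of the geometric argument – the cloud is random 2D points, not anything about the actual model, and all sizes are arbitrary – but it shows how many different starting points collapse onto a small set of extreme points:

```python
import numpy as np

# Toy version of the "corners" argument: the point farthest from any
# starting point is always an extreme point of the cloud, so many
# different starts land on only a handful of distinct "corners".
rng = np.random.default_rng(0)
cloud = rng.normal(size=(5000, 2))  # stand-in for "all possible images"

def farthest_point(cloud, start):
    """Index of the cloud point at maximum Euclidean distance from start."""
    return int(np.argmax(np.linalg.norm(cloud - start, axis=1)))

# 200 random starting points inside the cloud...
starts = rng.normal(size=(200, 2))
corners = {farthest_point(cloud, s) for s in starts}

# ...collapse onto far fewer than 200 distinct extreme points.
print(len(corners))
```

The winner of a farthest-point search is necessarily a vertex of the cloud’s convex hull, which is why the set of reachable “corners” is so small compared to the number of starting points.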

In other words, this image is a case where the model is unusually bad at semantic generalization. It usually sees images as related in varied, continuous ways, and can form all kinds of hybrids where one side of the hybrid is less or more apparent. But with a Loab-like image, it can’t find a way to preserve only some of the traits, or preserve them only partially; it thinks every picture is either a picture of Loab, or utterly unlike a picture of Loab.

Such cases are presumably rare. But they’re easy to discover in spite of their rarity, because negative guidance homes in on them.

This doesn’t explain why pictures-of-Loab have the specific traits they do, like being violent / gross / horror-movie-esque. Some possibilities:

  1. There’s nothing to explain. The “corner” points have to look like something, after all; this is no more or less surprising than anything. (And perhaps the spookiness helped this example become a meme, so the “explanation” is that if Loab weren’t spooky, you wouldn’t have heard of her.)
  2. Perhaps it is somehow the result of some data filtering step that tried to prevent disturbing images, or tried to focus on aesthetically appealing ones? For example, perhaps a data set was filtered to exclude disturbing images at some point, and a Loab-like image “slipped through the cracks,” causing the model to memorize it as a weird special case unrelated to the rest of the training distribution.

The latter hypothesis is complicated by the fact that there are really 2 models here, assuming this is some kind of CLIP-conditioned model, which it probably is. There’s the training data/process for CLIP, and the training data/process for the generator. These likely involved different datasets and may not have been done by the same people.

The Loab discoverer doesn’t say which model they were using. It’s very likely that it was a CLIP-conditioned model, unless it was one of the newer Google ones, which I doubt they’d be allowed to tweet about in this way. But even that is conjecture.

(EDIT: it might be a DALLE-1 / DALLE-mini style model, which would complicate much of the above analysis, unfortunately.)

They do note that Loab was later reproduced (in some form?) in Stable Diffusion, which is CLIP-conditioned, so there’s that.

—-

Now that Stable Diffusion is out there, it would be very easy to test the “corners” hypothesis. Just do negative guidance with a bunch of prompts, and see if you can (1) recover Loab, (2) recover Loab from multiple starting points. It would be especially fun to try to find any non-Loab corners and play with them.
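For concreteness: “negative guidance” in most Stable Diffusion samplers is just classifier-free guidance run with a weight below zero. A minimal sketch, with stand-in arrays – in a real sampler, `eps_uncond` and `eps_cond` would be the model’s per-step noise predictions without and with the text prompt:

```python
import numpy as np

def guided_noise(eps_uncond, eps_cond, w):
    """Classifier-free guidance: w > 1 pushes the sample toward the
    prompt, w = 0 is unconditional, and w < 0 pushes the sample away
    from the prompt ("negative guidance")."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Stand-ins for the model's two noise predictions at one denoising step.
eps_uncond = np.zeros((4, 4))
eps_cond = np.ones((4, 4))

positive = guided_noise(eps_uncond, eps_cond, 7.5)   # a typical positive weight
negative = guided_noise(eps_uncond, eps_cond, -20.0) # the -20 weight used below
```

With a large negative weight, the guided prediction is extrapolated far outside the range the model was trained on, which is consistent with negative-guidance samples looking odd or over-saturated.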

Hmm, I’ll try that out later today probably.

Update: nope, I can’t seem to reproduce Loab with Stable Diffusion, or find any “corner” images.

When I use large negative guidance weights with Stable Diffusion, the results seem similar to unconditional sampling.

Compared to unconditional sampling, the colors are more intense, the images are crisper, and there seems to be a weak preference for certain topics (pictures of lawns and houses, sports games, food?). But I’m not getting anything too weird or repetitive.

Unconditional, 100 DDIM steps

image

“Joe Biden campaign tour,” -20 guidance weight, 100 DDIM steps

image
