
Oh my! It all blows up in our face! If there’s no monster, it doesn’t have a head.

nostalgebraist:

… touché, robot overlords

(From Elsayed et al., “Adversarial Examples that Fool both Human and Computer Vision” – h/t @eka-mark)

The authors seem to agree with that post I wrote a while ago:

Adversarial perturbations generated for an individual machine learning model or from a single viewpoint typically do not appear meaningful to humans. However, recent studies on adversarial examples that transfer across multiple settings have sometimes produced adversarial examples that appear more meaningful to human observers. For instance, a cat adversarially perturbed to resemble a computer (Athalye & Sutskever, 2017) while transferring across geometric transformations develops features that appear computer-like (Figure 2b), and the ‘adversarial toaster’ from Brown et al. (2017) possesses features that seem toaster-like (Figure 2c). We observe a similar effect in our own experiments when an adversarial example is forced to transfer across an ensemble of models, rather than across geometric transformations (Figure 2d).
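The ensemble-transfer idea in the quoted passage can be sketched in a few lines: instead of following one model’s gradient, you perturb the input along the *averaged* gradient of several models, so the same perturbation fools all of them at once. Everything below is a toy stand-in (hand-made linear “detectors,” made-up numbers), not the paper’s actual setup:

```python
# Toy sketch of ensemble-transfer FGSM: perturb an input along the
# averaged gradient of several models, so one perturbation fools all
# of them. The "models" are hand-made linear scorers, not anything
# from Elsayed et al.

def score(w, x):
    # Linear model: positive score = class "cat", negative = not-cat.
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def ensemble_fgsm(models, x, eps):
    # For a linear model the gradient of the score w.r.t. x is just w,
    # so the ensemble gradient is the mean of the weight vectors.
    avg_grad = [sum(w[i] for w in models) / len(models) for i in range(len(x))]
    # Step *against* the score to push every model toward the other class.
    return [xi - eps * sign(g) for xi, g in zip(x, avg_grad)]

models = [[1.0, 2.0, -0.5], [0.8, 1.5, 0.2]]  # two toy "cat" detectors
x = [1.0, 1.0, 1.0]                            # classified "cat" by both

x_adv = ensemble_fgsm(models, x, eps=2.0)
print([score(w, x) > 0 for w in models])      # [True, True]
print([score(w, x_adv) > 0 for w in models])  # [False, False]
```

The single-model version is the same code with a one-element ensemble; the quoted observation is that forcing the perturbation to satisfy *many* models at once tends to produce changes that look more meaningful to humans.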

(via guywife)

List of U.S. state beverages - Wikipedia →

argumate:

sorryyourheinous:

argumate:

what the heck

spoiler: milk

at least Puerto Rico has the Piña Colada and getting caught in the rain

I started reading that ebook I made of Almost Nowhere, to test out how it looked, and ended up re-reading all ten chapters, and guys, I’m really proud of this book so far!!!!

I finally got off my butt and re-figured-out how to fix the EPUB/MOBI files that AO3 auto-generates, so that they don’t have weird margins, too much space between paragraphs, broken images, etc.
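For anyone wanting to do the same thing: an EPUB is just a zip archive, and the margin/spacing problems usually live in its CSS, so “fixing” one can be as simple as rewriting the stylesheet members. A minimal sketch (the replacement CSS rules here are illustrative guesses, not AO3’s actual output or my actual fix):

```python
# An EPUB is a zip archive; this copies one, swapping any .css member
# for a cleaner stylesheet and passing everything else through.
# The CLEAN_CSS rules are illustrative, not AO3's actual styles.
import zipfile

CLEAN_CSS = """\
body { margin: 0; }
p { margin: 0 0 0.3em 0; text-indent: 1.2em; }
img { max-width: 100%; }
"""

def fix_epub_css(src_path, dst_path):
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            data = src.read(item.filename)
            if item.filename.endswith(".css"):
                data = CLEAN_CSS.encode("utf-8")
            # Writing with the original ZipInfo preserves each member's
            # compression type (the EPUB "mimetype" entry must stay stored).
            dst.writestr(item, data)

# fix_epub_css("almost_nowhere.epub", "almost_nowhere_fixed.epub")
```

MOBI is a different (non-zip) container, so for Kindle you’d typically fix the EPUB first and then convert it with a tool like Calibre.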

So now I have ebooks of Almost Nowhere that I’m satisfied with – EPUB here, MOBI (for Kindle) here.  Unfortunately, I still don’t have an automatic process for updating these as I post new chapters (and putting one in place would be a lot of work), so I’ll have to manually update them, and I’ll probably forget.  But they’re up-to-date now.

Since I wanted to link this in the story summary on AO3, I figured I ought to replace the placeholder summary I had in there.  So there’s a summary now:

A young woman locked in a tower, with almost no one to talk to – besides her numerous other selves. A house under perpetual moonlight, where two men wait. A school where teenagers learn magic, and no one has any fun at all. An alien invasion. Levitating spheres of rainbow slime. The unexpected moral dimensions of bilateral symmetry. And so on.

You can see this all on AO3, of course, although AO3 is coincidentally having some site issues right now, so you may get redirected to a broken link with the word “unicorn” in it.

antihumanism:

nostalgebraist:

Most impressive machine learning thing I’ve seen in a while:

Go to this page, and scroll down to the last audio track, the one labelled “Japanese female voice (F3) - ハナミズキ (Hanamizuki).”  The singing voice was generated by computer, based on MIDI + lyrics input.

It sounds … realistic?  Really realistic?  Like, wow

The other examples on that page are also computer-generated, but they only had the computer generate the vocal timbre, and relied on a “reference recording” (I think this means a recording of a person singing the very same song).  They’re also pretty good, and I was expecting the MIDI + lyrics one to be way worse (hence why they put it at the bottom), but no!

The full paper is here (open access) – the method is kind of delightfully complicated: a vocoder on top of separate, chained models for timing, pitch, and timbre, each of which is a specially tailored neural net, plus another model to figure out how (phonetically parsed) lyrics align with audio in the training data, etc.  There’s something really appealing to me about seeing researchers build state-of-the-art models with a lot of clever interlocking parts based on actual domain knowledge, even if there are neural nets inside the parts – it’s like in the good/bad old days before the “throw some vanilla neural net at Big Data, for everything” revolution.  [Yann LeCun voice] ~differentiable programming~
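The chained structure described above (timing model → pitch model → timbre model → vocoder) can be sketched purely structurally. Every name below is a placeholder, and the stubs just pass simple values through so the data flow is visible; this is not the paper’s architecture or API:

```python
# Structural sketch of a chained singing-synthesis pipeline: each stage
# is its own model consuming the previous stage's output plus the score.
# All functions are placeholder stubs, not the paper's actual models.

def timing_model(notes, phonemes):
    # Predicts when each phoneme occurs, given the MIDI notes.
    return [(p, n) for p, n in zip(phonemes, notes)]

def pitch_model(timed_phonemes):
    # Predicts a fundamental-frequency (f0) value per timed phoneme.
    return [float(n) for _, n in timed_phonemes]

def timbre_model(timed_phonemes, f0_curve):
    # Predicts vocoder features (phoneme identity + f0, as a stand-in).
    return list(zip((p for p, _ in timed_phonemes), f0_curve))

def vocoder(features):
    # Turns acoustic features into waveform samples (stubbed as floats).
    return [f0 / 1000.0 for _, f0 in features]

def sing(notes, phonemes):
    timed = timing_model(notes, phonemes)
    f0 = pitch_model(timed)
    feats = timbre_model(timed, f0)
    return vocoder(feats)

print(sing([440, 494], ["ha", "na"]))  # -> [0.44, 0.494]
```

The point of the chaining is that each stage can be trained and tailored separately, with the lyrics-to-audio alignment model supplying the supervision that the timing stage needs.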

Apparently this work was done in conjunction with the Vocaloid people, so at least we can look forward, in these frequently dark days, to Hatsune Miku sounding really, really good

Vocaloids sound really, really good when they are abetted by a neural net specially tailored for postmodernism, aka a human being (human beings have the same property).

The A-Train song is only good to the extent that it pronounces weird and fits with the notion of a New York constantly stalling and resuming, the “good emulation” is utterly boring. Like taking forever to create elevator music.

These songs aren’t meant to be enjoyed as works of art, they’re just illustrations of how well this technique can reproduce various aspects of what human singing voices tend to sound like.  I’m picturing this being used like a musical instrument by humans – ideally in some more flexible form, where you can give it many more cues than just MIDI stuff.  You won’t need some additional, magical program to generate all the special touches associated with good, creative singing; human artists will do that in their use of the tool, inasmuch as it gives them the fine control necessary.

In other words, I’m viewing these tracks as the equivalent of hearing someone play some notes and chords on a piano (loud and soft, with/without pedals, etc.) to demonstrate the quality of the piano.  Considered “as a song,” the notes and chords they play probably “aren’t very good,” but that’s beside the point – the point is assessing the quality of the instrument, in light of what we know a good pianist can do.

(via antihumanism)
