
you-have-just-experienced-things replied to your post “At one juncture in his jargon-polluted soliloquy of a business analyst…”

Misread this as ‘data vaping’ which is a phrase I now wish was real

At one juncture in his jargon-polluted soliloquy of a business analyst he used the words ‘data vamping,’ which I believe was a neologism of his own devising.

Now to clarify, there’s nothing wrong with enjoying anime, and we can’t know for sure these were watched by Bin Laden himself.

People really like the word “Mesopotamia,” it seems

4 Ancient Civilizations That Independently Developed Rudimentary Versions Of ‘Seinfeld’ →

multiheaded1793:

The earliest evidence of Seinfeld in Mesopotamian culture dates back to somewhere around 3500 B.C., as indicated by cuneiform stone carvings that describe a fertility god named “Observant Jerry” or, depending on the translation, “Jerry the Ever-Watchful,” who performs an eternal stand-up comedy set about agriculture for three demons named Nervous George, Elaine the Cosmic Problem, and Kramer: The One for whom Doors Mean Nothing. Also in Observant Jerry’s eternal stand-up audience is one river crocodile and two other Jerry Seinfelds that the cuneiform tablets refer to as “the lesser Jerrys.”

(via averyterrible)

windypoplarsroom:

Daniel Álvarez Hernández and María Julia Díaz Garrido

(via shabbytigers)

nostalgebraist:

This just isn’t a response to what I said.

Then I don’t understand what you said.  You can ask me questions to probe why, or just disengage if you don’t think it’s worth the time.

Perhaps more constructively: I linked to this paper.  This paper argues against what I thought was one of your claims, namely that the imperceptible adversarial examples were a result of the complicated shape of the learned manifold, which violates the “tiny change produces tiny result” intuition by being extremely nonlinear.  No, the authors say, the problem is one that also appears in linear classifiers, and the undesirable response is linear in the perturbation size even in NNs.

But maybe you were not making a claim which this paper disputes?

evolution-is-just-a-theorem:

nostalgebraist:

evolution-is-just-a-theorem:

nostalgebraist:

That said, I think we need a healthy skepticism towards image classification results, too.  State-of-the-art architectures like Inception seem to work by learning features that reliably discriminate each object from all others, but fall short of defining that object.

They are good at, say, recognizing a dog no matter what posture the dog is in – but they do this not through an understanding of body kinesiology (“a dog’s body can deform in this way and still be a dog, but not that one”) but by identifying the least deformable part of a dog and then looking just for that part.  This is why Deep Dream images were so full of dog heads, specifically.  (And of eyes in general.)

The characterizations they learn are like the “featherless biped” definition of a human – actually very good at discriminating humans from other things in almost every circumstance, yet clearly not capturing the fundamental concept, which means they produce false positives on things that are not even close to human (like Diogenes’ plucked chicken).

Auto-generating “plucked chicken” examples has become its own pursuit in neural net research, under the name of “adversarial examples.”  A recent paper (many thanks to whichever tumblr user put this on my dash) shows that you can get your robo-Diogenes to produce 3D objects that are basically always misclassified by some network, even when seen from different angles and under different lighting.

For example, they made this weird turtle, which the network reliably thinks is a rifle (seriously), in all three of these pics and many others:

[image: three photos of the 3D-printed turtle, each classified as a rifle]

It’s easy to come up with a plausible-sounding story for why this worked.  This turtle has a pattern on its back, but it’s not the very distinctive pattern we’re used to seeing on turtles.  And so of course the network learned to recognize turtles by looking for that pattern, because it’s a great way to reliably tell them apart from other roundish things with four feet and a head.  But this is a “featherless biped” type of definition, and can be defeated by a turtle that doesn’t have the pattern.

I think your attempt to figure out why this particular turtle is classified as a rifle is a mistake. Interpretability is hard, and it is very rare for the features a net learns to map cleanly to the features a human uses.

Is this true?  When you do gradient ascent to produce an image that is given high probability for some class, you get forms that are quite recognizable to a human.  There were some pictures of this in the first part of Google’s “Inceptionism” post, but my favorite examples are from Audun M. Øygard’s implementation of the same idea:

[image: gradient-ascent class visualization for “Screws”]

[image: gradient-ascent class visualization for “Pug”]

There are many others behind the link; I wanted to reproduce more here, but because this is a “text post,” tumblr refuses to put multiple images in a single row, and I don’t want the post to be too long.

In these examples, I note two things: (1) I can actually see the object, or at least what looks like part of it, and (2) it tends to spam copies of the most distinctive part of the object rather than the whole thing, which fits with my contention in the OP.

(Although one of the examples is “Loggerhead turtle” and it has a bunch of flippers and not very distinct shell patterns!  But there may be a difference between “Loggerhead turtle” and just “turtle” as classes, or it may reflect a difference in versions of the Inception architecture?  Would be possible in principle for me to test this myself, I guess)
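(For the curious: the mechanics of that gradient-ascent trick fit in a few lines.  In the sketch below, a random, untrained one-hidden-layer ReLU net stands in for Inception – so it won’t produce pugs – and all dimensions and constants are made up; it only shows what “ascend the class score with respect to the input” means.)

```python
import numpy as np

# Class visualization in miniature: freeze a network and run gradient
# ascent on the *input* to drive one class score up.  The "network" here is
# a random, untrained one-hidden-layer ReLU net, and a weak L2 pull toward
# zero stands in for the image-prior regularizers used in real visualizations.
rng = np.random.default_rng(3)
d, h = 64, 32
W = rng.normal(size=(h, d)) / np.sqrt(d)      # frozen first-layer weights
v = rng.normal(size=h)                        # readout for the chosen class

def score(x):
    return v @ np.maximum(W @ x, 0.0)         # class score for input x

x = 0.01 * rng.normal(size=d)                 # start from faint noise
lr, decay = 0.1, 0.001
for _ in range(200):
    pre = W @ x
    grad = W.T @ (v * (pre > 0))              # d(score)/d(input), by hand
    x += lr * grad - decay * x                # ascend, with a weak prior

print(score(np.zeros(d)), score(x))           # 0.0, then well above zero
```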

Yes, it is in fact true. Obviously neural nets have to be picking up on some of the same features humans do, but determining which features “caused” a particular outcome is basically impossible (currently). Like, really, interpretability is hard. You can’t just look at some pictures and guess. You’re welcome to go peruse the literature on interpretability and see for yourself how difficult it is.

A more concrete argument: for 2D image classification, most adversarial examples are visually indistinguishable (to humans) from their source image. They’re not “this is a slightly weird looking turtle that’s been misclassified by the net and maybe the weirdness has something to do with it”. They’re “this image is a rifle and that image is a turtle, but if I want to find any differences between these images I’m going to need a magnifying glass”.

The manifold learned by neural nets is just… weird. You can move such a small distance that humans can’t tell the difference, and end up in a completely different category.

EDIT: Maybe this will help people understand the weirdness. We tend to break things down into obvious features. For the turtle we might have one for the shell pattern, one for the color, one for the size, whatever. A neural net will have completely bonkers features, like it looks for an edge on this side of the image and a corner on the opposite side and then a bit of color here and there. (Made up example, but possible). They don’t have to be anything remotely sane. And if they were, adversarial examples probably wouldn’t exist.
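That “move a tiny distance, land in a completely different category” behavior is easy to reproduce even without a neural net.  Below is a toy sketch with a bare linear scorer – every number and name is made up for illustration; it is not the setup from any of the papers under discussion:

```python
import numpy as np

# Toy minimal-perturbation adversarial example: a linear scorer ("turtle"
# if w.x > 0, "rifle" otherwise) plus a fast-gradient-sign style nudge.
rng = np.random.default_rng(0)
d = 1000                            # e.g. number of pixels
w = rng.normal(size=d)              # the classifier's weights
x = rng.normal(size=d)              # some input image
x += (20.0 - w @ x) / (w @ w) * w   # give it a solid positive margin: w.x = 20

eps = 0.05                          # per-pixel budget: far too small to see
x_adv = x - eps * np.sign(w)        # move every pixel a hair against the score

# Each pixel changed by at most 0.05, but the score drops by eps * sum|w_i|,
# which scales with the dimension -- enough to cross the boundary here.
print(w @ x, w @ x_adv)             # ~20.0 ("turtle"), then negative ("rifle")
```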

Ah, now I see what you’re saying.

I’m not proposing a theory that would rule out the 2D adversarial examples with near-invisible perturbations.  Rather, I think we need to take a fuller view of the types of adversarial examples that work, and also scrutinize what humans are actually capable of.

There was a paper that explained those 2D adversarial examples by saying that if you want to maximize the dot product of a weight vector and some perturbation, and the typical magnitude of the weight vector’s elements stays constant with increasing dimension, then the higher the dimension, the bigger you can make the dot product with a perturbation of fixed max norm.  This is an essentially linear phenomenon, and explains adversarial examples generated by small perturbations.
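The dimension argument can be checked numerically: with a max-norm budget eps, the best achievable dot product is eps times the sum of the |w_i|, which grows linearly with dimension.  (A toy check with made-up numbers, not the paper’s experiment.)

```python
import numpy as np

# If each weight element has typical size ~1 regardless of dimension, the
# best dot product achievable by a perturbation with max norm eps is
# eps * sum|w_i| -- linear in the dimension d.
rng = np.random.default_rng(1)
eps = 0.01
for d in (100, 1_000, 10_000, 100_000):
    w = rng.normal(size=d)          # element scale ~1, independent of d
    delta = eps * np.sign(w)        # the maximizing perturbation, |delta_i| = eps
    print(d, w @ delta)             # grows roughly like eps * sqrt(2/pi) * d
```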

But not all adversarial examples are generated by small perturbations.  In that paper with the turtle and the baseball, they maximized an expected value over various translations, rotations, and lighting changes, and the perturbations they got (which I assume were as small as they could make them – they said they tried 4 values for the size parameter) were not imperceptible.  In the case of the baseball, you can really see high-level “espresso” features in the object, to the extent that a number of tumblr rebloggers said they might have thought some of the pictures were espressos (as would I).
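The “maximize an expected value over transformations” idea also fits in a toy sketch.  Here circular shifts of a vector stand in for translations and rotations, and a linear scorer stands in for the network – purely illustrative assumptions, not the authors’ actual algorithm:

```python
import numpy as np

# Average the input-gradient over sampled "poses" (circular shifts) and
# perturb against that average, so the perturbation works across views.
rng = np.random.default_rng(2)
d = 256
w = rng.normal(size=d)                        # score of one view = w . view
x = rng.normal(size=d)                        # the object's "texture"

shifts = rng.integers(0, d, size=64)          # sampled poses
# d/dx of (w . roll(x, s)) is roll(w, -s); average it over the poses.
avg_grad = np.mean([np.roll(w, -s) for s in shifts], axis=0)

eps = 0.5
x_adv = x - eps * np.sign(avg_grad)           # push the *expected* score down

before = np.mean([w @ np.roll(x, s) for s in shifts])
after = np.mean([w @ np.roll(x_adv, s) for s in shifts])
print(before, after)                          # the average over poses drops
```

The single-view attack from the linear sketch above would only be guaranteed to work at one shift; averaging the gradient over poses is what buys the view-invariance, at the cost of a larger (more perceptible) eps.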

Notably, these 3D translation-invariant examples are much closer to the real human visual task, where we get to constantly move our heads and objects are also frequently moving.  The same people also did 2D physical examples that you can print on a piece of paper, and you can see things getting more perceptible as the range of transformations increases – see here, where a nearly imperceptible perturbation can make a cat look like a computer under different zooms, but to get zooms + translations + rotations + mean shifts, you need to add some very perceptible, perhaps computer-y lines and angles to the cat pic.

(via just-evo-now)