There can be no doubt that some curious electric telepathy does sometimes link together the present and the future. Not always, however!

Statue in the Park of Versailles by Giovanni Boldini
Has “alright” (as opposed to “all right”) become fully standard English by now? I keep noticing it in The Will to Battle, which doesn’t have any (other) copy-editing problems that I’ve noticed, and I feel like I’ve been seeing it a lot recently in internet writing that would be publishable as-is in all (other) respects. Yet when I was growing up, I only saw “alright” in amateur writing, and as far as I remember, published/copy-edited text always used “all right.”
BTW, I wasn’t asking here whether “alright” is “objectively” correct, or whether people should feel OK using it, or whether language change is good or bad, or any descriptivist vs. prescriptivist stuff. I was attempting to ask a value-neutral question about (roughly) professional editing standards.
I think it used to be standard procedure for editors to change “alright” to “all right.” This is the kind of impression that you grow more sure of over time – because I have this impression, it stands out to me when a professionally edited text includes “alright,” so I’m not going to miss it when it happens, and in years and years of reading I feel like I’ve seen it happen a very small number of times. (And when taken together, “all right” / “alright” are common, so presumably I’ve read a very large number of “all right”s in the same time I’ve read so few “alright”s.)
Now my “alright” alarm is going off a lot more often than it used to, which makes me think that editing standards have changed. Or maybe they haven’t. Or maybe they weren’t what I thought they were to begin with. I’m just curious which of these options is true.
(Quite a few people have responded to say that they perceive “alright” and “all right” as two different things with different meanings, which is pretty cool. I had always seen them as variant spellings of the same lexical item (which itself could mean multiple things – “doing just fine,” “acknowledged,” “acceptable”).)
:D
(For people who haven’t played the game, Alfman is the guy on the right here.)
In his essay Les virtuts cíviques del caganer (“The Civic Virtues of the Defecator”), American anthropologist Brad Erickson argues that Catalans use the caganer to process and respond to contemporary social issues such as immigration and the imposition of public civility regulations.
The ACLU does work on this! The Trump administration also recently rolled back restrictions on civil asset forfeiture but I kind of doubt you were previously unsure whether to support the current administration.
I’m not sure what else. @comparativelysuperlative mentioned submitting a brief on this, do you know who else is working on it?
I worked for the Institute for Justice, who are among the loudest voices against civil forfeiture. If you’ve heard that as of 2014 more money was seized by police than was stolen by burglars, that’s from an IJ report. Actually, any time you see any number at all about civil forfeiture in America, check the source; it’s usually IJ.
They do a lot of advocacy and more lawsuits on this than the ACLU does. Not that number of lawsuits is really the best metric.
You can totally donate to IJ. If you do – and this probably goes for everywhere else too – try to include a line saying it’s to help fight civil forfeiture abuse. They like this because it tells them what matters most to people who aren’t wacky libertarian lawyers, plus it keeps your donation from going to something you like less. (In particular, Tumblr anons might disagree with them on school choice and free speech.) They don’t seem to be very funding-constrained, though, so it might be a better bet to give to the ACLU and earmark it.
Other groups on the same brief were the Cato Institute, DKT Liberty Project, Drug Policy Alliance, Americans for Forfeiture Reform, and California Attorneys for Criminal Justice. I don’t really know much about most of these except that they agree.
If you’re looking for places to donate, consider looking up who’s running for DA. Philadelphia had one of the worst civil forfeiture systems, straight-up “that yellow tape means your house is our house now” until they got super sued. (The suit got as far as class certification, they agreed to stop the worst parts, and it’s still ongoing.) Then the city elected a civil rights lawyer to the top prosecutor spot, and once he’s in office he can just tell his subordinates to stop.
If you happen to have someone running for DA who isn’t insane about civil forfeiture, they’ll have a lot of leverage. If you specifically want a court ruling that it’s unconstitutional, well, so does IJ.
it doesn’t matter how good you’re doing, those sad nights will creep up on you from time to time and that’s ok. doesn’t mean all your progress is gone
(via earlgraytay)
I found myself saying this in a conversation the other night, and it seems worth writing down:
AI skeptics put themselves at a disadvantage whenever they make arguments of the form “when will a computer ever do [intelligent-seeming thing]?” Because once you describe [intelligent-seeming thing] in a precise way, some AI developer can write down an objective function for that thing and optimize for it. Given enough computing time, data, and a good enough model class, this will often work.
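A toy sketch of my own (not from the post) of what “write down an objective function and optimize for it” means in miniature: once the “intelligent-seeming thing” is pinned down precisely – here, just finding the minimum of a known function – a generic optimizer handles it with no intelligence at all.

```python
# The "task", stated precisely: minimize f(x) = (x - 3)^2.
# Once stated this way, plain gradient descent solves it.

def f(x):
    return (x - 3.0) ** 2

def grad_f(x):
    # derivative of f, computed by hand for this toy case
    return 2.0 * (x - 3.0)

def optimize(x0, lr=0.1, steps=200):
    """Generic gradient descent: knows nothing about the task
    except the gradient of its objective."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

x_star = optimize(x0=0.0)
# x_star ends up very close to 3.0, the true minimizer
```

Real tasks need far richer model classes and objectives, of course, but the shape of the move is the same: precise task description in, optimization pressure out.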
The exceptions are things like the Turing Test – or its little sibling, Winograd schemas – which look to practical AI developers like “cheating,” because they require simultaneous competence at many different things. “One step at a time,” the AI developers will say. Some people will work on language comprehension, others on language production, others on knowledge representation, others on social understanding; no one will “work on the Turing Test,” because that’s a fool’s errand, but eventually the Turing test will be passed.
The problem with this is that no one knows how to integrate the smaller steps – and this is the stronger argument in the AI skeptic’s quiver. We can now make computers do individual tasks that seem quite human-like, but only by ruthlessly optimizing for each task in isolation, ignoring everything but the task. This came up (sort of) when we were talking about image recognition and adversarial examples a while ago. The Inception architecture can – impressively! – distinguish between photos of all kinds of real things. But it has been optimized to do precisely that, and the genie grants the wish you told it, not the wish you wanted.
These image-classifying nets don’t know what a panda is, or even how to recognize a panda in a photo. They know how to tell when a photo is likely to be a panda and not any of the other types of things they were trained on (in the classic examples there are 1000 such types, although of course one can do more). Asked to invent an image of a panda, they will spam the canvas with incomplete, distorted fragments of (say) panda faces and paws. The result does not, in fact, look like a panda, but the net never needed to know what a panda really looked like. It only needed to distinguish real photos of pandas from real photos of turtles, bulldogs, etc., and the optimization process ruthlessly and rationally jettisons knowledge with less distinction-making value. Why learn, for instance, about the arrangement of body parts in space? Plenty of things have four limbs, a torso and a head. (It might have learned such things if it had been asked to distinguish animals from man-made objects, say. But that wasn’t the task.)
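The jettisoning described above can be shown in miniature with a toy classifier of my own construction (the feature names are hypothetical stand-ins, not anything a real net learns): a logistic regression trained to separate two classes learns a large weight only on the feature that distinguishes them, and learns essentially nothing about a feature both classes share – even if that shared feature is “really” part of what a panda is.

```python
import math
import random

random.seed(0)

def make_example(label):
    # feature 0 differs by class (stand-in: "black-and-white fur");
    # feature 1 is pure noise;
    # feature 2 is identical for both classes (stand-in: "has four
    # limbs") -- true of pandas AND turtles, so it has no
    # distinction-making value.
    x0 = (1.0 if label == 1 else -1.0) + random.gauss(0, 0.3)
    return [x0, random.gauss(0, 1.0), 1.0], label

data = [make_example(random.choice([0, 1])) for _ in range(500)]

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

# plain logistic regression, trained online
w = [0.0, 0.0, 0.0]
for _ in range(50):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for i in range(3):
            w[i] += 0.1 * (y - p) * x[i]

# w[0] grows large; w[1] and w[2] stay comparatively tiny.
# The shared "four limbs" feature carried no gradient signal.
```

The optimizer isn’t being lazy; it is doing exactly what it was asked. Knowledge that doesn’t help tell the training classes apart never gets written into the weights.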
We talk a lot about “the things computers can do these days.” Even AI-literate people do this, perhaps noticing the slight anthropomorphization, but considering it harmless. But it isn’t really true that “computers” can recognize pandas and play superhuman Go. Inception-for-image-recognition can’t play Go, and AlphaGo can’t identify a panda. A single computing machine could do both of these things if both pieces of software were present, but they’d be entirely separate, having nothing to do with one another. When we talk about “transfer learning,” we mean that something that has been ruthlessly optimized for one task may have a head start when ruthlessly optimized for a similar task (losing its aptitude for the original task in the process).
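The sense of “transfer learning” described here – a head start on a similar task, paid for with aptitude on the original one – can be sketched with a deliberately minimal one-parameter model (my construction, not anything from the post; the “tasks” are just nearby target values):

```python
def train(w, target, lr=0.2, steps=100):
    """Gradient descent on the squared loss (w - target)^2."""
    for _ in range(steps):
        w -= lr * 2 * (w - target)
    return w

task_a, task_b = 5.0, 6.0   # two similar "tasks"

w_a = train(0.0, task_a)              # fully train on task A
w_ab = train(w_a, task_b, steps=5)    # briefly fine-tune on task B
w_fresh = train(0.0, task_b, steps=5) # same budget, from scratch

# Head start: after only 5 steps, the pre-trained model is much
# closer to task B's optimum than the fresh one.
# Forgetting: its loss on task A is now worse than it was.
```

One parameter can only be good at one target at a time, which is the whole point: without some mechanism for sharing structure across tasks, optimizing for B necessarily moves you away from A.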
There are currently no “computers that do things” in the sense that the anthropomorphization connotes – single programs that can do some things, and are learning to do others, accruing dividends from the many relationships between domains of knowledge and practice. Instead, for every distinct thing to be done, there is a different “computer” (indeed many computers, with different parameters and architectures), as if you were to restart the whole process of gestation, infancy, and childhood each time you did something new – a whole new maturation process aimed at washing this particular dish you’ve never washed before, and then another for using this new household tool you’ve never used before, and on and on. Through that bizarre process, “humans” might indeed learn to do many distinct things, but you wouldn’t, indeed no one would – no one who would be able to act in the world, if the cosmic babysitters were ever to abdicate their role and leave the “humans” to their own devices.
I originally read this as claiming that we will *never* be able to integrate the steps, but after checking again that’s not necessarily what you mean.
There are people working on Winograd schemas (or, well, there are papers published on both. I’m not sure any group works on them consistently). Progress so far is nonexistent, but research is happening. I am not sure why you think that AI researchers call this problem “cheating”. Maybe you only meant engineers by practical AI developers? No one cares what they think, so I hope not.
Transfer learning doesn’t refer to what you think it does. The term you want is “multi-task learning.” There have been results in this field (though not many); to my knowledge arXiv:1706.05137 is the state of the art. (Not claiming the state of the art is good, of course.)
I agree that AI skeptics must argue that we can’t integrate multiple different architectures, or produce a single architecture that is good at many different tasks. I don’t think particularly good arguments for this claim exist, and you haven’t provided one here.
I definitely don’t have an argument that this is impossible, nor do I think it is impossible (since humans exist). I’m just pointing out a gap which is frequently overlooked, clearly in the popular press but also in conversations among people who should know better.
I do think this gap has “AI skeptical” implications, not about the impossibility of AI, but about the difficulty of putting upper bounds on how much more work is left to do, specifically work that likely doesn’t fit into the current machine learning research paradigm. That is, we’re doing a lot of impressive applied research right now, but I think more basic research is needed, and it’s a lot harder to predict how long basic research will take or how difficult it will end up being.
(Insofar as I’m arguing against AI enthusiasm, I’m mostly arguing against people who take various recent successes in deep learning to mean that we’ve found all the key insights already, and the road from now to human-level AI looks like another decade or two of the sort of architectural/training advances that make for a noteworthy machine learning paper in 2017, and nothing more than that.)
I think the “cheating” point is true, actually – I know people are working on Winograd schemas, but my actual belief is that if these things will get solved, it will be by an integration of more basic research rather than a head-on attack. If you like you can read that paragraph as if that parenthetical wasn’t there – my point would still be there, and would come across without getting tangled up in my personal beliefs about the difficulty of Winograd schemas.
(Gary Marcus said he won’t be impressed with AI until it can do Winograd schemas well, which does strike me as “cheating,” in that I think he’s really asking for something very deep that will leave him unimpressed for a long time, even if substantial progress is made. It’s how I’d feel about someone who dismisses all AI research until the very moment something reliably passes the Turing test, at which point they’re startled because they’ve ignored all the past research accomplishments that served as building blocks.)
Thanks for the pointer on multi-task learning.
(via just-evo-now)