Self-driving cars
I keep seeing people talk about how awesome self-driving cars will be, and how they’ll save lives because computers don’t make mistakes. Whenever I see that, I wonder where the hell those people get their software from, because the software I deal with is buggy shit.
And then I heard that Google wanted to sell their car without a steering wheel. Seriously? What happens if the sensors get blocked, say, in the middle of a storm? What if it bugs out due to a road condition issue? Do they really want a car that can be bricked so easily, and that at best could leave you stranded and turn you into a road hazard? If this were a mature technology, we’d know what the issues are. It is not a mature technology. Their decision not to include a manual steering system, and their aggressive lobbying against requiring one, makes me seriously doubt their competence in general.
I mean, they seem to be doing OK with their demos. I’m concerned that their good record could be a result of the fact that they get to choose when and where to test. I’m also concerned about what will happen to the cars as they age. I’m just weirded out by this constant optimism I seem to get from everywhere, even from people I would have thought had a decent amount of technological pessimism through hard-earned experience. Robotic technology has been overhyped shit since forever.
Yeah, I hadn’t really thought too hard about this but it seems like such an unpromising use of AI – the consequences for mistakes are gigantic and it’s basically impossible (?) to get a good sense of how well the algorithm will perform in practice without actually putting it into practice (dynamics will emerge from self-driving cars interacting, people changing their driving style in response to them, etc).
“Your life is now in the hands of this thing that did a little bit better than you on the training set” is a scary concept.
Software is buggy, humans are much buggier.
But they’re not buggy in the same ways. When AI fails, it tends to fail in weird ways that a human never would, which may seem obviously catastrophic to a human – the sort of idea so bad you’d usually forget it was a possibility. It’s certainly possible that performance would be better on the whole, but even then it is still scary because I’m risk averse and want to at least know what sorts of risks I’m facing.
To be concrete, I don’t want to be in a vehicle that could suddenly slam to a halt on the freeway because a programmer forgot about a rare-but-possible edge case somewhere.
(I don’t know much about self-driving cars and it would be nice to be wrong here.)
Robotics software doesn’t work like regular software. It’s not designed around a single blob of code that centrally controls a whole system of actuators, where a bug in that code will screw you over.
Instead, most robot design today is an outgrowth of the subsumption architecture of Rodney Brooks (now better known as “the creator of the Roomba”).
This architecture is used today in, for example, driverless subway systems. A subway system doesn’t have one “brain” that takes in all the facts about where all the cars are, what they see, etc., and then outputs acceleration commands to each car. Instead, each part of the subway system “sees” and “thinks” and “reacts” for itself—with the local parts being trusted to take local action, e.g. proximity sensors being directly capable of stopping the train; and with central parts mostly just giving strategic suggestions for the local processors to take into account. (Except, also, each layer’s suggestions to the layers below carry a weight-value, which can be boosted enough to dominate the lower layer’s calculation of how it should react. This is similar to a brain’s ability to override low-level modules like the fight-or-flight response with reasoned responses like “be bigger than the bear.”)
Basically, when designing a subsumption system (or other hierarchical control system), there are safeguards engineered in at every level. But that’s the wrong way to think about it, because that still suggests centralization. What you’ve got is more like a military command structure: strategy comes down from the top, but individual soldiers, agents with their own preference-functions, are tasked with implementing that strategy. The safeguards are the “men on the ground”, the actual components getting things done. They’re agents in their own right, and the (maybe-buggy) high-level strategy has to go through them, and be considered right by them, to get run.
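The weighted-suggestion arbitration described above can be sketched in a few lines. This is a hypothetical toy model, not code from any real robotics stack; the function name, actions, and weight values are all invented for illustration:

```python
# Toy sketch of subsumption-style arbitration. Each layer proposes an
# action with a weight; the local layer keeps its own reflex action
# unless a higher layer's suggestion carries enough weight to dominate it.
# All names and numbers here are invented for illustration.

def arbitrate(local_action, local_weight, suggestions):
    """Pick the action the local layer will actually execute.

    suggestions: list of (action, weight) pairs from higher layers.
    """
    best_action, best_weight = local_action, local_weight
    for action, weight in suggestions:
        if weight > best_weight:
            best_action, best_weight = action, weight
    return best_action

# A proximity sensor's reflex ("stop") outweighs a routine cruise command:
assert arbitrate("stop", 0.9, [("cruise", 0.5)]) == "stop"
# ...but a strongly boosted strategic command can override the reflex,
# like reasoning overriding fight-or-flight:
assert arbitrate("stop", 0.9, [("proceed_slowly", 0.95)]) == "proceed_slowly"
```

The point of the weights is that the override has to be earned: the default answer is whatever the local layer already wants to do.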
The vertebrate nervous system is subsumptive, in an important way. You, a brain, can decide to put your hand on a red-hot stove element. But your hand doesn’t like that idea very much, and will flinch away reflexively. The actual components implementing the strategy—the nerves in your hand—are capable of evaluating your strategy for themselves, and deciding against it. (You’re capable of overriding that decision with a forceful-enough command—if, say, you’re trying to reach through boiling water to rescue a friend—but you really have to force it; that much executive function isn’t available on a whim.)
Which is all to say: driverless cars work like that. They don’t stop dead on the road when there’s a bug in the brain, because “stopping dead on the freeway” is one of those dumb ideas that the individual components wouldn’t bother to implement. The correct strategy for a car is actually almost always “keep going at whatever speed and torque angle are required to optimize the distance between you, the cars in front of and behind you, and your lane markers.” Any central strategic calculation has to be better than that default strategy, or the “nervous system” of the car will just keep doing the default strategy, effectively “flinching away” from doing something so stupid as stopping on the freeway.
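That “default strategy wins” behavior can be made concrete with a small sketch. Again, this is a hypothetical illustration with invented names and numbers, not how any actual car’s software is written:

```python
# Toy sketch of a low-level controller whose default is "keep driving
# safely". A central "halt" suggestion only gets through if it arrives
# with more weight than the controller's trust in that default.
# All names and numbers are invented for illustration.

DEFAULT_ACTION = "hold_lane_and_gap"  # optimize spacing to lane markers and cars
DEFAULT_WEIGHT = 0.8                  # the local layer's trust in its reflex

def low_level_step(central_command=None, central_weight=0.0):
    if central_command is not None and central_weight > DEFAULT_WEIGHT:
        return central_command
    return DEFAULT_ACTION

# A buggy planner emitting a weak "halt" suggestion gets ignored:
assert low_level_step("halt", 0.3) == "hold_lane_and_gap"
# Only an emphatic, high-weight command (say, an obstacle the local
# sensors confirm too) actually halts the car:
assert low_level_step("halt", 0.99) == "halt"
```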
But I have heard that plane autopilots, at least, will just abort and hand control off to a human operator in some cases. In a driverless car without a steering wheel, that is not a possibility, so the only option would be to stop.
Your scenario assumes that the lane markers and nearby cars are identifiable, but if they are not, then what does the car do? I pointed out sensor obstructions; there’s also roadwork and other possible issues. There’s this basic problem of incoherent input that I don’t think this system solves. (Or, in the worst case, different systems in the car could try to do different things, which would be even worse.)
[oops, essay-length]
(via tsutsifrutsi)

