Uber vs. California - Robot cars not ready for roads?

The reports required by California for companies testing there give an interesting view:

California Autonomous Vehicle Disengagement Reports

There’s a pretty detailed summary here: https://www.recode.net/2017/2/2/14474800/waymo-self-driving-dmv-disengagements

Note that only unexpected/emergency conditions where operators have to take over driving are required to be reported; “routine” disengagements (when the operators know in advance that the system can’t handle the conditions) are not.

Only Waymo (formerly Google’s system) seems to be able to go more than a couple of hours without a reportable disengagement, and Waymo’s report emphasizes that many routine, unreportable disengagements happen during the day. Waymo also seems to be having increasing difficulty eliminating the last fraction of reportable disengagements.
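To get a feel for what those numbers mean, here’s a toy back-of-the-envelope conversion from the miles-per-disengagement metric the reports use into hours between takeovers. The mileage and speed figures are hypothetical, purely for illustration:

```python
# Toy conversion: miles-per-disengagement (the metric in the California
# reports) -> hours between takeovers. Both numbers below are
# hypothetical placeholders, not figures from any actual report.
miles_per_disengagement = 5_000  # hypothetical front-runner figure
avg_speed_mph = 30               # hypothetical city/suburban average

hours = miles_per_disengagement / avg_speed_mph
print(f"{miles_per_disengagement} mi/disengagement at {avg_speed_mph} mph "
      f"is ~{hours:.0f} hours between takeovers")

# A system managing only 100 mi/disengagement, by contrast:
print(f"100 mi/disengagement is ~{100 / avg_speed_mph:.1f} hours")
```

Framed that way, “more than a couple of hours” versus everyone else is a gap of orders of magnitude, not percentages.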

Nvidia, which I suspect has the most advanced system outside of Waymo’s, doesn’t really test in California, so it’s hard to pin down where they are.

If I were a driver, I’d want to know the bounds of the algorithm. For example, if I knew that white semi-trailer trucks were a bit of an issue, I might want to be on the lookout for that situation. Improving performance from 98% to 99% really means halving the number of errors remaining (and they remain because they’re hard), so it’s nice to know where you’re in trouble so you can avoid that small fraction of problem situations. This “almost solved” quality in some ways makes things worse, as one can be lulled into a false sense of security where it seems just fine to watch that Harry Potter video until your body fuses with the side of a multi-ton truck.
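To make the halving arithmetic explicit (illustrative numbers only):

```python
# Illustrative only: each accuracy gain near the top removes a large
# share of the *remaining* errors, and the survivors are the hard cases.
for accuracy in (0.98, 0.99, 0.995):
    errors_per_1000 = (1 - accuracy) * 1000
    print(f"{accuracy:.1%} accurate -> {errors_per_1000:.0f} errors per 1,000 encounters")
# 98% -> 20, 99% -> 10, 99.5% -> 5: going from 98% to 99% halves the
# misses, but the misses that remain are the hardest ones.
```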

If the car is smart enough to throw its hands in the air and say “I don’t know what to do, please help”, and gives you a few seconds of warning, that’s at least a step in the right direction.

It’s like playing the worst QTE game ever.

All they need to start with is better cruise control.

That’s already a thing:
http://nissannews.com/en-US/nissan/usa/releases/intelligent-cruise-control-icc-tech

Subaru has it as well (I have it on my Forester). It covers adaptive cruise control, pre-collision braking, lane departure warning, etc. I’ve had the pre-collision braking go off once when someone in front of me on the highway, in typical Boston high-speed traffic, slammed their brakes on. Though I was aware of what was happening, it was odd to feel the brake pedal going down a split second before I got to it.

http://www.manchestersubaru.com/subaru-eyesight-system.htm

The real winners from self-driving cars: sellers of booze.

This gets better every day!

So a while back, there was still a push for legislation under which drunk occupants could be charged even in self-driving cars. This doesn’t make sense, as the industry is really pushing for self-driving cabs or cab alternatives.

I’m hoping there is a sensible person somewhere who realizes we need to get to a point, legally, where we accept that drunk people will use self-driving cars, and that, for better or worse, this should be legal.

The sticking point for me is that, from what I hear, self-driving car AI still gets to a point where its algorithms get confused and it declares, “Jesus, take the wheel!” At that moment, a suitable, licensed, and sober driver takes over for the AI that can’t figure out how to deal with the mirrored car it’s driving behind, or slalom around the livestock that just fell off the truck in front, or move over for the long funeral cortege that needs to get to the cemetery.

If at some point the tech is so good that a human is simply cargo, then I’m fine with that cargo being drunk. Are we there yet?

No, but the problem is that self-driving public transit WILL be the first to take the wheel, so to speak. And then the argument will be, “If I don’t get a ticket in a self-driving cab/Uber/Lyft, then why do I get one in my self-driving car?”

Arguing over disengagements at that point is a blow to self-driving cars as a whole. People want to not worry and just ride. If-then rules about which self-driving method people use will only further delay adoption.

A lot of companies think that the idea of having a computer hand off driving back to a human when it gets confused is a horrible idea. They think that the idea of an assistive pilot like the Tesla autopilot is a disaster waiting to happen. A human is never going to be paying enough attention in a mostly self-driving car to take over in an emergency situation. It’s either 100% safely driven by the computer, or you shouldn’t release it.

I know that’s what Google’s targeting. Here’s an article about other companies that see the assistive step as a bad idea and are trying to skip it:

Are self-driving cars able to notice trucks like these?

Can self-driving cars be trolled?

I’m 100% okay with those being outlawed if it makes for better self-driving cars. Can we add jacked up 4x4’s and gun racks as part of the equation? Trump/Pence stickers?

Systems that include lidar would have no problem with that. The truck pattern also happens to make a great stereo-imaging target, so stereo camera systems would be OK as well. Long-distance recognition might possibly get confused. In general, though, I think the things that will confuse the vehicles won’t be the same things that confuse people, and they will be hard to predict. Google does spend a lot of time mining for hard scenarios and running millions of variations in simulation.

Like these glasses.

Hmm…

'Scuse me while I write this Fast and the Furious 14 spec script.

I would think having passengers who are drunk would be a big plus: in an accident, you’ll be all floppy and limp and less likely to be hurt than if you’re sober and tense. (Or at least that applies to people falling in bathtubs and down stairs.)

Totally applies to auto collisions. Drunk drivers get off way easier (in a physical impairment sense) than their victims.


I’ve done enough driving in Detroit winters to know that human drivers suck at driving in the snow. Autonomous vehicles have got to be an improvement.

This … is probably true.