Uber vs. California - Robot cars not ready for roads?

I disagree! Our nervous systems are highly adapted to tracking and predicting moving objects. We evolved as hunters and tool manipulators, and we have excellent movement-detecting eyesight that can predict where a moving object will be based on its path across our visual field and our estimate of its distance. This helps when throwing a spear at a fleeing deer (an object moving well in excess of 10mph), and it helps when driving a car.
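
To make that extrapolation concrete, here’s a minimal sketch of constant-velocity prediction, a crude stand-in for what the visual system does; every number in it is invented for illustration.

```python
# A loose analogy for what the visual system does: given two recent
# sightings of a moving target, extrapolate where it will be when the
# spear (or the car) arrives. All numbers are invented for illustration.

def predict_position(p1, p2, dt_obs, dt_ahead):
    """Constant-velocity extrapolation from two observed positions.

    p1, p2   -- (x, y) positions in meters, observed dt_obs seconds apart
    dt_ahead -- how far into the future to predict, in seconds
    """
    vx = (p2[0] - p1[0]) / dt_obs
    vy = (p2[1] - p1[1]) / dt_obs
    return (p2[0] + vx * dt_ahead, p2[1] + vy * dt_ahead)

# A deer at ~30 mph (~13.4 m/s) crossing left to right, 20 m away:
print(predict_position((0.0, 20.0), (6.7, 20.0), dt_obs=0.5, dt_ahead=1.0))
# -> (20.1, 20.0): aim where the deer will be, not where it is.
```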

In addition to that, a ton of driving skills overlap with social skills: predicting from the way a pedestrian is running that they’re going to dart into the road or onto the crossing; navigating a four-way junction with no clear right-of-way markers; noticing from a driver’s early stops that they look ‘lost’ and knowing to give them distance; and a ton of other things you never even think about that keep you out of situations where reflexes are needed at all (situations where a computer might have some advantage). These all work because other people are driving those other cars, and we are very good at anticipating the movements of other cars, based partly on our experience with cars but also on our ability to understand the people driving them.

Yes, some people drink, do drugs, or drive chronically distracted. In principle, we don’t let them drive cars! It’s just hard to detect and stop them until it’s too late.

We’ve done really well with self-driving cars, but it’s a really hard problem, and we’re now running into the hardest final pieces. The last 5% is probably harder than the first 95%.

My guess is that we can probably make self-driving cars that can operate on the same roads as humans with similar accident rates, but I very much doubt all the pieces (technology, infrastructure, legislation) will be in place before 2050, and I’m not even sure about that.

I expect self-driving haulage vehicles moving along segregated lanes between out-of-city warehouses by 2030, at best.

Stated very well, thank you.

Timex’s assertion that there are already autonomous cars that drive better than most humans is just wrong. There is no autonomous car that drives as well as even the little old lady who only drives to church on Sunday. The best autonomous program (Waymo) still requires human intervention in routine circumstances, for all the reasons you lay out. Waymo is trying to implement this intervention with remote operation from a call-center, because that’s the only way to justify their continued existence economically. It’s difficult to imagine this being a satisfactory experience for the remote operators, the passengers, or other drivers on the road.

I’d also note that every part of the road network, including the behavior of pedestrians and other drivers, is adapted to accommodate human failures, not the failures of computer programs. That was demonstrated by the Arizona fatality.

There’s no reason this couldn’t (and won’t) change, hence my statement above about infrastructure. I see no reason we can’t accommodate the errors of self-driving cars, just as we do human drivers. But it won’t be how the optimists are imagining it, and it won’t be quick.

See, I find @Fifth_Fret’s position to be excessively pessimistic. It’s not going to be 30 years, though 10 is reasonable. The different stages are being solved, but it will take time. That said, they are solvable. Yes, the last stages are more difficult, but right now the toughest challenges revolve around mixing autonomous and human-driven vehicles.

I’d say we’re probably about 5 years away from large scale autonomous fleet vehicles like semis being viable, and probably about 10 from passenger vehicles.

Well, the nice thing about these sorts of timescales is that we might both be around to compare notes when 2025 rolls around!

Sure, a deer can run at around 30mph. But humans themselves didn’t generally move that fast. We can track stuff, but ancient humans were never put in situations where they were cruising around at 80mph. Ultimately, I don’t think the idea that humans can’t do this is a scientifically valid argument anyway, as our brains are elastic to the extent that we’re good at learning new skills… but still, most humans are objectively bad at concentrating on driving, and when unexpected things happen, our reflexes are rarely fast enough to salvage the situation. Really, the only reason driving works at all is that we establish rules so you can normally assume what other cars are gonna do. In regions where driving laws are more lax, it gets nuts.

See, the thing is though… humans often just fuck up in tons of those circumstances. They fail to detect stuff in front of them. They get distracted and drive off the road. They fail in countless ways, all the time.

But the difference is that you don’t say that they “need human intervention in those circumstances.” You just accept that they fuck up.

Hell, in a bunch of the cases we are talking about here with autonomous vehicles, you have a human in the loop, but somehow the fuckup is placed entirely upon the robot part of the team and the human is absolved of responsibility. With that Uber car, you had a person in the driver’s seat… and they didn’t do anything either. From the logs, we know that the computer saw the person it hit… and it identified that it needed to apply emergency braking (which was inexplicably disabled). The human had the power to stop the car, but didn’t. We have no idea if they even saw the person at all.

Well said. Gotta remember the transformational potential of getting this thing right. There is a massive economic carrot encouraging serious investment into solving this problem. It’ll get there sooner than later.

Even the worst human drivers manage to get where they are going almost always, with accident rates that are economically manageable through insurance, and without needing to call home for remote control. There is no existing autonomous vehicle that can do that in any real world scenario. The best autonomous vehicle is objectively worse than the worst human who is able to get a license.

Waymo publicizes very low “disengagement rates”, implying that they can go thousands of miles without human intervention. The fine print is that the publicized rates cover only “unplanned” or “emergency” disengagements, not the routine disengagements that actually happen every few miles of real urban driving (autonomous vehicles are quite capable of driving a long way on interstates without requiring help).
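
To see how much that fine print matters, here’s a toy calculation; the mileage and disengagement counts below are invented purely to show how the headline metric moves, not Waymo’s actual figures.

```python
# Invented numbers, purely to show how the headline metric changes
# depending on which disengagements get counted -- not Waymo's figures.

urban_miles = 10_000
emergency_disengagements = 2      # the "unplanned" kind that is reported
routine_disengagements = 2_500    # e.g. one every few miles of city driving

reported = urban_miles / emergency_disengagements
actual = urban_miles / (emergency_disengagements + routine_disengagements)

print(f"Headline rate: one disengagement per {reported:,.0f} miles")
print(f"Counting routine handoffs: one per {actual:.1f} miles")
# Headline rate: one disengagement per 5,000 miles
# Counting routine handoffs: one per 4.0 miles
```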

I believe the problems of autonomous driving on regular streets can be solved, but that’s only because I believe the problems of Artificial General Intelligence can be solved. I would not be surprised if, to get a competent autonomous driver, you needed something approaching an AGI comparable to a human child.

Whoa, really? I think the driving problems can be solved (by basically brute force), but I really don’t think we’re anywhere close to AGI. Where do you think we’re pushing close to solutions on this?

I don’t think we’re close, I think it’s 20 or more years down the line. But in principle, solvable.

I think the hardware we have now is only an order of magnitude or so short of what is needed. If someone were to spend gobs of money on it, Manhattan Project style, we might get there in considerably less than 20 years.

I don’t think the hardware is the problem, but that’s a debate for another thread.

But if the self-driving industry is in crisis, nobody told Waymo. Over the last 18 months, the company has been methodically laying groundwork to launch a commercial driverless car service.

Uber, Nvidia, and Toyota all suspended self-driving car testing in the wake of the March Uber crash—but not Waymo. Waymo continued logging miles in Arizona and elsewhere. And days after the crash, the company announced a deal with Jaguar Land Rover to build 20,000 fully self-driving I-PACE cars.

Then on Thursday, Waymo announced a massive deal for 62,000 Chrysler Pacifica minivans—by far the biggest deal for self-driving vehicles so far. Waymo wouldn’t be making deals this big unless the company was very confident that its technology was on track for commercial use within the next year or two.

When people use stories about Tesla, Uber, or Ford to argue that self-driving cars are still many years away, they ignore the fact that Google—now Waymo—has been working on this problem way longer than anyone else. In October 2015, Google was already confident enough in its technology to let a blind man take an unaccompanied test drive on Austin streets. Almost three years later, it’s not clear if anyone else has managed to build technology as sophisticated as Waymo had three years ago.

So it might be true that the rest of the industry is failing to live up to early self-driving car hype. But Waymo is in a class by itself.

The IEEE article linked from the Ars Technica piece contains a link to the actual Waymo application for California testing: http://www.documentcloud.org/documents/4458000-Waymo-Driverless-AV-Application.html.

Most interesting tidbit for me is that they don’t allow for remote driving of their vehicles. The Fleet Support Staff must support the vehicle at a higher level of abstraction.
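
As a purely hypothetical sketch of what that higher level of abstraction might look like (my guess, not anything from the application): staff answer the car’s questions and approve plans it has already made, rather than steering it.

```python
# Hypothetical: fleet staff answer the car's questions and approve
# plans the car has already made; nobody touches steering or brakes.

from enum import Enum, auto

class SupportAction(Enum):
    CONFIRM_CLASSIFICATION = auto()  # "yes, that's a plastic bag, not a rock"
    APPROVE_REROUTE = auto()         # pick among routes the car proposes
    REQUEST_SAFE_STOP = auto()       # the car finds and executes the pullover

def handle_request(car_question: str) -> SupportAction:
    """Operator chooses a high-level action; the vehicle itself remains
    responsible for every control input (steering, braking) at all times."""
    raise NotImplementedError("operator UI goes here")
```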

I’m not surprised. I can feel the lag just moving a mouse pointer over remote desktop between two devices hard-wired to good internet (but not at the same location). I can’t see remote driving a car via cell tech being low-latency enough to not cause more issues.
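
Some rough numbers on that lag; the latencies and speed below are assumptions, not measurements of any actual system.

```python
# Rough numbers: how far a car travels during one remote-control round
# trip (video uplink + operator reaction + command downlink). Latency
# and speed values are assumptions, not measurements.

def blind_distance_m(speed_mph, round_trip_s):
    """Distance covered before a remote command can take effect."""
    return speed_mph * 0.44704 * round_trip_s  # mph -> m/s, then * time

for latency in (0.1, 0.25, 0.5):  # optimistic lag up to plausible cell lag
    print(f"{latency * 1000:>3.0f} ms at 45 mph -> "
          f"{blind_distance_m(45, latency):.1f} m traveled blind")
# 100 ms at 45 mph -> 2.0 m traveled blind
# 250 ms at 45 mph -> 5.0 m traveled blind
# 500 ms at 45 mph -> 10.1 m traveled blind
```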

I live in a Red state, and I can’t imagine them investing in stuff like this in the next 20 years if it requires changes to road and highway infrastructure. I.e. they fail at the whole innovation thing, and lag behind the rest of the country at nearly everything.

This is actually interesting to read. So the Tesla only started veering off and accelerating 7 seconds prior to the crash. He had his hands on the wheel for 34 seconds out of the last minute of driving (but not in the 6 seconds prior to the crash). So he had 7 seconds to realize the car was accelerating into the barrier, which to me doesn’t seem like much.

It also makes me think that autopilot requires more attention than manual driving. When driving manually, you can feel what’s going on and your instincts are in tune, so even when distracted you still have some inkling of what’s happening. With autonomous driving you need an additional skill set: identifying what the car’s intentions are so you can adjust for them in an emergency. I think it’s less a matter of “if he was paying attention he would have avoided this accident” and more that the situation may have taken more than 7 seconds to fully grasp.

I’m not addressing your overall point or the rest of your post because I haven’t given it much thought, but if you’re driving and you look away from the road for 7 seconds, that’s a really long time. That’s not short at all. We’ve all done it probably, that time you go to adjust the radio and something abnormal is going on with it and you get distracted from the road. Then you look up and realize how long you were looking away and get that sinking feeling in the pit of your stomach, where you know you fucked up and you’re really lucky nothing happened in the interim. That’s what happens to me after 7 seconds.

Watch a driving video to put yourself in the mood, block the driver’s view, and watch the clock on the video to see how long you go before you start to feel uncomfortable. https://www.youtube.com/watch?v=CiAspHr78tg It’s all subjective, but it’s definitely less than 7 seconds for me.

I know autopilot might put you in a different frame of mind, I just wanted to address only the point that 7 seconds is a while.

In fact, in that driving video, if you start it at 44 or 45 seconds, within those 7 seconds you would hit a biker. https://youtu.be/CiAspHr78tg?t=44

I was hit by a car when the driver (she admitted this) looked down to change the station on her radio. She said it was like 2 seconds. So yeah, 7 seconds in a car is forever.
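
For scale, a quick back-of-envelope; the speeds are just chosen to bracket city and highway driving.

```python
# How far you travel while looking away; speeds chosen to bracket city
# and highway driving. The point is the scale, not the exact figures.

def distance_m(speed_mph, seconds):
    return speed_mph * 0.44704 * seconds  # mph -> m/s, then * time

for mph in (30, 70):
    for t in (2, 7):
        print(f"{t} s at {mph} mph -> {distance_m(mph, t):>5.1f} m")
# 2 s at 30 mph ->  26.8 m
# 7 s at 30 mph ->  93.9 m
# 2 s at 70 mph ->  62.6 m
# 7 s at 70 mph -> 219.1 m
```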

Yeah, anyone looking away from the road for 7 seconds should never be on the road at all. Fuck that.

Meh, I think 7 seconds while actively controlling the car vs 7 seconds while passively controlling the car are different things.

“She said it was like 2 seconds” is pretty meaningless. People usually suck at estimating the duration of time that has passed.