Uber vs. California - Robot cars not ready for roads?

This story is leaning towards a malfunction:

In a blog post, software architect and entrepreneur Brad Templeton highlights some of the big issues with the video:

  1. On this empty road, the LIDAR is very capable of detecting her. If it was operating, there is no way that it did not detect her 3 to 4 seconds before the impact, if not earlier. She would have come into range just over 5 seconds before impact.

  2. On the dash-cam style video, we only see her 1.5 seconds before impact. However, the human eye and quality cameras have a much better dynamic range than this video, and should have also been able to see her even before 5 seconds. From just the dash-cam video, no human could brake in time with just 1.5 seconds warning. The best humans react in just under a second, many take 1.5 to 2.5 seconds.

  3. The human safety driver did not see her because she was not looking at the road. She seems to spend most of the time before the accident looking down to her right, in a style that suggests looking at a phone.

  4. While a basic radar which filters out objects that are not moving towards the car would not necessarily see her, a more advanced radar also should have detected her and her bicycle (though triggered no braking) as soon as she entered the lane to the left, probably at least 4 seconds before impact. Braking could have been triggered 2 seconds before impact, in theory enough time.

To be clear, while the car had the right-of-way and the victim was clearly unwise to cross there, especially without checking regularly in the direction of traffic, this is a situation where any properly operating robocar following “good practices,” let alone “best practices,” should have avoided the accident regardless of pedestrian error. That would not be true if the pedestrian were crossing the other way, moving immediately into the right lane from the right sidewalk. In that case no technique could have avoided the event.
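
For a rough sense of the numbers in points 1, 2 and 4, here is a back-of-the-envelope sketch in Python. The speed (~40 mph) and braking deceleration (~7 m/s²) are my own assumptions for illustration, not figures from the post:

```python
# Back-of-the-envelope check of the timing in the points above.
# Assumptions (not from the post): vehicle speed ~40 mph, firm braking
# at ~7 m/s^2 on dry pavement, human reaction times of 1.0-2.5 s.

MPH_TO_MS = 0.44704
speed = 40 * MPH_TO_MS        # ~17.9 m/s

def stopping_distance(v, reaction_time, decel=7.0):
    """Distance covered while reacting plus braking to a full stop."""
    return v * reaction_time + v ** 2 / (2 * decel)

# How far away the pedestrian is at each warning time.
for label, lead in [("LIDAR range (~5 s out)", 5.0),
                    ("dash-cam visibility (~1.5 s out)", 1.5)]:
    print(f"{label}: ~{speed * lead:.0f} m to impact point")

# Distance needed to stop for a range of human reaction times.
for rt in (1.0, 1.5, 2.5):
    print(f"reaction {rt:.1f} s: ~{stopping_distance(speed, rt):.0f} m needed")

# Result: ~41-68 m needed to stop vs ~89 m available at 5 s (stoppable),
# but only ~27 m available at 1.5 s (not stoppable) -- matching point 2.
```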

Ars Technica has a completely different take on this, with two people supplying alternate videos of the same road showing that visibility is MUCH higher than suggested in the Uber video (and a callout to @kerzain for pointing this out earlier in the thread).

It would not surprise me if Uber released a video that they purposely darkened to try to support their defense. And that is because Uber has done unethical things time and again in supporting their business. They are a bunch of dicks.

Noted Luddites Ars Technica. Fake news.

Although this number seems bad, the statistics they are using, which are self-reported “safety-related” disengagements in California, are easy for competitors to fudge. Waymo has stated that “routine” disengagements, when the driver has to take over because there is a situation the autonomous system can’t handle, are not reported in the California disengagement statistics. If Uber is reporting every disengagement and Waymo is reporting only the ones it classifies as “safety-related”, it’s a real apples-to-oranges comparison. There does not exist an autonomous system that can go for very many minutes of real-world, non-interstate driving without operator intervention.
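
To make the apples-to-oranges point concrete, here is a toy sketch with entirely made-up numbers (none of these are real reported figures): the same fleet looks wildly different depending on which hand-backs get counted.

```python
# Toy illustration only -- every number here is hypothetical. One fleet,
# two very different headline rates depending on reporting policy.

def miles_per_reported_disengagement(miles, safety, routine, count_routine):
    reported = safety + (routine if count_routine else 0)
    return miles / reported

miles = 100_000   # hypothetical miles driven
safety = 20       # hand-backs the operator labels "safety-related"
routine = 980     # routine hand-backs (construction, traffic cop, weather...)

everything = miles_per_reported_disengagement(miles, safety, routine, count_routine=True)
safety_only = miles_per_reported_disengagement(miles, safety, routine, count_routine=False)

print(f"reporting every disengagement:  {everything:,.0f} miles each")
print(f"reporting safety-related only:  {safety_only:,.0f} miles each")
# Same fleet, ~50x gap in the number that ends up in the comparison.
```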

What are you basing that on? Even the article addresses that near the end. Waymo right now is operating autonomous vehicles in Arizona without safety drivers, and yet you don’t hear about vehicles just sitting around not moving because they require a disengagement.

Edit: Again, your whole post is addressed in that article. Even though it’s apples to oranges, and even if you assume that Google is way under-reporting and Uber is way over-reporting (let’s be honest, Uber’s reputation is not one of being overly compliant), the numbers are still wildly misaligned. Uber’s self-driving cars have been seen almost hitting bikers in California’s bike lanes, running through red lights, etc… All of that is consistent with the idea that Uber’s system forces more disengagements than Google’s does.

Edit 2: Actually, your comment is even weirder, because even Chevy has shown videos of their autonomous vehicles navigating hands-free for 30+ minutes on the busy streets of San Francisco.

The Waymo cars “without safety drivers” in Arizona have remote drivers in what is essentially a call center, and these drivers have to take over the driving routinely. I have looked at the California disengagement reports; the ones from Waymo say that although safety-related disengagements happen rarely, “routine disengagements” happen hundreds of times a day.

I’m sure Uber’s self-driving does in fact suck, but the fact is that no self-driving system is ready for prime-time yet, and comparing disengagement rates as the article you linked does can be misleading.

It’s a bus full of employees, and the passenger is sitting in the driver’s seat but doesn’t have a steering wheel.

At least we’ve figured out where all the jobs will come from in the post-robot age.

This did not take long; then again, it was pretty obvious Uber fucked up big time.

My bet is that Dara wants to wrap this up, ASAP, including selling off the autonomous car division very soon. Uber is bleeding money and will run out of cash in 2019, and Dara has been busy trying to staunch the bleeding.

If you mean moving to a model of buying someone else’s autonomous technology, I could see that. But everything I’ve read says that there is no long-term future for Uber without autonomous vehicles, once that technology is widely adopted. The bleeding you refer to is what startups/emerging technologies do; it’s part of the business plan. It, in and of itself, is not something they should panic about, per se.

I didn’t realize Tesla robot cars were also killing people, but here it is:

Federal investigators this week began examining the March 23 crash of a Model X sport-utility vehicle that was traveling south on Highway 101, near Mountain View, Calif., before it struck a barrier, then was hit by two other vehicles and caught fire. The driver of the Model X was killed.

Tesla said its vehicle logs show the driver’s hands weren’t detected on the wheel for six seconds before the collision, and he took no action despite having five seconds and about 500 feet of unobstructed view of a concrete highway divider.

To be fair and more complete:
https://www.sfgate.com/business/article/Tesla-Says-Driver-s-Hands-Weren-t-on-Wheel-at-12795521.php

“The driver had received several visual and one audible hands-on warning earlier in the drive,” Tesla said. “The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.”
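
As a side note, the two quotes are consistent with each other; a quick check, treating the quoted 5 seconds and 150 meters as exact (both are rounded by the sources):

```python
# Quick unit check on the two quoted figures above.
meters = 150
feet = meters * 3.28084
speed_mph = (meters / 5) / 0.44704   # distance over time, converted to mph

print(f"150 m ≈ {feet:.0f} ft  -> consistent with 'about 500 feet'")
print(f"150 m in 5 s ≈ {speed_mph:.0f} mph -> ordinary highway speed")
```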

People probably will always die in car crashes. Right now, any fatality involving autopilot is big news (and the timing on this one is especially bad for Tesla), but the real question is: do people die more or less frequently with autopilot technology?

Tesla defended its Autopilot program in the blog post, saying the system made it 3.7 times less likely for a person in the U.S. to be involved in a fatal accident. Statistics for the U.S. show one automotive fatality every 86 million miles driven across all vehicles, compared with 320 million miles in those equipped with Autopilot hardware, it said.

Probably not enough miles have been driven with Autopilot yet to know whether the long-term number will stay near one fatality per 320 million miles, but that is the important statistic.
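
A quick sketch of where the 3.7x figure comes from, and why it is fragile while Autopilot miles are still relatively few. The fatality counts (k) below are hypothetical, used only to show how much a single additional event would move the number:

```python
# The 3.7x figure is just the ratio of the two quoted miles-per-fatality
# numbers. The counts k are hypothetical, for illustration only.

baseline = 86e6     # miles per fatality, all US vehicles (from the quote)
autopilot = 320e6   # miles per fatality claimed for Autopilot hardware

print(f"claimed improvement: {autopilot / baseline:.1f}x")   # ~3.7x

for k in (1, 2, 3):                       # hypothetical fatality counts
    miles_driven = k * autopilot          # total Autopilot miles implied by k
    one_more = miles_driven / (k + 1)     # figure after one extra fatality
    print(f"k={k}: one more fatality drops it to ~{one_more/1e6:.0f}M miles/fatality")
```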

I thought the Autopilot on Teslas was supposed to be more of an advanced cruise control rather than a driverless car system.

Yes, despite the misleading name, autopilot is not intended to be autonomous.

That’s fine though; in a world with autonomous vehicles, they should just keep blurring the lines and misleading people.

Isn’t that what airplane and boat/ship “autopilot” was for decades and decades? I think it’s only recently (and in SciFi) that the term has been used to refer to a system that can actually substitute for a human.

An autopilot is a system used to control the trajectory of an aircraft without constant ‘hands-on’ control by a human operator being required. Autopilots do not replace human operators, but instead they assist them in controlling the aircraft. This allows them to focus on broader aspects of operations such as monitoring the trajectory, weather and systems.

An entirely fair point. Their usage is mostly consistent with how the term is used for aircraft. Colloquial usage and understanding of that term is not super accurate either, unfortunately. I.e., they’re not lying, and I don’t think they’re really intending to deceive, but many drivers are relying on the system to do more than it’s actually able to deliver.

They should rename it to “Woahpilot”.