Uber vs. California - Robot cars not ready for roads?

Haha, don’t worry, they can identify and hassle mothers and fathers on a regular basis due to bottles of soy milk. I am sure they didn’t let any bottled water or milk through… just like they were taught.

Not to turn this into a TSA thread, but the purpose of the TSA isn’t really to catch that stuff – it’s to convince potential terrorists that their chances of being caught are too high to make it worth the risk.

But they are trained to catch that stuff and are supposed to.

Although maybe the purpose of the human driver isn’t really to catch these mistakes, but to assume liability when mistakes happen… hmmm

Uber putting a driver behind the wheel seems like a ploy to make the public, and the government they get the okay from, feel like these cars are safer than they are… too.

As I said in another thread, I don’t see a smart lawyer in front of a jury letting rich Uber escape blame by saying Bob the co-pilot was at fault.

I agree. But it is better for PR to be able to blame the driver.

To be fair, I doubt that was their main motivating force for including the human driver. There are a lot of good (and bad) reasons for having one.

heh, like changing a flat tire?

Absolutely. “Car can break down” and “car sounds/feels funny, it might be breaking down” are probably the #1 reasons to include a human at all times.

Let’s hope they don’t read ABC News then!

You can’t rely on human co-pilots to be attentive 100% of the time and instantly react to an emergency. Humans in regular cars don’t do that, which is part of the problem the autopilots are meant to solve.

For the current co-pilots, I would expect them to take over when there’s an unusual situation ahead. Maybe they’re approaching the scene of an accident with debris in the road, or maybe there’s a dark line of heavy rain ahead. But reacting to something like a pedestrian stepping from the dark right into your path from a short distance? Nope.

Yeah, an issue for a human. I still don’t understand, because the articles I have read suggest that for these sensors, light or dark isn’t supposed to matter. When they were reporting she might have committed suicide or stepped suddenly in front of the car, I was actually expecting sudden movement from the pedestrian… but it looks like she casually walked across the road, and was in the road long before the headlights hit her.

Well I assume that’s correct in that it’s how the sensors are supposed to work. We just don’t really know anything about what happened yet. Did the sensors not work at all? Meaning did they truly not detect her? Because it was dark? Because they weren’t configured correctly and didn’t have the right field of vision? Or did the sensors “see” her, but the software or some other system failed to categorize her correctly as an obstacle somehow?

Or hey, maybe the brakes on the car just failed? I mean, that’s very unlikely to have been the cause just because I imagine we’d have heard about it immediately if it were that simple, but there will always be the possibility of some kind of vehicular malfunction that wouldn’t matter who or what was driving.

The sensors had to have completely failed; the car didn’t brake or attempt to avoid hitting her at all. If there were any sort of night sensors, they flat out didn’t work. A human wearing night vision goggles would have seen her and avoided her fairly easily. I strongly suspect there is no night vision/LIDAR of any sort on the vehicle. The only other reasonable conclusion is that the car decided to ignore her even though it saw her, which seems unlikely (though not impossible; at the end of the day programs fail, maybe it was a bug).

Even a person without any sort of night vision would have swerved or slammed on the brakes, imo (they’d almost assuredly still have hit her). The footage seems to show the car completely ignoring her.

Perhaps the engineer who wrote the software subscribed to the Timex Deadly Traps school of thought.

The Uber cars have LIDAR (you can see the scanner on top of the car in the video from the scene, and Uber was in a big IP fight with Waymo about their LIDAR tech). The LIDAR must’ve failed to categorize the victim as an approaching obstacle.

Hmm. Here’s some dashcam video of the same location at night. Quite a difference.

This makes it seem more likely that a human driver may have reacted earlier. I’m not saying a regular person driving would’ve stopped in time, but the Uber AI didn’t even try.

I’m jealous of anyone who’s used so little software in their life that they imagine a bug in the program to be “unlikely”. But regardless of which is more likely, the point is we still don’t know from this video what part of Uber’s system failed (LIDAR, some other hardware, something in software, some combination, something it’s programmed to catch but failed to, an edge case exposed in the circumstances of the sensor data that it wasn’t programmed to detect, etc.), and I’d say it’s also nearly impossible to know what an attentive human driver would’ve seen in this scenario.

Uber and the police almost certainly have more information about one or both of those things, but just based on the video, I really don’t think there are any meaningful conclusions we can draw.

I’m giving them a massive benefit of the doubt to be somewhat neutral, but I personally think either the system failed completely and ran into a woman it knew was there, or the sensors that were supposedly on the car aren’t actually there or weren’t working.

It might have classified the signal as a vehicle/bike in the next lane over and not gotten the object’s trajectory quickly enough to realize it was moving against the normal flow. You wouldn’t want your robot car to slam on the brakes every time it passes a bike in the bike lane, but it does need to know which direction the bike is traveling to make that determination.
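Just to illustrate what I mean (a purely hypothetical sketch, nothing to do with Uber’s actual stack; the names and thresholds are made up): a planner that only checks which lane an object currently occupies treats a bike in the adjacent lane as harmless, while one that also projects its lateral motion forward would notice it drifting into the car’s path.

```python
# Hypothetical sketch of trajectory-aware obstacle handling.
# All names and thresholds are invented for illustration only.
from dataclasses import dataclass


@dataclass
class TrackedObject:
    label: str                # e.g. "bicycle", "vehicle", "pedestrian"
    lateral_offset_m: float   # distance from our lane centerline
    lateral_speed_mps: float  # negative = moving toward our lane


LANE_HALF_WIDTH_M = 1.8   # rough half-width of our lane
TIME_HORIZON_S = 2.0      # how far ahead we project the object's motion


def should_brake(obj: TrackedObject) -> bool:
    """Brake if the object is in our lane now, or its projected position
    within the time horizon puts it in our lane."""
    projected_offset = obj.lateral_offset_m + obj.lateral_speed_mps * TIME_HORIZON_S
    in_lane_now = abs(obj.lateral_offset_m) < LANE_HALF_WIDTH_M
    in_lane_soon = abs(projected_offset) < LANE_HALF_WIDTH_M
    return in_lane_now or in_lane_soon


# A bike cruising along in the bike lane: stays out of our path, no braking.
print(should_brake(TrackedObject("bicycle", 2.5, 0.0)))   # False
# The same detection, but drifting across the lane line toward us: brake.
print(should_brake(TrackedObject("bicycle", 2.5, -1.0)))  # True
```

The point being: if the tracker misjudges the direction (or never estimates lateral velocity at all), those two cases look identical and the car sails right on through.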

I guess I don’t quite follow: who is the “them” that you’re giving the benefit of the doubt to?

We may not be disagreeing.

Something failed, because a woman was hit by a car. There’s a video of it happening, but I don’t think the video tells us anything useful for diagnosing the failure as a bunch of forum pundits. I wasn’t specifically trying to refute anything you (or Nesrie) said in my first post this afternoon, just weighing in against anyone who’s ready to make pretty much any conclusions with the information we have.