Uber vs. California - Robot cars not ready for roads?

Just wanted to chime in that some of my surmises earlier in the thread were wrong: this was a pedestrian walking a bike from the curb into the road. Seems entirely plausible that a vehicle under human control could not have stopped in time either.

We had a group of bikers here who tried to push a bill that said they didn’t have to stop at stop signs. It has to be near the top of a dumb law list somewhere. I could see it then, the cars are stopping but the bikes are not… what could possibly go wrong?

A different group of bikers called those other bikers lazy and stupid, and it didn’t happen.

I thought the point of this technology was that it was better than human control, not that it was plausible that it was no worse.

Better does not mean perfect.

Did I mention perfect somewhere?

In this incident, the car reacted measurably worse than an average human, unless the reports of it not even braking were in error.

You have no idea how a human would have reacted. People get hit by cars that don’t brake all the time.

I would suggest waiting until the actual, clear facts are presented before coming to a conclusion.

This.

LIDAR should be better at “seeing” in the dark than human vision, for sure. All of these robot cars should have LIDAR (I don’t know if Uber is “doing it right”).

Scroll down to the bottom of the article after the image for some good summarizing.

That’s a failure of the human driver, generally, though there are cases where there truly is nothing the driver can do. In this case, I agree we certainly don’t have the info we’d need to be definitive about anything, and I’m very interested in the exact sequence of events and timing. As I said above, to me the real question is which of three situations this was: one where a human could not have reacted in time but we’d expect a computer-controlled car to; one where a human normally would have reacted in time; or one where nothing, human nor computer, could have prevented the accident because of the timing.

Computer controlled cars won’t prevent all accidents, because, well, people. But we do have a reasonable expectation that they won’t cause more accidents than would human drivers (admittedly, a low bar).

Computers are never going to be able to assess a situation.

Like if you’re driving and you see kids playing near the street or someone heading to the corner staring at their phone oblivious to the world. In those kinds of scenarios a person can see it and slow down, even though nothing is in the road, because they realize that might change drastically and quickly.

A computer is going to look at the street because it can’t possibly filter out everything else and make judgement calls. Stuff on the sidewalk is on the sidewalk and not a problem. It would probably see a kid careening down the sidewalk towards the street as something irrelevant until the kid was in the street, at which point inertia would probably make that recognition irrelevant.

As a rule, putting a computer in charge of whether someone lives or dies is a terrible idea.

I don’t think judgement calls like that are necessarily an impossible problem, but I do think claims that autonomous cars are safer than human drivers are very premature. The auto accident fatality rate in the US is 1.2 per 100 million vehicle miles traveled; fully-autonomous cars have driven maybe 10 million miles. We’re less than 1% of the way to where the statistics are meaningful at all, much less meaningfully comparable.
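The sample-size argument above is easy to check with back-of-envelope arithmetic (the 10-million-mile figure is the poster’s rough estimate, not an official number):

```python
# Rough check of the sample-size argument.
# US human-driver fatality rate: ~1.2 deaths per 100 million vehicle miles.
human_rate = 1.2 / 100e6  # fatalities per mile

# Rough estimate of fully-autonomous miles driven so far (from the post).
av_miles = 10e6

# Expected fatalities if autonomous cars were exactly as safe as humans:
expected = human_rate * av_miles
print(round(expected, 2))  # 0.12
```

At the human rate we’d expect about 0.12 deaths over that mileage, so a single fatality (or the absence of one) tells us almost nothing statistically either way.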

All of what you wrote is 100% false. My car already watches for things (people, objects, other cars) that are going to cross my path and warns me before they do. It is absolutely possible for a computer system to get a 360-degree view of the environment around a vehicle and decide what things in it are potential threats to safe navigation.
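The forward-collision warning being described is, at its simplest, a time-to-collision threshold; here is a minimal sketch (the function name, numbers, and 2-second threshold are illustrative, not any manufacturer’s actual values):

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Time until the gap to a tracked object closes at the current
    closing speed. Returns None if the object is not closing on us."""
    if closing_speed_mps <= 0:
        return None
    return gap_m / closing_speed_mps

# 30 m gap to an object crossing our path, closing at 20 m/s:
ttc = time_to_collision(30.0, 20.0)
print(ttc is not None and ttc < 2.0)  # True -> warn the driver
```

Real systems fuse radar/camera tracks and model curved paths, but the core decision is the same: if the predicted time to collision drops below a threshold, warn (or brake).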

However, if a threat comes out of nowhere, without regard to its own safety, no human or car can prevent an accident.

From the article I posted:

Waymo seems to favor a higher-detail view of the world, with Krafcik saying “The detail we capture is so high that, not only can we detect pedestrians all around us, but we can tell which direction they’re facing. This is incredibly important, as it helps us more accurately predict where someone will walk next.”
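Nothing public describes Waymo’s actual model, but the idea in that quote, using a pedestrian’s facing direction to predict where they will be next, can be sketched in a few lines (all names and numbers here are illustrative assumptions):

```python
import math

def predict_position(x, y, heading_deg, speed_mps, dt):
    """Naive constant-velocity prediction: assume the pedestrian keeps
    walking in the direction they are facing. Real systems use far
    richer models; this just shows why knowing heading matters."""
    heading = math.radians(heading_deg)
    return (x + speed_mps * dt * math.cos(heading),
            y + speed_mps * dt * math.sin(heading))

# A pedestrian at the curb (0, 0) facing the roadway (90 degrees),
# walking at a typical 1.4 m/s: in 2 seconds they are ~2.8 m into the lane.
px, py = predict_position(0.0, 0.0, 90.0, 1.4, 2.0)
print(round(px, 2), round(py, 2))  # 0.0 2.8
```

The same pedestrian facing parallel to the curb would be predicted to stay out of the lane, which is exactly the distinction the quote says matters.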

It’s definitely something they think about. Of course it is. How many of us encounter that sort of situation on at least a monthly basis, if not more regularly? Pedestrians make driving hard. The people making these systems are definitely not ignoring them.

That’s not true. Humans do this every day; we’re just not always successful. Not every cat, dog, or deer that crosses the street gets struck. Defensive, alert drivers avoid these all the time, and sometimes hitting them is the safer choice.

Robot driving systems are designed to “be careful” in the same way humans are in these scenarios. We need more info on this one to know if it was better or worse performance than a human being. That being said, if anyone got it wrong, it’s effing Uber.

I certainly don’t know if a human would have been better at this decision or not. This sounds like the other topic we had that went round and round with medical denials, or some other automated process… I forget which, but Menzo said no human can prevent an accident when a threat comes out of nowhere, and that is absolutely not true. It happens somewhere every day, multiple times a day.

I think you and he are arguing against different definitions of out of nowhere then.

I’m reasonably sure he means it was impossible to perceive in any way the theoretical pedestrian/problem before impact.

You clearly mean something different if you think it would be feasible to prevent that accident.

Edit: Sorry for all the edits, I know it just pops notifications like popcorn. :/

OK, fair. I’m just trying to defend against people who are going to freak out every time there’s an accident involving an autonomous vehicle.

I personally think it’s appropriate for us to require that this new technology improve upon human drivers from the very first iteration before widespread adoption. At some decent scale, these cars should be better at avoiding accidents than humans, and a pilot program should run long enough to determine whether that’s true.

If not, keep working. If it is, then we should keep moving the goalposts with each iteration: say, a 5% improvement each year until car accidents are extremely rare because most vehicles are autonomous.
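For what it’s worth, “until car accidents are extremely rare” really would take a while at that pace; 5% per year compounds, but slowly (the 90% reduction target below is just a stand-in for “extremely rare”):

```python
# How long does a 5% yearly reduction take to cut the accident rate
# by 90%? Pure compounding, nothing more.
rate = 1.0
years = 0
while rate > 0.10:  # until only 10% of the original rate remains
    rate *= 0.95    # 5% improvement per year
    years += 1
print(years)  # 45
```

So even sustained 5% annual gains take roughly 45 years to make accidents an order of magnitude rarer, which supports the point that this will take a while.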

But that will take a while, and it’s frustrating to me seeing people fighting against even figuring out if it’s possible.

Well that’s what I get for speaking for someone else. :P

Yes, this.

You may once again refer to me as the Menzo whisperer.