Uber vs. California - Robot cars not ready for roads?

What the fuck. The solution to “emergency braking maneuvers breaking up normal operation” fucking well isn’t “disable emergency braking maneuvers unless a human presses a button in time.”

How much you wanna bet that that was a decision made at a middle-management level and not an engineering level when they were told that fixing the emergency-braking heuristic was going to take another couple months?

I’m not sure I understand the outrage about this.

A person got run over because of derpy decisions.

I have a Subaru with the EyeSight package, which has automatic braking. You have to very explicitly opt out of using the system by pressing a console button to disable the whole deal, at which point there are warning signs on the dash that the system is disabled.

I have NO IDEA why you’d opt out of the safety feature. My car is surely much older, shittier tech, and 99.5% of the time it works great. When it doesn’t, it tells you in no uncertain terms that there MIGHT be a problem up ahead, and it’s really not a big deal to look up, adjust your position in the lane by a few inches for that guy lazily exiting the highway from your lane, and move on with your day.

Robot car saw a person in the road and decided to let them die.

I mean… if you’re cool with cars just running people over, I guess it’s no big deal.

Maybe how they were testing, what they expected the driver to do, and what the system was capable of doing should have been looked at, in depth, by people outside Uber before they went into testing on public roads… just saying.

You mean a human (or group of humans) made the decision to disable emergency braking.

This sort of thing has cropped up before, and it has always been attributable to human error.

I mean… sure like 1.3 seconds before impact or whatever. That it saw a person 6 seconds before impact and didn’t know what the fuck they were is the bigger issue.

Also stopping a car doing 45 mph in 1.3 seconds is… generous.

Stopping from 55 mph takes about 4.5 seconds.
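For what it’s worth, a quick back-of-the-envelope check (my own numbers, assuming ordinary hard braking somewhere in the 0.5 to 0.8 g range, not anything from the report) puts the stop from 55 mph at roughly 3 to 5 seconds, so 4.5 is in the ballpark:

```python
# Rough stop-time sanity check from 55 mph. The deceleration values are
# assumptions (typical hard braking), not figures from the NTSB report.
MPH_TO_MPS = 0.44704
G = 9.81  # m/s^2

speed = 55 * MPH_TO_MPS  # ~24.6 m/s

for g_frac in (0.5, 0.8):                  # assumed deceleration, as a fraction of g
    decel = g_frac * G
    t_stop = speed / decel                 # time to reach zero at constant deceleration
    d_stop = speed ** 2 / (2 * decel)      # stopping distance
    print(f"{g_frac:.1f} g: {t_stop:.1f} s to stop, {d_stop:.0f} m (~{d_stop * 3.28:.0f} ft)")
```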

Nice.

I’m not sure that’s right. It made sensor contact with something 6 seconds out. That doesn’t mean that there was sufficient data there to resolve it (e.g., it might not be purely an ID-software issue; it may legitimately be a data-deficiency issue).

I’m not sure a typical person’s sensors (e.g., human eyeballs) would have done better. That’s 378 feet out. Even if a person were able to spot that something was out there, I don’t think you’d be able to resolve it as a bicycle right away, either. Now, you don’t necessarily need to resolve what something is in order to avoid something you don’t want to hit.
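(For reference, 378 ft is just 6 seconds of travel at roughly 43 mph, which is my assumption about the speed being used here; the thread also mentions 45 mph. Quick check:)

```python
# Where "378 feet out" comes from: distance covered in 6 seconds at constant speed.
# 43 mph is my assumption about the figure the poster is working from.
MPH_TO_FPS = 5280 / 3600  # ~1.467 ft/s per mph

for mph in (43, 45):
    ft = mph * MPH_TO_FPS * 6.0
    print(f"{mph} mph for 6 s -> {ft:.0f} ft")
```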

So, the report says it first made contact at 6 seconds out. The report further says that the object was categorized, variously, as several things. It doesn’t say that it didn’t (correctly) ID it as a bike until 1.3 seconds. Those alternative categorizations could have all happened in less than 4.7 seconds.

Further, it isn’t clear from the report whether the categorization made a substantial difference (e.g., the car wouldn’t brake if it was another car but would do so if it was a bike). If the logic determines that a collision should be avoided whether the object is an “unknown object”, a “vehicle”, or a “bicycle”, then that’s understandable.
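To make that concrete, here’s a toy sketch (entirely hypothetical, not Uber’s actual logic) of what classification-independent avoidance could look like: the braking decision keys off the predicted collision, not the label.

```python
# Toy sketch of classification-independent collision avoidance.
# Entirely hypothetical -- not based on Uber's actual software.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    label: str                 # "unknown", "vehicle", "bicycle", ...
    distance_m: float          # range to the object along our path
    closing_speed_mps: float   # how fast we're closing on it

def should_brake(obj: TrackedObject, reaction_margin_s: float = 1.0,
                 max_decel_mps2: float = 6.0) -> bool:
    """Brake if we can't stop (plus a margin) before reaching the object,
    regardless of what the classifier thinks the object is."""
    if obj.closing_speed_mps <= 0:
        return False
    time_to_collision = obj.distance_m / obj.closing_speed_mps
    time_to_stop = obj.closing_speed_mps / max_decel_mps2
    return time_to_collision < time_to_stop + reaction_margin_s

# The label never enters the decision:
print(should_brake(TrackedObject("unknown", 40.0, 19.0)))   # True
print(should_brake(TrackedObject("bicycle", 120.0, 19.0)))  # False (still far away)
```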

In the end, I don’t draw as strong a conclusion as you that the bigger issue is that it wasn’t able to immediately correctly ID what the object was.

On braking distance, I don’t think you necessarily have to come to a full stop to avoid a collision. Some braking, some steering: that’s how most of us avoid hitting something in the road. It’s very possible that, if the emergency braking protocol had been enabled, the collision could have been avoided.
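And even short of a full stop, braking through those last 1.3 seconds sheds a lot of speed. A rough calculation (the deceleration values are assumptions, not report data):

```python
# How much speed hard braking sheds in 1.3 s; deceleration values are assumptions.
MPH_TO_MPS = 0.44704
G = 9.81  # m/s^2

v0 = 45 * MPH_TO_MPS  # using the ~45 mph figure from earlier in the thread
for g_frac in (0.4, 0.8):
    v1 = max(0.0, v0 - g_frac * G * 1.3)
    print(f"{g_frac:.1f} g for 1.3 s: {v0 / MPH_TO_MPS:.0f} mph -> {v1 / MPH_TO_MPS:.0f} mph at impact")
```

Combine that with even a small steering correction and it’s a very different collision, if it’s a collision at all.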

It would cost more (in dollars and manpower) to oversee the program with that level of scrutiny than it probably would to design and implement the system in the first place, even in a perfect world where no accidents happened. I think banning them totally from AZ roads was an overreaction.

A computer should be faster than a human’s response; if it’s programmed correctly, it will be. It was programmed badly. It failed. Human response is not the issue here.

The human mind is an amazing computer for non-deterministic/heuristic calculations (e.g., informed guessing). Can a computer complete complex mathematical equations faster than a human? Of course. By orders of magnitude. Can it draw complex conclusions from open-world data as well? No. I wouldn’t make any assumptions about how fast a computer should be able to identify an object on a dark road, compared to a human with access to the same data. The computer can perform tons of calculations on the data that it receives, but arriving at conclusions about what that data represents, when it goes beyond simple, deterministic pattern-matching, is tough.

We (humans) make decisions all the time based on partial data. We then constantly reevaluate as more data comes in. It’s much harder for the computer to push forward with uncertain data. Just look at how hard it was for programmers to come up with computers that could beat chess masters. It took years, and years, and years, and wasn’t solvable by just throwing more brute force cycles at the problem.

To reiterate, I don’t think it is necessarily a huge problem that this car couldn’t instantaneously classify the sensor hit as a bicycle. There’s nothing magical about lidar that means getting bounced back infrared light must mean that the system can instantly tell what that light bounced off of. I think ShivaX may be thinking that the computer had a perfectly clear lidar image and couldn’t process it—e.g., you put the Mona Lisa 5 ft in front of a mentally-ill person and they say “tree”. I don’t see any reason to believe that’s the case, here.

In contrast, I think it is a problem if, for example, the programming required definitive classification to proceed with action. For these self-driving cars to work well, they’ll have to be more human-like and work with imperfect data and imperfect conclusions.
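In the same toy spirit as above, the difference I’m worried about might be as simple as whether the logic gates on a confident label or acts on the combined chance that anything is in the path (again, a hypothetical sketch, not Uber’s code):

```python
# Two toy decision policies (hypothetical, not Uber's code): one refuses to act
# until classification confidence is high, one acts on the probability that
# *something* is in the path.

def brake_if_confident(label_probs, confidence_threshold=0.9):
    """Only brakes once a single hypothesis dominates -- brittle with noisy data."""
    best_label, best_p = max(label_probs.items(), key=lambda kv: kv[1])
    return best_p >= confidence_threshold and best_label != "clear_road"

def brake_if_anything_there(label_probs, risk_threshold=0.5):
    """Brakes when the combined probability of any obstacle is high enough."""
    return 1.0 - label_probs.get("clear_road", 0.0) >= risk_threshold

# Ambiguous sensor hit: the classifier can't settle on a label,
# but it's clearly not clear road.
probs = {"clear_road": 0.05, "vehicle": 0.35, "bicycle": 0.30, "unknown": 0.30}
print(brake_if_confident(probs))       # False -- waits for certainty it may never get
print(brake_if_anything_there(probs))  # True  -- acts on imperfect data
```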

I don’t see how providing regulatory boards with the information Uber just released now, about how they disabled the emergency braking system, would have cost more. It might or might not have led to more questions, but it’s information they already had.

I’m astonished I got the quote that close from memory!

I meant setting up and staffing the supervisory boards themselves. Is there a sufficient number of qualified people if Uber and Google have already hired most of them? In each and every state? Is a supervisory board in Arkansas with a $100k budget going to be able to pore over the source code and tell the difference between a bug and a properly working feature, before the car even rolls out onto the street? Even a 100% properly working car is going to be programmed to make life-and-death decisions at some point. Should the car choose between the elderly woman crossing the street on the left, the bicyclist on the right, or the baby in the back seat? Which choice is the bug?

[edit]

Lots of edits!

I think you took what I said and assumed I meant setting up a new FDA for each state, when really what I’m talking about is a document that would outline things like: the emergency braking is turned off for xyz reason; yes, we expect the driver to manually grab the wheel within a number of seconds to avoid an accident; while our car can identify objects with this radar, it’s our proprietary software that determines what to do with that data; we have x number of cameras at these locations on the car, which should give the car the ability to “see” x distance around this percentage of the vehicle.

No code reading, just a review. I think some of these questions should have been answered before testing, not after. Some of them, of course, you would not know to ask until after an incident but to think you’d have to read source code to do Q&A ahead of testing is simply not true.

IMO, 75% chance a document of that type would just make a lot of people confused and upset, and act irrationally out of fear. Sort of like gluten or the anti-vax crowd.

Except one would be facts and the others are just made-up bullshit.

They all started out as people misinterpreting facts, or being mistrustful of facts, or being afraid of facts, or having more facts thrown at them than they can process.

The document would be too long and detailed for anyone to properly digest and understand. Plus, the software is probably being rewritten on a frequent basis. Do they have to report every commit they make to the software repository? In the end, the courts are probably going to have to settle on the best solution.