Uber vs. California - Robot cars not ready for roads?

Wasn’t that question asked last year? If a self-driving car finds itself in a situation where it either crashes or veers into a group of pedestrians, what does it do? What do the programmers tell it to do? Save the passengers and take out the pedestrians? Or save the pedestrians and sacrifice the passengers?

The science: http://science.sciencemag.org/content/352/6293/1573

The summary:

Don’t assume that self-driving cars will be able to categorize the things they see beyond “signal” and “obstacle”.

If it can’t avoid a collision entirely by steering, the car will just slam on the anti-lock brakes and hope for the best. It will try to avoid a collision, but if a collision can’t be avoided, it will be in no position to judge the best collision to have.

This is a flawed argument that I see repeated constantly.

The self-driving vehicle will be keeping enough space behind the school bus to come to a complete stop if the bus suddenly does. All vehicles are supposed to leave this much room, but drivers don’t in the real world. Self-driving vehicles will, and they will be able to react to sudden stops by the school bus much faster than humans can.

So self-driving vehicles won’t be getting into these situations to begin with; they will stay in situations where they can stop in place as safely as possible.

I’m assuming this won’t include hazmat trucks… it might be one driver, but hundreds of gallons of gas, beer, or even milk in a stream is an eco-disaster. The Red Cross trains for that.

You’re not paying attention. It’s not the truck’s fault; in this scenario it’s the other car’s fault. If you think semi-trucks leave hundreds of yards between themselves and the car in front of them, that’s nuts. Leave that much space and another car will just move in front of them.

Yes, but the truck is constantly adjusting its speed (and slowing down) if it sees other cars putting it in an unsafe position, so that it can be as safe as possible.

The only decision it will make is: is it safe to proceed at my current speed, yes or no? If no, it will slow down or stop. If another car proceeds to ram into it, then that’s on the other vehicle, but the truck will try to stop and not take evasive action, because even with human drivers evasive action can cause just as many accidents (and probably even more damage) than just getting hit.
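For what it’s worth, that yes/no rule can be sketched as a tiny decision function. This is a hypothetical illustration, not any real system’s logic; the gap, stopping-distance, and margin numbers are made up:

```python
# Sketch of the "is it safe to proceed at my current speed?" rule:
# proceed only if the current gap to the vehicle ahead covers our
# stopping distance plus a safety margin. All values are illustrative
# assumptions, not real vehicle data.

def choose_action(gap_m: float, stopping_distance_m: float,
                  margin_m: float = 5.0) -> str:
    """Return 'proceed' if we could stop within the gap, else 'slow_down'."""
    if gap_m >= stopping_distance_m + margin_m:
        return "proceed"
    return "slow_down"

if __name__ == "__main__":
    # 100 m gap when we need 64 m to stop: safe to keep going.
    print(choose_action(100.0, 64.0))
    # 50 m gap when we need 64 m to stop: ease off.
    print(choose_action(50.0, 64.0))
```

The point of the sketch is that the rule is binary and conservative: the vehicle never has to pick the “best collision,” it only has to refuse to proceed when it can’t guarantee a stop.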

Have you ever been in a semi-truck while it drives long distance and in traffic, with a load, a fully loaded semi-truck? Not to put too fine a point on it, they don’t stop or start or move like passenger vehicles… and that has nothing to do with a human controlling it.

Ever driven a major freeway anywhere it was actually possible to leave that safe distance between you and another car? It doesn’t happen. In LA, cars drive 70 mph with very little room, and if you try to leave room, someone always slides in. If the AI tried, it would soon burn itself out trying.

I’m wondering how they’re going to program the AI to use the escape ramp or emergency runaway… aka the long strip of often-deep gravel near steep downgrades. It’s where trucks go if the brakes get too hot and stop functioning. In theory the AI should actually do better than a human at knowing how to avoid that… but that might conflict with all the braking it’s going to do when cars keep sliding into its safe space.

You are still missing my point. Obviously it’s possible to drive trucks with these massive loads today and do so safely. The majority of the trucks do not get into accidents today with human drivers because the humans know how to drive them safely most of the time. They know how to interpret the traffic ahead and how everything is moving to gauge how fast or slow they need to be driving in order to not get into an accident.

It’s not like these loads are heavier because computers are driving them instead of humans. The only difference is that the human drivers are using instinct, experience, and immediate senses to determine how fast or slow they should be driving, while by the time self-driving trucks are fully running and out of test mode, they will be using massive amounts of sensor data that are instantaneous and not distractible (in the way human drivers are).

Furthermore, the computer can instantly calculate how much space it needs to slow down given how much it currently weighs and actual data on how much friction the tires are encountering on the road, whereas a human truck driver is just guessing.
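The textbook version of that calculation is simple: total stopping distance is reaction distance plus braking distance, d = v·t_react + v²/(2μg). (In this idealized friction model the mass actually cancels out; weight matters in practice through brake heat and tire loading.) A sketch, with made-up friction and reaction-time numbers:

```python
# Idealized stopping distance: reaction distance + braking distance,
# d = v * t_react + v^2 / (2 * mu * g). The friction coefficient (mu)
# and reaction times below are illustrative assumptions, not measurements.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_mps: float, mu: float, reaction_s: float) -> float:
    """Total distance to stop from speed_mps on a surface with friction mu."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * mu * G)

if __name__ == "__main__":
    v = 29.0  # roughly 65 mph, in m/s
    # A computer reacting in ~0.1 s vs. a human at ~1.5 s, dry road (mu ~ 0.7):
    print(f"computer: {stopping_distance(v, 0.7, 0.1):.0f} m")
    print(f"human:    {stopping_distance(v, 0.7, 1.5):.0f} m")
```

With those assumed numbers, the braking distance itself is identical; the entire difference comes from the ~1.4 s of extra reaction time, during which a human covers another 40-odd meters at highway speed.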

All of your arguments are against semis in general and have zero to do with automation. Either semis are unsafe for the roads and should not be on highways as they currently are, or you believe that a human with only 2 eyes (that can only look one direction at a time) will always be safer than a computer with much, much more data and less emotion tied to it.

If you don’t believe that a computer can ever self-drive a semi on the highway safely, then I don’t understand how you can ever believe that a car can self-drive in a downtown environment such as New York City. A car is still several tons, and instead of predictable traffic you have traffic lights, stop signs, one-way streets, pedestrians, buildings, sharp turns, etc… That is a fundamentally much, much harder problem (and much less safe) than a self-driving semi that can do calculations to determine whether it needs to slow down because a car cut in front of it, until it thinks it can safely stop if the car ahead stops.

What’s your point? If a car cuts in front of it, it will slow down to get back to a minimally safe distance, but it’s not like it’s going to pound the brakes to get to that distance immediately. It will still keep moving forward and making progress, which will still be cheaper for the shipping companies because it doesn’t have to stop to sleep, get snacks, etc… It will literally be no different than it is now.

I will state this simply. I don’t, but if a car hits another car, there is a chance of survival, even for a pedestrian, and a very good chance there is not going to be a massive accident. When a semi is involved in an accident… it’s a different story, and often not a great chance of survival.

Everything you’ve said suggests to me you have very little if not zero experience with semi-trucks. Since you didn’t really answer the question, I’m going to assume it’s zero.

And let me make something clear: driving a car is not a safe way to travel. It’s one of the most dangerous things we do on a daily basis. Semis do not improve that statistic. They’re not safe. You keep talking about slowing down as if they can just press the brakes and actually stop… that’s not how it works.

And driving a car at full speed into a motorcyclist or a pedestrian is very likely to result in a casualty as well, and the areas where non-semi automation is being pursued heavily increase those scenarios.

You keep trying to talk to me like I’m an idiot. Take any situation: can a human stop the semi safely? If yes, then why do you believe that a computer with MUCH more information, one that’s not tired or distracted, couldn’t? If no, then at the very least it’s a wash, because the accident would have happened anyway, with the possibility that no accident would have occurred at all if the truck had noticed environmental and traffic changes a couple of seconds earlier.

And let’s not even talk about how human drivers tend to take evasive action. Like the truck driver a week ago who realized he was drifting into the other lane, over-corrected, and jackknifed right into another car, blocking the whole highway (oh, btw, no fatalities). Or the time before that, when the driver wasn’t paying attention to the giant flashing arrow saying the left lane was closed until the last second and tried to merge right into me.

Or look at last year, when I was stuck on the Florida Turnpike because a semi driver fell asleep at the wheel, went through the median, struck a car, and ended up completely across opposing traffic (injuries but no fatalities there, too).

There are so many stories I can give just from personal experience, but these are all avoidable accidents that happen because highway driving is extremely monotonous and mind-numbing, with employers demanding things get shipped as fast as possible. Outside of the falling-asleep scenario, it’s extremely likely that once self-driving vehicles graduate out of testing (which I don’t believe will happen for at least 5, maybe 10 years), a computer could have avoided the catalyst of those scenarios to begin with, and when something does happen, it can react much faster than a human can. Tesla’s Autopilot has already started to prove this: there’s far more evidence of it reacting to potential crashes before a human has than of the reverse, even with a few mistakes.

I haven’t seen anything by you explaining why a flawed, bored, and tired human will always be better at making decisions than a computer with a lot of sensors and real time information from all angles.

Sure, but they aren’t going away, and with all the excitement about self-driving vehicles revolving around consumer transportation, that scares me a lot more than semis do, because those are already driving around areas where pedestrians are out and about, with unpredictable people, motorcyclists, and traffic.

No, I don’t think I am. I just don’t know why it’s difficult to understand why I don’t think putting early-stage tech on vehicles that weigh up to around 80,000 lbs (more, actually, with the appropriate over-sized permits) is a good idea. That’s 20 to 30 times heavier than passenger vehicles, when the passenger-vehicle runs aren’t exactly… smooth.

I also don’t think putting self flying AI on fully loaded commercial jets is a great idea either… We have AI that can perform surgical assistance too, but if we’re going to have them do that sort of work solo, I would imagine we wouldn’t have them perform the most complicated surgeries we can come up with.

It’s not the idea of AI that bothers me. It’s the stage we’re at with it and why we’re putting it on the more dangerous vehicles now.

That simplistic calculation is possible but useless. The AI would also need to predict slick roads from rain or ice and how they affect stopping distance, as well as calculate the stopping times of the cars in front, which may or may not have ABS, may have an extra ton of cargo in the trunk, etc. How about identifying and predicting every moving vehicle that could move into your lane and violate that safe stopping distance?

Operating with a 100% safe, worst-case stopping distance will guarantee you are always slowing down to get further back from cars cutting in front, because your AI will want to leave soooooooo much space.
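To put rough numbers on that point: using the same idealized stopping formula, a planner that assumes worst-case ice instead of trusting a measured dry-road friction estimate demands a gap several times larger. The friction values here are illustrative assumptions, not real data:

```python
# Following gap demanded by a worst-case planner vs. one that trusts a
# measured friction estimate. The mu values are illustrative assumptions:
# ~0.7 for a dry road, ~0.15 for ice.

G = 9.81  # m/s^2

def required_gap(speed_mps: float, mu: float, reaction_s: float = 0.1) -> float:
    """Reaction distance plus idealized braking distance v^2 / (2 * mu * g)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * mu * G)

if __name__ == "__main__":
    v = 31.0  # roughly 70 mph, in m/s
    print(f"measured dry road (mu=0.7):  {required_gap(v, 0.7):.0f} m")
    print(f"worst-case ice    (mu=0.15): {required_gap(v, 0.15):.0f} m")
```

Under these assumed numbers the worst-case gap is more than four times the dry-road one, which is the complaint exactly: a gap that large is an open invitation for other cars to cut in, forcing the AI to slow down again and again.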

There’s no way self-driving passenger cars will be programmed to kill the passengers. The simplistic idea that they will minimize casualties is silly. “Minimize casualties” is basically not the rule for anything. If I’m attacked by 6 men with knives and I shoot them all, that’s not considered murder. If I run a school bus off the road because I’m trying to avoid a drunk driver going the wrong way or an avalanche that suddenly threatened me people will blame the external force, not me, even though I will feel pretty horrible about being involved. Same is true if bad weather causes me to lose control. No one will tell me, or say behind my back, “you should have died rather than risk those kids!” The parents will wish I had, but it just isn’t what society expects. The rule we actually use is mostly about Pareto efficiency - how can we improve things for as many people as possible without hurting anyone?

The idea that automation must be perfect makes as much sense as the idea that these vehicles must have a perfect safety record and be completely error-free.

The basic answer to all these “tough questions” is: if there’s a clear-cut answer, the system will do that; if not, the developers will choose and be just as correct as any driver choosing the same option.

Have you seen the internet lately?

The difficulty is not in gathering information, it’s in synthesizing it into an accurate picture of the world. That isn’t on the computers. It’s on the programmers, and they’re just as fallible as human drivers.

This reminds me of how, while I was in Japan, I kept hearing that Japan’s master plan for the fact that their younger generation is so much smaller than the existing one, combined with a refusal to open up immigration restrictions to allow incoming workers, was to give all the menial jobs to robots. After that I went to a tech demonstration of some kind in Tokyo and watched a robot fail repeatedly to climb stairs.

I’m sure we’ll get there someday, but before we all get our Rosies to help wash dishes, vacuum floors and run errands for us, we might want to help them climb stairs… after that… descend them.

I for one look forward to American Truck Simulator 3, where the game not only drives my truck for me, but keeps playing typewriter noises on the radio because it’s what the AI finds the most soothing to its operation.

Actually, the game might be ‘played’ from the viewpoint of the back bumper, because the only place for a human at that point will be hanging off the tailhook, trying to hold on until we get past the overhead Skynet drones.

Sure, there will always be bugs, and the liability aspect will be what holds this back as long as possible. Bugs were what caused that famous Tesla Autopilot crash where the sensors didn’t notice the truck passing in front of it due to the reflection and coloring. Even with that, though, you can find much more evidence just on YouTube of Autopilot preventing an accident on a highway than causing one.

Self-driving systems won’t ever be perfect. Accidents will happen, and some of them will be bad. That being said, those same really bad accidents already happen with human drivers making mistakes. Until we have a safer means of transportation that doesn’t rely on several-ton vehicles moving at high speeds, I look forward to the day when those are controlled by automated systems with failsafes instead of 100% by humans, because I do believe that in 10-15 years they can be safer than human drivers.