Make an informed decision to first pull off the road before detaching the pedestrian? How bad a driver are you?!
I mean yeah, it went 1 car length to get off the road
With a human attached. That part is pretty important here.
I think Pyperkub’s point is that human drivers could reasonably/predictably have done either: what the AI car did, or just stopping in the middle of the road. Both have their issues. I think the key point is: what standard do we hold AI vehicles to? In either case (pull to the side or stop in the middle) I highly suspect a human driver of the 2nd car (i.e., not the Nissan that ran) would not have been cited and would not have been held liable. Do we hold the AI to a higher standard? This assumes the data would show that the AI was not aware it was dragging a human (which would also be possible for a human driver).
Yes, otherwise what’s the point?
As an ultimate goal? Sure. But setting the standard as “the same for human drivers” allows for adoption and growth and also frees people up from the task of driving.
I disagree. I think the standard should be much much higher before they are even allowed on the roads.
Otherwise why not just hire a human?
Humans are not always perfect drivers, but they are always accountable for what they do. We are in an early development time when the self-driving systems want the liability to stay with the human operator while they experiment and get better without being destroyed by the consequences of failure. I don’t know the liability plan when the self-driving systems are mature and always make the “best” choices.
It comes down to my original issue with AVs: everyone gets so impressed by what the car can see that they become defensive when you point out what it can’t see. The car obviously couldn’t see the human, or, if it could, couldn’t understand that it was a human. I’m calling it table stakes right now for an AV to be able to figure out what a human is when one is in contact with the vehicle. If it can’t do that, get it off the road.
Because why?
A self-driving car that’s as good as a human (not better, not worse, just about the same) is economically clearly better than tying up a human on a menial task.
Spoken like someone who has never been the victim of a hit-and-run accident. I’ve had two, and part of the attraction of robot cars to me is that they won’t use flight/dishonesty/violence when they’ve been involved in a collision…
If the legal system ensures it’s cheaper to have killed a victim of an accident instead of leaving them alive but crippled, you don’t think that will find its way into the value judgements built into the software?
Eh, my point here is that we have plenty of evidence of plenty of human drivers being a LOT worse in the same situation. Again, we have that evidence in this EXACT case - the hit and run driver.
Should these cars be better? Yes, but to breathlessly state that they won’t be, and that this is some insanely egregious indictment of self-driving cars is more about an agenda than it is about safe driving.
Heck, per the hit and run driver, it can be viewed as the autopilot being better than the average human, even if it wasn’t perfect. And I suspect GM/Cruise have been FAR more accountable than any human in that case. At least we know they have insurance…
Wasn’t there reporting that it wasn’t just the specific accident that caused Cruise to lose its operating license? But rather that it turns out they were concealing information from regulators?
You’re giving way too much credit to the software here. This isn’t Three Laws of Robotics stuff, with the software calibrating its response to the possibility of different kinds of injury. All it knows is “Collision Bad”.

Wasn’t there reporting that it wasn’t just the specific accident that caused Cruise to lose its operating license? But rather that it turns out they were concealing information from regulators?
Yes, I was posting it as an example disputing this statement overall:
Human drivers have licensing requirements and liability for their mistakes.
As well as an example where breathless sensational reporting skips over things which don’t fit the narrative being sold.
While I haven’t paid too much attention to the state of the entire autopilot/robot car environment (I commute to West SF, where I see some of the robot cars, and our family cars are all pre-autopilot), I do think that a LOT of the criticism gets overblown for PR agendas/clicks rather than an accurate look at where we are and where we can/should end up, especially vis-à-vis what we currently have (often fallible human drivers, sometimes VERY fallible).
If a self driving car runs over a pedestrian, or causes a crash, or damages someone’s property or public infrastructure, who is liable?
The tech needs to be so much better before it shows up on our streets. Basically it needs to be perfect, especially if it’s going to take jobs away from people.
The driver should be liable. And then he or she can sue Tesla. But the driver is the one who engaged the self driving system.

The driver should be liable. And then he or she can sue Tesla. But the driver is the one who engaged the self driving system.
That’s not even what we’re talking about… we are talking about completely driverless cars.