It seems like Cruise really screwed up and will probably set back a lot of autonomous vehicle progress. They deserve everything coming to them.
That being said, I'm always nervous about these sorts of things because our institutions are often not good at handling tradeoffs. An exaggerated example:
Imagine autonomous vehicles cut the vehicle-involved death rate by 2/3rds, but a significant portion of the remaining 1/3 of deaths would have been easily avoidable if a human was driving.
Do you think our regulators would handle this situation well? The public?
I completely share this concern and try not to feed into unwarranted hysteria. Injuries and deaths involving AVs get disproportionate coverage, and there's a danger this could create a perception that these vehicles are less safe than they actually are.
But this is why it's so damaging for Cruise to be caught trying to mislead the public. There will inevitably be cases where an AV is involved in a crash it couldn't have prevented, and we need to be able to believe companies when they tell us that's what happened. After October 2, people are going to be much more skeptical whenever Cruise claims there was nothing it could do to avoid a crash. And that will make it harder to push back against unwarranted criticism of the technology.
Agreed. It's so important that, when companies are given the chance to trial this incredibly important technology in a live environment, they are honest. I still believe driverless vehicles are the way forward - we just need these guys to be professional, scientific, and honest.
Yep, 100% agree. That's what I meant by "will probably set back a lot of autonomous vehicle progress".
It’s not as simple as, “our AV solution kills fewer people, so it’s logical to use it.” If your kid is killed by an AV, where is the justice? Who is held responsible? A corporation? Your child is dead, so you receive a check in the mail and some fake words of apology from a faceless entity?
What if there is no one responsible? What if the absolute best technology and implementation we have available leads to the situation I described?
Another way to look at it: how do you imagine this being different from our current world, or from the world before any autonomous vehicles?
Say engineers at a car company cut corners even though they knew there were risks. Who is responsible? What kind of justice is there?
Who is responsible when a seat belt kills someone? Does anyone get justice?
Who is responsible if autonomous vehicle technology is held back by incompetent implementation of regulations or public hysteria and a kid is killed in a situation that autonomous vehicles would have prevented?
It could be like the vaccine injury compensation fund for vaccine-related adverse events. The autonomous vehicle firms will eventually be doing a harm-reduction service, so their liabilities should be socialized.
Comments on Hacker News also suggest that Cruise cars simply drive worse than Waymo's.
“Actually, nobody suggested that there not be a coverup.”
Here is the whole challenge of implementing AI technologies in any area of human life: accountability. No autonomous device will ever have true agency, I believe. Some say they will. Well, let’s talk then about setting them loose on an unsuspecting world. But wait, do we actually think machines with true agency will be a good thing? We need to think about these things hard and long.
When AVs are part of fleets, the company that owns them will have insurance policies, so that should help. Individual owners will also have insurance policies. Figuring out who's at fault is always difficult, but all the sensing devices on AVs should help with that. It would be like many things (malpractice suits, for instance) where just passing out money will be preferable to a long trial to assign blame.
Sure. Fairly cold-blooded, but I take your point. On a less “only-costs-matter” basis, one may wonder whether human input can (or should) be wrung out of everything.
It might make it safer, if safety were the highest priority. It's pretty hard to program humanity from big-data magic we barely comprehend.
Might make it easier for someone to apologize if it didn't mean financial ruin.
What do you mean by agency? Haven't AIs already come up with tactics that no one instructed them to carry out?
An agent has the power or authority to act. Animals have power to act, but not true agency. They make choices but by instinct. Humans have conscious agency that laws can hold them accountable for exercising. By “true agency” I mean human agency, not animal or machine power to “choose” within animal or machine limits. Animal “agency” by instinct is superior to any imagined “machine agency.” It took longer to develop.
Betting the CEO will be replaced. Betting the company will need to do a big reset.
3 weeks later, Cruise's CEO is now out: https://www.forbes.com/sites/bradtempleton/2023/11/19/kyle-vogt-resigns-as-ceo-of-gms-cruise-robotaxi-unit/amp/
Thanks! Working on a piece about this now.
Looking forward to it
And now after another 3 weeks, 9 Cruise execs dismissed: https://www.reuters.com/business/autos-transportation/gms-cruise-robotaxi-unit-dismisses-nine-people-after-safety-investigation-2023-12-13/
It does seem wise to do a strong reset like this.
Do we know how a Waymo would behave in the same situation?
Perhaps we should recreate it!