Interestingly, due to the required reporting by Waymo and Cruise, we effectively have a highly accurate record of the ability of normal drivers. Obviously, these records may be slightly biased by the driving style of a driverless car; when stopped, however, they are indistinguishable from cars with drivers. Waymo was hit 17 times while stationary, compared to hitting 2 stationary vehicles—this is a vast improvement in safety!
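A quick sign test makes that 17-vs-2 asymmetry concrete. This is only a sketch: it treats each stationary-vehicle crash as independent and assumes a symmetric null in which Waymo is equally likely to be the hitter or the hit party, which real traffic surely violates to some degree.

```python
from math import comb

# 19 crashes involving a stationary vehicle: Waymo was the stationary
# (hit) party 17 times, and the moving (hitting) party 2 times.
times_hit = 17
times_hitting = 2
n = times_hit + times_hitting

# Under a symmetric null (Waymo no more or less likely to be the hitter
# than anyone else), each crash is a fair coin flip. One-sided p-value
# for Waymo being the hitter in 2 or fewer of the 19 crashes:
p = sum(comb(n, k) for k in range(times_hitting + 1)) / 2 ** n
print(f"one-sided p-value under symmetric null: {p:.5f}")
```

Even with these small counts, an imbalance that lopsided is very unlikely under the symmetric null, though the test says nothing about the confounders other commenters raise.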
Nice post. The CMU professor Philip Koopman brings more rigor to comparing AVs to humans and deciding when to launch. He wrote a book on it! If you're looking to dive deeper, check out this one-hour talk on YouTube: https://m.youtube.com/watch?v=UTdR_HE3DDw
I'll check it out, thanks!
I think that while you are definitely considering this, you still aren't considering the heterogeneity of human driver behavior seriously _enough_. Waymo/Cruise vehicles are (a) not driving in rural settings, (b) not driving in bad weather (mostly; SF and Phoenix don't have a lot of bad weather), (c) always expensive, late-model cars, and (d) driven at fairly low speeds. Combined with the factors you do mention, the human crash rate, and even more so the human serious-injury rate, for those conditions is most likely very low, but crucially we have no real idea what it is. Giving every driver in the US a Jaguar I-Pace would almost certainly reduce the accident rate a bunch too.
Of course, how relevant that is depends on what question we're trying to ask about self-driving (eg, is Waymo increasing the safety of ride-hail in SF?). The Waymo study of high-severity accidents in Phoenix is useful for that, but there are still a lot of confounders.
I think you might be right but it's a tricky situation because policymakers have to make a decision with the information that's available.
My sense is that SF has a lot more crashes per mile, but those crashes tend to be lower-speed and hence less deadly. However, this is somewhat offset by a higher rate of pedestrian deaths. I don't think we have data for this in SF, but you can get some sense of it by looking at DC, a "state" that's basically all urban. DC's deaths per 100M miles traveled is only modestly below the national average. So I think these two effects basically cancel each other out.
But I agree that there's a lot of uncertainty. I think it's interesting to just look at the ratio of who is at fault for accidents. It looks to me like other cars crash into Waymo and Cruise vehicles a lot more than the AVs crash into other vehicles. Maybe we'll find out that this is true for low-severity crashes but not high-severity crashes, but in a situation of limited evidence it seems to be a plus for Waymo and Cruise.
I agree that it's hard because we have to look at the data we have which is heavily obscured by these issues.
But my point about heterogeneity applies to the at-fault data too -- it's very possible (and I think likely) that almost every driver is crashed into less than they crash into other vehicles. Similarly, Waymo vehicles almost certainly run red lights less often than average, but if you look at the places with public ticket data it seems like running red lights is concentrated among a few repeat offenders. And so data based on miles driven and incident frequency is fundamentally unable to answer the question we want to ask, which is whether replacing a particular vehicle or trip with an AV is safer.
Along these lines, if this technology is being used to replace taxis I think the most relevant comparison would be their driving safety compared to the average taxi driver.
Yes, and the study that Tim mentioned by Cruise about ride hail in SF suggests they're not bad in comparison right now, which is heartening, but obviously that's a smaller sample.
One consideration that I haven't seen discussed in safety articles about self-driving cars is induced demand. If self-driving cars are safer than human drivers but still substantially more dangerous than transit, and the availability of self-driving cars causes people to travel more passenger miles, the absolute number of car crash fatalities could still rise.
Let's say the advent of anti-lock brakes led to more people feeling comfortable driving in the rain/snow, due to decreased fatalities per mile traveled, but the absolute number went up because of the increased miles traveled. We still shouldn't ban anti-lock brakes, right?
If the induced demand from anti-lock brakes were large enough that fatalities increased on net, then some public policy response would be appropriate, although there may be better options than banning anti-lock brakes.
The induced demand from self driving cars is likely to be much higher than for anti-lock brakes. People can work, watch TV or sleep while a fully self-driving car is driving. This will allow them to drive in a lot of cases they couldn't previously.
I think there are steps we should take to make human driving safer (speed cameras, better drunk-driving enforcement, etc.), and also that we should push for a more stringent safety standard for self-driving cars than just "safer than human drivers." At a minimum, the standard should be safer than a human driver who is sober, not speeding, and not talking on their cell phone. But considering induced demand, an even higher standard may be appropriate.
I do think that self driving car development should be encouraged because it has the potential to bring huge safety and transportation benefits. But since driving has large negative externalities, induced demand is a huge potential problem that deserves policy consideration. A lot of the problems with our transportation system today are because induced demand wasn't properly considered when roads for cars were being developed.
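The induced-demand worry can be put in back-of-the-envelope form. Every number below is an illustrative assumption, not an estimate:

```python
# Illustrative arithmetic only: these figures are made up to show how
# "safer per mile" can still mean more deaths in total.
human_rate = 1.35          # fatalities per 100M vehicle miles (ballpark US figure)
av_rate = human_rate / 2   # assume AVs are twice as safe per mile
miles_before = 100         # index of vehicle miles traveled today
miles_after = 300          # assume induced demand triples miles traveled

deaths_before = human_rate * miles_before / 100
deaths_after = av_rate * miles_after / 100
print(deaths_before, deaths_after)
```

Under these made-up inputs, halving the per-mile rate while tripling miles raises absolute fatalities by 50%, which is the scenario that would justify a stricter-than-human safety standard.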
See for example work on airbags by George Hoffer (Testing for Offsetting Behavior and Adverse Recruitment Among Drivers of Airbag-Equipped Vehicles): "Earlier studies reported that an insurance industry index of personal-injury claims rose after automobiles adopted driver's side airbags and that drivers of airbag-equipped vehicles were more likely to be at fault in fatal multivehicle accidents. These findings can be explained by the offsetting behavior hypothesis or by at-risk drivers systematically selecting vehicles with airbags (i.e., adverse recruitment). We test for offsetting behavior and adverse recruitment after airbag adoption using a database containing information on fatal accidents including information on drivers' previous records and drivers' actions that contributed to the occurrence of the accident. Further, we reexamine the personal injury claims index data for newly airbag-equipped vehicles and show that the rise in the index after airbag adoption may be attributable to moral hazard and a new vehicle ownership pattern."
I agree with you that Waymo seems to be doing a better job than Cruise. From records for yearly miles driven with and without a test driver, reports filed, news articles, and responses from company leaders, the overall picture appears to be that Waymo has proceeded more gradually while Cruise has less regard for their impact on others.
On the other hand, you’re giving short shrift here to the difference in conditions under which these companies’ miles have been accumulated versus human drivers’. Both companies have been heavily restricted in their operation with respect to location, time, speed, and weather, while human drivers are not. We might find a far better safety record for human drivers in city conditions if we lowered speed limits and gave substantial fines for speed violations, as discussed in this article comparing driving in Finland with driving in the US. (https://heatmap.news/politics/helsinki-cars-pedestrians-bikes-finland ) With an increase in driverless vehicles we are likely to find new classes of safety issues, such as those that arise around autonomous vehicle confusion at construction or emergency scenes. While the cars in those situations may not actively hit someone, they can nonetheless be responsible for worse outcomes for those affected - consider this report filed with SF regarding a delay to an ambulance which a Cruise vehicle caused this August. (https://journa.host/@cfarivar/110985056243205061 which links to https://www.forbes.com/sites/cyrusfarivar/2023/08/30/cruise-robotaxis-waymo-san-francisco-firefighters )
That last relates to two big issues I see with the entire discussion around vehicle safety. First, we talk about the safety of a car or of a driver without considering that safety is a system property. It is not the car or the driver that is “safe” but they are able to operate safely in some location under some set of conditions. None of us is truly safe driving in a blizzard; we are even less safe if the side of the road we’re driving in a blizzard is a cliff. We blame drivers for accidents when the road, signage, or vehicle have design flaws. And second, vehicle safety is considered only with respect to vehicle occupants, despite those vehicles having an impact on people who are not in them. I was surprised when I read the CPUC and DMV safety requirements for Cruise and Waymo because the focus was almost entirely on passengers. To me, that seems like dereliction of duty; our regulatory agencies should be considering safety for all, not just safety for those in vehicles. That’s a system-wide regulatory problem, not just one for driverless cars, but the issues arising in SF are pointing up some of the shortcomings.
Isn't this argument (that we can be relatively confident they're safer than humans already) pretty sensitive to the actual numbers around "how much safer/less safe is SF driving than the weighted average of human miles" and "what fraction of accidents are caused by drunk people/teenagers/road-raging people/old people"? Even for Waymo, the expected number of "serious accidents" under the null of no-better-than-humans is below 10 (and since severity is so heavily correlated with speed, I'd guess that SF is better than average?)
A much more minor point, but I don't think "the rear-ending driver is always at fault" works as a way of describing the world in big cities, even if that's how the law works
Yes, there is a significant amount of uncertainty, which is why I headlined the article "may already be safer" rather than "are already safer." We need more data—both more miles of Waymo and Cruise driving and better statistics on crashes involving only human drivers.
With that said, I think it's really striking how much more often other cars hit Waymos than vice versa. I think you're right that we can't assume the front car in a rear-end crash is blameless, but if you read the details of those crashes a lot of them were clearly not the Waymo's fault (for example, the car behind stopped before moving forward and crashing into the Waymo).
So I think Waymo is probably a lot better than the average human driver but might be "only" as good as an average driver. And I think Cruise is probably a little better than the average human driver but might be a little worse.
I think it's super clear that Waymo and Cruise are both a lot better than a drunk driver or a teenager in his first few weeks behind the wheel. So even if self-driving cars were a little worse than the average alert 45-year-old, I think there'd probably still be a safety benefit to having them on the roads because I bet a lot of drunk people and teenagers will use them.
Great article. I agree with you, broadly. I do concur, as you assert, that we don't know enough yet: we don't have good data on human drivers driving in the same domain (e.g. same areas of SF on similar roads and similar times) to know the real answer. (And, ironically, it is really something to read Waymo's paper and conclude that we could all eliminate fatalities by just driving the speed limit, not running red lights, and watching for unprotected left turns -- all these billions spent to ensure adoption of 3 behaviors that any sane driver would agree are sensible and feasible anyway!) But one statistical nit to pick: you and others emphasize how many times Way or Cru vehicles are HIT versus them HITTING other cars. And you compare W + C accident rates to human accident rates. But for true apples to apples, shouldn't we adjust the human crash rate for "times I was hit versus me hitting someone else," too? If I think of (danger: anecdotal evidence!) how many accidents my acquaintances have been in, many (half?) are in the category of "I was hit." If we give AVs a statistical "pass" for situations where they are hit, shouldn't we do the same for humans? I guess I am saying safe driving is not just an offensive skill ("I won't hit anyone") but a defensive one ("I will avoid being hit."). You glancingly refer to this with your "most crashes involve two cars" comment, but I'd like to see someone explicitly adjust for this. Maybe, even, we will find out that AVs are much safer than humans, but that is the sum of being twice as safe in offensive situations but only half as safe in defensive ones.
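The adjustment this comment asks for (give human drivers the same "I was hit" pass that AVs get) can be sketched in a few lines. All crash counts, shares, and mileages below are made up for illustration; the point is the symmetry of the adjustment, not the numbers:

```python
def at_fault_rate(total_crashes, share_got_hit, miles):
    """Crashes where this driver hit someone, per million miles driven."""
    return total_crashes * (1 - share_got_hit) / (miles / 1e6)

# Hypothetical AV record: 2M miles, 20 crashes, 80% were "we got hit".
av = at_fault_rate(20, 0.80, 2_000_000)
# Hypothetical human record: 4 crashes per 1M miles, half were "I got hit".
human = at_fault_rate(4, 0.50, 1_000_000)
print(av, human)  # the gap shrinks once both sides get the same "pass"
```

In this made-up case the raw rates differ by 2.5x but the at-fault rates come out even, which is exactly why an explicit adjustment on both sides matters before declaring a winner.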
“Waymo is probably a lot better than the average human driver but might be "only" as good as an average driver.”
Should this be reversed? Your belief is that Waymo is worse than the average human driver (because the human badness is in the left tail) but the waymo mile is more safe than the average human mile?
I think you're just parsing the sentence wrong. I'm not saying both things could be true simultaneously. I'm saying that I think Waymo is probably better than average, but it's possible I'm wrong and Waymo is only average.
Thanks for sharing. I am not sure that one crash for every 60,000 miles on average sounds very reassuring. You do make a good point though on why the experiment should keep going.
I think people are more likely to collide with a self-driving vehicle with no one in the driver's seat. I think curiosity and malintent make them targets.
Driverless cars are almost undoubtedly safer drivers than me, not due to poor skill but the opposite. I am a very skilled driver and, MOST of the time, a defensive one.
But when someone drives dangerously and it sets me off and they react poorly to my warnings- I become an insanely dangerous, aggressive, all round horrible driver. I doubt you've ever seen someone get out of their vehicle at red lights more than me. I think I'm addicted to the feeling now. The road is far better off without me than with me (well that's true about everyone but this isn't an article on why you should take public transit and bike lol).
Self driving cars can't come soon enough. I'd volunteer for a program like that in a heartbeat. It's why I try to cycle whenever I can, not just for the environment but because I don't stay as angry for as long. Unfortunately driving on the road with a bike is quite dangerous, so I don't do it super often and when I see a cyclist on the road and I'm not in a rush (not frequently, thanks AutDHD) I'll throw my cruise control on behind them, throw them a thumbs up and just enjoy being a big meat shield for them. People bully other vehicles less than cyclists so I like to think that I'm keeping them safer.
I have almost 100% opposite experience than you. Been getting around almost exclusively on e-bike for about 3 years. Rate at which vehicles fuck with bicycles: about 1/20 - 1/50
Average # of rides it would take to get killed by a driver's negligence (this is a different set of drivers from the 'let's fuck with the bicycle' crowd) if I didn't pretend I was completely invisible: 3-5
Part of it is where I am, trying to run bikes off the road is considered a sport to some around here.
Oh I totally agree. I've had so much abuse thrown towards me while biking. I'll yell back for them to stop and face me like a man but they're all cowards or have the foresight to know they're about to have their yolk scrambled with a U-lock.
Please let me know when you next plan to hit the road! (grin) I see a new VC opportunity: alerts pushed to one's car when known Road Rage Offenders are nearby!!!
If they don’t already, couldn’t these services utilize remote human intervention in some of these non-destructive corner cases? For example, say in the case of an accident, obstruction, or traffic-jam-causing scenario, an alert is triggered and some remote human takes control of the vehicle temporarily (to get out of an intersection, to the side of the road, etc.)? And so you’d have the support of one human backup driver for every X self-driving vehicles on the road?
Much promise imho for the significant number of A to B routes that are highly standardized, and less standardized routes could be geofence restricted or given more precautions…
Yes, both Waymo and Cruise do that. If the cars get confused, they'll "phone home" and get remote guidance from a human operator. The human doesn't actually drive the car but will give "hints" like "go to the left of those cones."
Who is considered at fault in these accidents? The only thing I could find: "In California, the law explicitly states that 'an operator of an autonomous vehicle is the person who is seated in the driver's seat, or if there is no person in the driver's seat, causes the autonomous technology to engage.'" ... Copyrighted 2023, but I hope it is out of date. If true, though, the companies or a third party could claim the hiring passenger is liable.
Here's a (heuristic, but working from what we have) argument that this is probably underrating humans driving comparable miles: death rates per 100M miles vary a LOT (more than 3x!) between states:
In particular MA (which features an unusually high number of modern cars driving slowly in cities) is the absolute lowest. If SF driving is comparable to MA (and there's no reason to think of MA as a lower bound, since 1/3 of it by pop is outside the Boston metro), that could be a significantly different baseline
I’m puzzled by your conclusion that letting Cruise operate is a “close call.” It seems like it’s better than human drivers, and even if it was only even with them we benefit from the technology improving. This seems like a sop to the anti-progress forces, but it just makes their positions seem stronger than they are.
I said it's a closer call, not that it's a close call. And I said it's a close call on whether Cruise is safer than a human, not on whether to allow the experiment to continue. As I said in the piece I favor allowing the experiment to continue and would like to see the DMV reverse its halving of Cruise's fleet.
I am a little nervous about Cruise expanding to a dozen more cities all at the same time before it has a clearer safety case. So I wouldn't be sad if Cruise faced some short-term resistance in some of the new cities it has announced in recent months.
Part of the furor about AVs (this does not EXCUSE trying to cut them back; I am only seeking an EXPLANATION of the desire to do so) is that for years if not decades Waymo and Cruise and Tesla et al. have been touting that these things are so incredibly safe one would (I infer) have to be an idiot to stand in their way (bad choice of words by me). The relentless repetition of "95% of accidents are human error," the relentless repetition of "Why do we allow CARnage on the roads," etc. They completely and utterly pushed aside the OTHER arguments for AVs (convenience, can work in the car, the disabled and elderly would greatly benefit) and hammered away on safety. So THEY set the bar high, and thus (in part) the backlash when they fall short. They set the terms of the AV debate ("it's all about safety"), and now they have to face the consequences. IMHO. (Personally, I see AVs in various domains as inevitable, especially if the price of AV tech continues to fall and I can own my OWN PERSONAL AV - why pay Waymo every time I want to go somewhere - so here I am only commenting on why I think some people are ticked off.)
I meant they should not be surprised by the backlash. If you launch into a market effectively promising much better safety, and in the initial roll-out much better safety is not immediately evident, you have to expect to catch a lot of flak. I didn't mean "consequences" in terms of legal or political action or anything like that. And I agree that the safety record is "quite good." I just meant that by framing the goal of AVs as safety-safety-safety, any blip in the safety record causes (predictable) criticism. Remember when Google first launched its little pod cars? There was a wonderful ad they ran showing a blind person riding around in one, overjoyed that finally he had the mobility otherwise denied to him. And at the same time there was talk by AV execs at AV conferences about productivity gains ("you can do emails while commuting!"). But the positioning of AVs swung from convenience AND safety AND benefiting the handicapped to just safety-safety-safety, at least IMHO. It's similar to what Tesla did in EVs. We were moving slowly along with EVs attuned to the obvious use case: small urban runabouts (no range issues, lots of charging: see the Nissan Leaf). Then Tesla burst on the scene and reframed the EV value proposition: huge range from huge batteries, starting a range arms race. Is this a bad thing? No! (Though if we look at damage to the environment from battery mining, maybe it is better to make 5 100-mile EV runabouts than 1 500-mile sedan, as the former will likely see much better utilization per pound of lithium.) But the Tesla reframing made the EV debate "all about range," and so any EV maker without huge range flops. (Whereas in China short-range EVs sell very, very well.) It's not a bad thing or a good thing, but when a company or an industry sets the terms of the debate, then they have to live with the debate that ensues. That is all I meant.
Thank you -- well written and clear; quite insightful. I’m in the Middle East; many different cultures and approaches to driving on display here every day. Absolutely no question AVs would make driving safer, just by dint of reducing variation (speed, optional use of turn signals, following distances). Probably also faster travel times if all the vehicles were networked to cooperate on flow and merging, etc.
Would be boring to be honest. Proper programming and experience would lead to the perfect driver who never needs to pee, never gets distracted, never has to drink, never gets stressed out, etc. Allowing AI into sports will ruin them IMO.
Interestingly, due to the required reporting by Waymo and Cruise, we effectively have a highly accurate record of the ability of normal drivers. Obviously, these records may be slightly biased by the driving style of a driverless car; however, but when stopped they are indistinguishible from cars with drivers. Waymo was hit 17 times while stationary, compared to hitting 2 stationary vehicles—this is a vast improvement in safety!
Nice post. The CMU professor Phillip Koopman brings more rigor to comparing AVs to humans and deciding when to launch. He wrote a book on it! If you’re looking to dive deeper, check out this 1hr talk on YouTube https://m.youtube.com/watch?v=UTdR_HE3DDw
I'll check it out thanks!
I think that while you are definitely considering this, you still aren't seriously _enough_ considering the heterogeneity of human driver behavior. Waymo/Cruise vehicles are (a) not driving in rural settings (b) not driving in bad weather (mostly; SF and Phx don't have a lot of bad weather) (c) always driving expensive, late-model cars (d) are driven at fairly low speeds. Combined with the factors you do mention, the human crash rate and even more the human serious injury rate for that situation is most likely very low, but crucially we have no real idea what it is. Giving every driver in the US a Jaguar I-Pace would almost certainly reduce the accident rate a bunch too.
Of course, how relevant that is depends on what question we're trying to ask about self-driving (eg, is Waymo increasing the safety of ride-hail in SF?). The Waymo study of high-severity accidents in Phoenix is useful for that, but there are still a lot of confounders.
I think you might be right but it's a tricky situation because policymakers have to make a decision with the information that's available.
My sense is that SF has a lot more crashes per mile, but those crashes tend to be lower-speed and hence less deadly. However, this is somewhat offset by a higher rate of pedestrian deaths. I don't think we have data for this in SF, but you can get some sense of it by looking at DC, a "state" that's basically all urban. DC's deaths per 100M miles traveled is only modestly below the national average. So I think these two effects basically cancel each other out.
But I agree that there's a lot of uncertainty. I think it's interesting to just look at the ratio of who is at fault for accidents. It looks to me like other cars crash into Waymo and Cruise vehicles a lot more than the AVs crash into other vehicles. Maybe we'll find out that this is true for low-severity crashes but not high-severity crashes, but in a situation of limited evidence it seems to be a plus for Waymo and Cruise.
I agree that it's hard because we have to look at the data we have which is heavily obscured by these issues.
But my point about heterogeneity applies to the at-fault data too -- it's very possible (and I think likely) that almost every driver is crashed into less than they crash into other vehicles. Similarly, Waymo vehicles almost certainly run red lights less often than average, but if you look at the places with public ticket data it seems like running red lights is concentrated among a few repeat offenders. And so data based on miles driven and incident frequency is fundamentally unable to answer the question we want to ask, which is whether replacing a particular vehicle or trip with an AV is safer.
Along these lines, if this technology is being used to replace taxis I think the most relevant comparison would be their driving safety compared to the average taxi driver.
Yes, and the study that Tim mentioned by Cruise about ride hail in SF suggests they're not bad in comparison right now, which is heartening, but obviously that's a smaller sample.
One consideration that I haven't seen discussed in safety articles about self-driving cars is induced demand. If self-driving cars are safer than human drivers but still substantially more dangerous than transit, and the availability of self-driving cars causes people to travel more passenger miles, the absolute number of car crash fatalities could still rise.
Let's say the advent of anti-lock brakes led to more people feeling comfortable driving in the rain/snow, due to decreased fatalities per mile traveled, but the absolute number went up because of the increased miles traveled. We still shouldn't ban anti-lock brakes, right?
If the induced demand from anti-lock brakes were large enough that fatalities increased on net, than some public policy response would be appropriate, although there may be better options than banning anti-lock brakes.
The induced demand from self driving cars is likely to be much higher than for anti-lock brakes. People can work, watch TV or sleep while a fully self-driving car is driving. This will allow them to drive in a lot of cases they couldn't previously.
I think there are steps we should take to make human driving safer, (speed cameras, better drunk driving enforcement, etc), and also that we should push for a more stringent safety standard on self-driving cars than just safer than human drivers. At a minimum, the standard should be safer than a human driver who is sober, not speeding and not talking on their cell phone. But considering induced demand, an even higher standard may be appropriate.
I do think that self driving car development should be encouraged because it has the potential to bring huge safety and transportation benefits. But since driving has large negative externalities, induced demand is a huge potential problem that deserves policy consideration. A lot of the problems with our transportation system today are because induced demand wasn't properly considered when roads for cars were being developed.
Glenn Mercer
just now
See for example work on airbags by George Hoffer (Testing for Offsetting Behavior and Adverse Recruitment Among Drivers of Airbag-Equipped Vehicles): "Earlier studies reported that an insurance industry index of personal-injury claims rose after automobiles adopted driver's side airbags and that drivers of airbag-equipped vehicles were more likely to be at fault in fatal multivehicle accidents. These findings can be explained by the offsetting behavior hypothesis or by at-risk drivers systematically selecting vehicles with airbags (i.e., adverse recruitment). We test for offsetting behavior and adverse recruitment after airbag adoption using a database containing information on fatal accidents including information on drivers' previous records and drivers' actions that contributed to the occurrence of the accident. Further, we reexamine the personal injury claims index data for newly airbag-equipped vehicles and show that the rise in the index after airbag adoption may be attributable to moral hazard and a new vehicle ownership pattern. "
I agree with you that Waymo seems to be doing a better job than Cruise. From records for yearly miles driven with and without a test driver, reports filed, news articles, and responses from company leaders, the overall picture appears to be that Waymo has proceeded more gradually while Cruise has less regard for their impact on others.
On the other hand, you’re giving short shrift here to the difference in conditions under which these companies miles have been accumulated vs human drivers. Both companies have been heavily restricted in their operation with respect to location, time, speed, and weather while human drivers are not. We might find a far better safety record for human drivers in city conditions if we lowered speed limits and gave substantial fines for speed violations, as discussed in this article comparing driving in Finland with driving in the US. (https://heatmap.news/politics/helsinki-cars-pedestrians-bikes-finland ) With an increase in driverless vehicles we are likely to find new classes of safety issues, such as those that arise around autonomous vehicle confusion in construction or emergency scenes. While the cars in those situations may not actively hit someone, they can nonetheless be responsible for worse outcomes for those impacted - consider this report filed with SF regarding a delay to an ambulance which a Cruise vehicle caused this August. (https://journa.host/@cfarivar/110985056243205061 which links to https://www.forbes.com/sites/cyrusfarivar/2023/08/30/cruise-robotaxis-waymo-san-francisco-firefighters )
That last relates to two big issues I see with the entire discussion around vehicle safety. First, we talk about the safety of a car or of a driver without considering that safety is a system property. It is not the car or the driver that is “safe” but they are able to operate safely in some location under some set of conditions. None of us is truly safe driving in a blizzard; we are even less safe if the side of the road we’re driving in a blizzard is a cliff. We blame drivers for accidents when the road, signage, or vehicle have design flaws. And second, vehicle safety is considered only with respect to vehicle occupants, despite those vehicles having an impact on people who are not in them. I was surprised when I read the CPUC and DMV safety requirements for Cruise and Waymo because the focus was almost entirely on passengers. To me, that seems like dereliction of duty; our regulatory agencies should be considering safety for all, not just safety for those in vehicles. That’s a system-wide regulatory problem, not just one for driverless cars, but the issues arising in SF are pointing up some of the shortcomings.
Isn't this argument (that we can be relatively confident they're safer than humans already) pretty sensitive to the actual numbers around "how much safer/less safe is SF driving than the weighted average of human miles" and "what fraction of accidents are caused by drunk people/teenagers/road raging people/old people"? Even for Waymo, the expected number of "serious accidents" under the null of no-better-than-humans is below 10 (and since serious is so heavily correlated with speed, I'd guess that SF is better than average?)
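To make the small-expected-count point concrete, here is a back-of-the-envelope Poisson sketch. Every number in it (miles driven, human crash rate, observed crashes) is an illustrative assumption, not a figure from the article or from Waymo/Cruise filings:

```python
import math

# All numbers below are illustrative assumptions, not real figures.
miles_driven = 5_000_000      # assumed driverless miles logged
human_rate = 1 / 1_000_000    # assumed serious crashes per human-driven mile

# Expected serious crashes under the null "no better than human drivers"
expected = miles_driven * human_rate

def poisson_cdf(k: int, lam: float) -> float:
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

observed = 1  # assumed observed serious crashes
p_value = poisson_cdf(observed, expected)

print(f"Expected serious crashes under the null: {expected:.1f}")
print(f"P(observe <= {observed} | null): {p_value:.3f}")
```

With a single-digit expected count, even a seemingly clean record provides only modest statistical evidence of being safer than humans, which is exactly the commenter's concern.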
A much more minor point, but I don't think "the rear-ending driver is always at fault" works as a way of describing the world in big cities, even if that's how the law works
Yes, there is a significant amount of uncertainty, which is why I headlined the article "may already be safer" rather than "are already safer." We need more data—both more miles of Waymo and Cruise driving and better statistics on crashes involving only human drivers.
With that said, I think it's really striking how much more often other cars hit Waymos than vice versa. I think you're right that we can't assume the front car in a rear-end crash is blameless, but if you read the details of those crashes a lot of them were clearly not the Waymo's fault (for example, the car behind stopped before moving forward and crashing into the Waymo).
So I think Waymo is probably a lot better than the average human driver but might be "only" as good as an average driver. And I think Cruise is probably a little better than the average human driver but might be a little worse.
I think it's super clear that Waymo and Cruise are both a lot better than a drunk driver or a teenager in his first few weeks behind the wheel. So even if self-driving cars were a little worse than the average alert 45-year-old, I think there'd probably still be a safety benefit to having them on the roads because I bet a lot of drunk people and teenagers will use them.
Great article. I agree with you, broadly. I do concur, as you assert, that we don't know enough yet: we don't have good data on human drivers driving in the same domain (e.g. same areas of SF, on similar roads, at similar times) to know the real answer. (And, ironically, it is really something to read Waymo's paper and conclude that we could all eliminate fatalities by just driving the speed limit, not running red lights, and watching for unprotected left turns -- all these billions spent to ensure adoption of 3 behaviors that any sane driver would agree are sensible and feasible anyway!) But one statistical nit to pick: you and others emphasize how many times Waymo or Cruise vehicles are HIT versus them HITTING other cars. And you compare Waymo and Cruise accident rates to human accident rates. But for true apples to apples, shouldn't we adjust the human crash rate for "times I was hit versus me hitting someone else," too? If I think of (danger: anecdotal evidence!) how many accidents my acquaintances have been in, many (half?) are in the category of "I was hit." If we give AVs a statistical "pass" for situations where they are hit, shouldn't we do the same for humans? I guess I am saying safe driving is not just an offensive skill ("I won't hit anyone") but a defensive one ("I will avoid being hit"). You glancingly refer to this with your "most crashes involve two cars" comment, but I'd like to see someone explicitly adjust for this. Maybe, even, we will find out that AVs are much safer than humans, but that this is the sum of being twice as safe in offensive situations but only half as safe in defensive ones.
Anyway, great article.
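One way to see what the offensive/defensive adjustment the commenter asks for might look like. Every rate below is a made-up placeholder, chosen purely to illustrate the decomposition, not real crash data:

```python
# All rates are hypothetical, per million vehicle miles, for illustration only.
human_total = 4.0           # assumed total human crash involvements
human_at_fault_share = 0.5  # assume half of human involvements are "I hit someone"

av_at_fault = 0.3           # assumed "AV hit someone" rate
av_was_hit = 1.7            # assumed "AV was hit" rate

# Split human involvements into offensive (at fault) and defensive (was hit):
human_at_fault = human_total * human_at_fault_share
human_was_hit = human_total * (1 - human_at_fault_share)

# Naive comparison lumps everything together and hides the decomposition:
naive_ratio = (av_at_fault + av_was_hit) / human_total

# Apples to apples: compare at-fault to at-fault, and was-hit to was-hit.
offensive_ratio = av_at_fault / human_at_fault  # AV looks much safer offensively
defensive_ratio = av_was_hit / human_was_hit    # AV looks only slightly safer defensively

print(naive_ratio, offensive_ratio, defensive_ratio)
```

Under these invented numbers the naive comparison says "half the human rate," while the decomposition shows the AV being far better at not hitting others but nearly as often hit as a human, the asymmetry the commenter suspects.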
“Waymo is probably a lot better than the average human driver but might be "only" as good as an average driver.”
Should this be reversed? Your belief is that Waymo is worse than the average human driver (because the human badness is in the left tail) but the waymo mile is more safe than the average human mile?
No, that's what I meant.
Since the average human driver is safer than average*, how would the former statement be possible?
*assuming an instantaneous measurement this is clearly true, since the average driver isn’t drunk right now
I think you're just parsing the sentence wrong. I'm not saying both things could be true simultaneously. I'm saying that I think Waymo is probably better than average, but it's possible I'm wrong and Waymo is only average.
Ah, I was confused because I think the negation of _both_ is probably true
Thanks for sharing. I am not sure that one crash for every 60,000 miles on average sounds very reassuring. You do make a good point though on why the experiment should keep going.
I think people are more likely to collide with a self-driving vehicle with no one in the driver's seat. I think curiosity and malicious intent make them targets.
Driverless cars are almost undoubtedly safer drivers than me, not due to poor skill but the opposite. I am a very skilled driver and, MOST of the time, a defensive one.
But when someone drives dangerously and it sets me off, and they react poorly to my warnings, I become an insanely dangerous, aggressive, all-around horrible driver. I doubt you've ever seen someone get out of their vehicle at red lights more than me. I think I'm addicted to the feeling now. The road is far better off without me than with me (well, that's true about everyone, but this isn't an article on why you should take public transit and bike lol).
Self-driving cars can't come soon enough. I'd volunteer for a program like that in a heartbeat. It's why I try to cycle whenever I can, not just for the environment but because I don't stay as angry for as long. Unfortunately, driving on the road with a bike is quite dangerous, so I don't do it super often, and when I see a cyclist on the road and I'm not in a rush (not frequently, thanks AutDHD) I'll throw my cruise control on behind them, throw them a thumbs up, and just enjoy being a big meat shield for them. People bully other vehicles less than cyclists, so I like to think that I'm keeping them safer.
I have almost 100% opposite experience than you. Been getting around almost exclusively on e-bike for about 3 years. Rate at which vehicles fuck with bicycles: about 1/20 - 1/50
Average # of rides it would take to get killed by a driver's negligence (this is a different set of drivers from the 'let's fuck with the bicycle' crowd) if I didn't pretend I was completely invisible: 3-5
Part of it is where I am, trying to run bikes off the road is considered a sport to some around here.
Oh I totally agree. I've had so much abuse thrown towards me while biking. I'll yell back for them to stop and face me like a man but they're all cowards or have the foresight to know they're about to have their yolk scrambled with a U-lock.
Please let me know when you next plan to hit the road! (grin) I see a new VC opportunity: alerts pushed to one's car when known Road Rage Offenders are nearby!!!
TLDR: Skilled driver =/= good driver.
Self-driving vehicles are almost certainly far better drivers than the average motorist.
What can we tell from a few taxis in San Francisco?
If they don’t already, couldn’t these services utilize remote human intervention in some of these non-destructive corner cases? For example, in the case of an accident, an obstruction, or a traffic-jam-causing scenario, an alert is triggered and a remote human takes control of the vehicle temporarily (to get out of an intersection, to pull to the side of the road, etc.). You’d then have the support of one human backup driver for every X self-driving vehicles on the road.
Much promise, imho, for the significant number of A-to-B routes that are highly standardized; less standardized routes could be geofence-restricted or given more precautions…
Yes, both Waymo and Cruise do that. If the cars get confused, they'll "phone home" and get remote guidance from a human operator. The human doesn't actually drive the car but will give "hints" like "go to the left of those cones."
I first saw this on Ars. Very interesting. The one thing I noticed? NO Tesla.
See for example work on airbags by George Hoffer (Testing for Offsetting Behavior and Adverse Recruitment Among Drivers of Airbag-Equipped Vehicles): "Earlier studies reported that an insurance industry index of personal-injury claims rose after automobiles adopted driver's side airbags and that drivers of airbag-equipped vehicles were more likely to be at fault in fatal multivehicle accidents. These findings can be explained by the offsetting behavior hypothesis or by at-risk drivers systematically selecting vehicles with airbags (i.e., adverse recruitment). We test for offsetting behavior and adverse recruitment after airbag adoption using a database containing information on fatal accidents including information on drivers' previous records and drivers' actions that contributed to the occurrence of the accident. Further, we reexamine the personal injury claims index data for newly airbag-equipped vehicles and show that the rise in the index after airbag adoption may be attributable to moral hazard and a new vehicle ownership pattern. "
Who is considered at fault in these accidents? The only thing I could find: in California, the law explicitly states that “an operator of an autonomous vehicle is the person who is seated in the driver’s seat, or if there is no person in the driver’s seat, causes the autonomous technology to engage.” ... It's copyrighted 2023, but I hope it is out of date. If true, though, the companies or a third party could claim the hiring passenger is liable.
Here's a (heuristic, but working from what we have) argument that this is probably underrating humans driving comparable miles: death rates per 100M miles vary a LOT (more than 3x!) between states:
https://worldpopulationreview.com/state-rankings/fatal-car-accidents-by-state
In particular, MA (which features an unusually high number of modern cars driving slowly in cities) is the absolute lowest. If SF driving is comparable to MA (and there's no reason to think of MA as a lower bound, since 1/3 of it by population is outside the Boston metro), that could be a significantly different baseline
I’m puzzled by your conclusion that letting Cruise operate is a “close call.” It seems like it’s better than human drivers, and even if it was only even with them we benefit from the technology improving. This seems like a sop to the anti-progress forces, but it just makes their positions seem stronger than they are.
I said it's a closer call, not that it's a close call. And I said it's a close call on whether Cruise is safer than a human, not on whether to allow the experiment to continue. As I said in the piece I favor allowing the experiment to continue and would like to see the DMV reverse its halving of Cruise's fleet.
I am a little nervous about Cruise expanding to a dozen more cities all at the same time before it has a clearer safety case. So I wouldn't be sad if Cruise faced some short-term resistance in some of the new cities it has announced in recent months.
Part of the furor about AVs (this does not EXCUSE trying to cut them back; I am only seeking an EXPLANATION of the desire to do so) is that for years, if not decades, Waymo and Cruise and Tesla et al. have been touting that these things are so incredibly safe one would (I infer) have to be an idiot to stand in their way (bad choice of words by me). The relentless repetition of "95% of accidents are human error," the relentless repetition of "Why do we allow CARnage on the roads," etc. They completely and utterly pushed aside the OTHER arguments for AVs (convenience, being able to work in the car, the great benefit to the disabled and elderly) and hammered away on safety. So THEY set the bar high, and thus (in part) the backlash when they fall short. They set the terms of the AV debate ("it's all about safety"), and now they have to face the consequences. IMHO. (Personally, I see AVs in various domains as inevitable, especially if the price of AV tech continues to fall and I can own my OWN PERSONAL AV - why pay Waymo every time I want to go somewhere - so here I am only commenting on why I think some people are ticked off.)
What do you mean by “they have to face the consequences?” Waymos record at least seems quite good so what consequences should they be facing?
I meant they should not be surprised by the backlash. If you launch into a market effectively promising much better safety, and in the initial roll-out much better safety is not immediately evident, you have to expect to catch a lot of flack. I didn't mean "consequences" in terms of legal or political action or anything like that. And I agree that the safety record is "quite good." I just meant that by framing the goal of AVs as safety-safety-safety, any blip in the safety record causes (predictable) criticism. Remember when Google first launched its little pod cars? There was a wonderful ad they ran showing a blind person riding around in one, overjoyed that finally he had the mobility otherwise denied to him. And at the same time there was talk by AV execs at AV conferences about productivity gains ("you can do emails while commuting!"). But the positioning of AVs swung from convenience AND safety AND benefiting the handicapped to just safety-safety-safety, at least IMHO. It's similar to what Tesla did in EVs. We were moving slowly along with EVs attuned to the obvious use case: small urban runabouts (no range issues, lots of charging: see Nissan Leaf). Then Tesla burst on the scene and reframed the EV value proposition: huge range due to huge batteries, starting a range arms race. Is this a bad thing? No! (Though if we look at damage to the environment from battery mining, maybe it is better to make 5 100-mile EV runabouts than 1 500-mile sedan, as the former will likely see much better utilization per pound of lithium.) But the Tesla reframing made the EV debate "all about range," and so any EV maker without huge range flops. (Whereas in China, short-range EVs sell very, very well.) It's not a bad thing or a good thing, but when a company or an industry sets the terms of the debate, they have to live with the debate that ensues. That is all I meant.
Thank you -- well written and clear; quite insightful. I’m in the Middle East; many different cultures and approaches to driving on display here every day. Absolutely no question AVs would make driving safer, just by dint of reducing variation (speed, optional use of turn signals, following distances). Probably also faster travel times if all the vehicles were networked to cooperate on flow and merging, etc.
It would be interesting to see how well a driverless car might fare in a stock car race.😇
Would be boring to be honest. Proper programming and experience would lead to the perfect driver who never needs to pee, never gets distracted, never has to drink, never gets stressed out, etc. Allowing AI into sports will ruin them IMO.