42 Comments
Tubthumping:

Doesn't matter. The SDC fluffers have ranted for over a decade now that SDCs will be so superior to HDCs that they could avoid even these kinds of accidents.

Whereas in engineering reality, SDCs are only viable in SDC-only traffic environments for the foreseeable future.

Hellbender:

Did you read a different article or something? This one clearly shows that SDCs are safer than HDCs in environments that have both, so why are you saying "SDCs are only viable in SDC-only traffic environments for the foreseeable future"?

Tubthumping:

Next time try actually reading what I wrote.

Hellbender:

I have, and your post was unclear at best. I am not sure how to interpret your second paragraph as anything other than a claim that SDCs are not viable in environments that still have HDCs, which is obviously false.

Tubthumping:

No it isn't unclear. It is quite clear.

They are not viable in HDC environments because the standard is higher for SDCs to meet, period.

And they can only achieve what was promised to us by the SDC fluffers for over a decade now if HDCs (and pedestrians) are not in that loop.

Hellbender:

| They are not viable in HDC environments because the standard is higher for SDCs to meet, period.

This is a claim about policy realities; your initial claim was about engineering realities. Even as a policy claim, I think it's bullshit: "SDCs are involved in serious accidents at 30% of the rate of human drivers" is absolutely good enough for legalization in more jurisdictions.

Tubthumping:

So what? Engineering realities have to match legal and public perception realities.

Rand Mart:

No, it doesn't prove that. The data on which it is based is seriously incomplete because it addresses only cases in which the SDC is damaged. It does not cover cases in which the SDC caused an accident but was able to avoid being involved in the resulting crash.

Timothy B. Lee:

Can you give me an example of the kind of crash you have in mind? It seems hard to cause a crash without being involved in it.

Hellbender:

Maybe something like car A runs a red light, causing car B to swerve to avoid A, but B swerves into car C? It seems unlikely to me that a serious accident like that could happen with an SDC without it surfacing in the media.

NormalAnomaly:

I agree that that would be in the media. Also, those kinds of crashes aren't included in the HDC data either, so leaving them out still makes it an apples-to-apples comparison.

Rand Mart:

The SDC data also doesn't include info on incidents in which a crash was avoided because the safety observer intervened. This means the data is meaningless for determining the wisdom of releasing AVs on our streets.

Timothy B. Lee:

This article is about driverless vehicles with no human behind the wheel, so there were no cases where a crash was avoided because a safety driver intervened. There may be cases where a remote operator intervened, but who cares? Waymo isn't planning to stop using remote operators any time soon, so the safety gains accomplished with their help are as real as any other safety gains.

Jason Samuels:

I'm convinced that this will be *the* thing that drives self-driving technology adoption. When the day comes that this technology is incorporated into new consumer vehicles, I predict that insurance companies will incentivize using it. And when people are presented with a choice to either let the car drive itself, or override to manual mode but then have to pay 3x as much on their insurance bill, they will choose the former en masse.

Tamritz:

Why would car insurance companies incentivize technology that would eliminate the justification for their existence?

Jason Samuels:

For the same reasons they already offer safe-driving incentives.

Tamritz:

But if the technology is driving and not the driver, then the driver doesn’t need insurance at all. The liability component is removed from them and transferred to the technology company.

Jason Samuels:

Right now, if somebody borrows your car and gets in an accident, the insurance you pay for on that car will cover the damages in the vast majority of cases. So the system is already set up in a way that you're insuring the vehicle whether it's driven by you or another permitted party. It seems logical to me that this concept will extend to self-driving tech, and also logical that insurance companies will continue their longstanding practice of pricing policies based on actuarial analysis of risk, which will lead to far lower rates for using the self-driving tech.
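
To make that concrete, here's a minimal sketch of how actuarial pricing could produce roughly the 3x premium gap mentioned above. Every number in it (crash rates, claim cost, expense load) is invented for illustration, not real insurer data:

```python
# Hypothetical sketch of actuarial pricing: premium = expected claim frequency
# x average claim severity x a fixed expense/profit load.
# All numbers are invented for illustration, not real insurer data.

def annual_premium(crashes_per_100k_miles, miles_per_year, avg_claim_cost, load=1.4):
    """Expected annual claims cost, grossed up by an expense/profit load."""
    expected_claims = crashes_per_100k_miles / 100_000 * miles_per_year
    return expected_claims * avg_claim_cost * load

# Manual driving vs. self-driving mode at ~30% of the human crash rate
# (the relative rate discussed in the article).
manual = annual_premium(0.40, 12_000, 20_000)        # $1,344/yr
self_driving = annual_premium(0.12, 12_000, 20_000)  # $403/yr
print(f"manual: ${manual:,.0f}/yr, self-driving: ${self_driving:,.0f}/yr")
```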

BearWithIt:

I agree with Jason on this one. Car companies will try to avoid taking on the product/driver liability at all costs, and there will no doubt be small print in the purchase/subscription contract establishing shared liability, driven by a requirement that a driver be in the driver's seat even when the vehicle is driving itself. I'd imagine that the lower risk of collisions and resulting payouts will let insurance be cheaper while still maintaining the healthy profit margins that insurance company execs have come to cherish!

Derek Tank:

The existence of car insurance companies is mandated by laws requiring liability insurance. Do you really think legislators are going to eliminate those laws just because more and more cars on the road become self-driving? Insurers might see a drop in revenue eventually, but I seriously doubt their existence is threatened.

Tamritz:

When you take a taxi, do you buy insurance beforehand?

BearWithIt:

No, because there is a person (the taxi owner/driver) who owns the insurance. Independent self-driving vehicle owners will be required to own insurance, and the vehicle maker will require some sort of 'culpability', such as forcing a driver to be in the driver's seat or whatever else is required once the courts set a precedent.

P. Morse:

Anyone who takes Waymo in SF can attest that it's a Zen experience, the way human drivers should drive. It drives without rushing or excessive accelerating and braking, yet arrives without delay, while humans zigzag, dart, cut each other off, zone out at lights, honk, and worse.

Disappointingly, it's one of those incredibly useful innovations we're probably going to have to wait for a new generation to adopt. The negative (and, I believe, paid-off) media coverage will not help.

Ryan Y:

Tim, great write-up. Curious if you've looked at "average # of passengers per mile driven" or something like that. Obviously, all normal cars have at least 1 passenger (the driver), whereas for some non-trivial portion of time, a Waymo vehicle has 0 passengers. Superficially, it points to the possibility that the safety benefit is even greater if measured not just by "risk of accidents" but also by "risk to people". I guess some of that gets factored in when looking at injuries, but presumably a Waymo taxi's airbags would deploy regardless of whether anyone is in the car.

I suppose it's possible that this potential further risk reduction may be (partially) offset if an occupied Waymo is more likely to have 2+ people in it than a regular car is to have a passenger in addition to the driver. But I suspect that, on average, a Waymo vehicle's "average passenger count per mile driven" is lower, perhaps substantially, which matters especially if its greatest vulnerability comes from being on the receiving end of unforeseeable accidents (someone else running a red light, etc.).
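
A back-of-the-envelope version of that adjustment, with every crash-rate and occupancy figure invented purely for illustration:

```python
# Risk to people per vehicle-mile = crash rate x average number of occupants exposed.
# All figures below are invented for illustration.

human_crash_rate = 1.0   # injury crashes per million vehicle-miles (normalized baseline)
waymo_crash_rate = 0.3   # ~30% of the human rate, per the figures discussed above

human_occupancy = 1.3    # driver plus occasional passengers (assumption)
waymo_occupancy = 0.8    # some miles driven with zero passengers between fares (assumption)

human_person_risk = human_crash_rate * human_occupancy   # 1.30
waymo_person_risk = waymo_crash_rate * waymo_occupancy   # 0.24
print(f"relative risk to people: {waymo_person_risk / human_person_risk:.2f}")  # ~0.18
```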

Rick Stahlhut:

Thanks.

A couple thoughts —

The other-guy rear-end collisions should also be examined. Don't assume they were actually the other driver's fault, though that is the default assumption.

If I wanted cars to rear-end me, I certainly could drive in a (legal) way that made it more likely. For example: maintain speed until the yellow light comes on just before I reach the intersection, then hit the brakes really hard instead of driving on through.

It would also depend on the local driving customs. If Boston is anything like it was twenty years ago, you are expected to push the yellow. Waymo might get a lot of other-guy rear-end collisions there.

Another thought — we should probably categorize assisted crashes:

1. A human would rarely/never make that error.

2. A human could easily make that error.

3. The other driver made the error.

We should hope for #2 to decrease; of course #1 increases by definition. We need the decrease in #2 to be much, much greater than the increase in #1, because the #1-type errors will scare people more.

Note also that #3 might be tricky to assign correctly. It is quite possible to create accidents by violating people's expectations, even without violating the law, as in the rear-end accidents above.

Sam Tobin-Hochstadt:

Waymo not running red or almost-red lights is good, and Waymo bears no blame for people expecting unsafe driving from others.

MS:

Why? If self-driving cars cause accidents by not following expected human behavior while there are still human-driven cars on the road, then that's a problem.

Sam Tobin-Hochstadt:

Because humans need to stop running red lights. There's a concept of fault in accidents for a reason.

MS:

Come on man, be reasonable. No one thinks running reds is good. There's a reason that's against the law.

Sam Tobin-Hochstadt:

You just said it would be unsafe for Waymo not to do it!

MS:

No I didn't. Running reds is not expected human behavior. The original commenter was talking about unexpectedly stopping at yellows (and unexpected stops in general).

sébastien:

Since Waymo is supposed to be a taxi service, wouldn't it be interesting to compare the crash rate of Waymo's cars with other taxi or similar services (Uber et al.)?

Professional drivers such as taxi or Uber drivers may behave differently on the road than the average driver. Are there any statistics somewhere about this?

Sam Tobin-Hochstadt:

Cruise did a study comparing itself with ride-hail driving in SF; Cruise was safer.

John Quiggin:

Even with small samples, collision data provide a very sharp test of the relative safety of human and AI drivers. Under the null hypothesis that they are equally safe, each party should be responsible for any given crash with probability 0.5. Applying the binomial formula, the chance of 19 or more human-caused crashes out of 23 is around 0.001. It would take many millions (maybe billions) of vehicle-kilometres and a bunch of strong assumptions to get a similarly powerful test for single-vehicle crashes.

Interestingly, I tried this on an AI: it got the right formula and explanation but failed on the calculation step for n > 15.
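
For what it's worth, a minimal sketch of that calculation (using the 19-of-23 split quoted above, so treat the exact p-value as illustrative):

```python
# One-sided binomial test: under the null hypothesis that human and AI drivers
# are equally safe, each crash is equally likely to be either party's fault (p = 0.5).
from math import comb

n, k = 23, 19  # 23 two-party crashes, 19 attributed to the human driver
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(X >= {k} | n={n}, p=0.5) = {p_value:.4f}")  # 0.0013
```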

Joshua Blake:

You've previously said that the majority of human crashes are caused by high-risk drivers (e.g. inexperienced ones). So it would be interesting to compare to an experienced reference class, for example Uber or taxi drivers. Knowing whether you're more likely to end up injured if you order an Uber or a Waymo would actually be a useful thing to know.

JB:

Are the crashes-per-mile-traveled numbers actually comparable? Waymo runs a very limited set of routes and situations. I don't know that there is a way to get accurate information about human accidents per mile traveled that excludes all accidents occurring on streets and in situations Waymo does not attempt. Even within the service area, Waymo simply blocks traffic or refuses the trip rather than attempting more complex parking/pickup/drop-off situations.

Timothy B. Lee:

To be honest, I don't think it's true that Waymo runs a limited set of routes and situations. They don't do freeways, but otherwise they offer service throughout San Francisco and in almost all weather conditions. Do you have an example of a situation that Waymo avoids?

JB:

I may be overestimating the limitations within the service area. The pickup/dropoff situations are probably a lot different but don't impact crashes that much.

However, I am still wondering how much effort they have made to ensure the human crash-rate data is comparable (drawn only from the roads, locations, weather conditions, and vehicle types used by Waymo).

vectro:

Was this measuring all crashes, or only crashes in fully automated mode?

Waymo has in the past made claims about crashes that were "under human control", but then it turned out that human control was activated something like 3 seconds before the crash.

Timothy B. Lee:

These are fully driverless vehicles, so there's no human being to take over at the last minute. So all crashes involving these vehicles should be counted.

vectro:

As I understand it, these vehicles can also be remote-controlled.

Jen:

I’m a 20-year veteran driver in Phoenix. I have to say that when you take drinking, impairment, tiredness, anger, running late, eating, texting, talking on the phone, shifting gears, changing radio stations, screaming children, arguments, conversation, night blindness, the music being too loud, bad eyesight, being too elderly to be driving, uninhibited teens, inexperienced drivers, speeding, a bad emotional state, not paying attention, and every other human-factor distraction that affects drivers out of the equation, logic would say that machines driving is much safer. And I concur. I see dozens (if not hundreds) of these on the road every day. They drive the speed limit and stay in the center of the lane. Their onboard technology (LiDAR, radar) gives them “eyes” all around the vehicle, far more than is humanly possible. Waymos have one job: to drive. And they do it exceptionally well. Humans are fallible and easily distracted (even the best drivers). Machines are not. I would much rather see the streets filled with Waymos than human drivers. If it happens to take my job from me, it’s a sacrifice I’ll gladly endure if it saves thousands of lives.

We humans are arrogant; we think we’re all good drivers. But reread the short list of distractions above and tell me that, when you take all that human crap out of the equation, we’re not all safer. We are!
