Re the flat tire scenario (or any breakdown): I'll float the alternative that they send a second taxi to pick the rider up within a few minutes and call a tow truck, which could be one they hire on the open market. Thus no staffing is needed; the tow truck driver does what's needed. Do the tow truck drivers need training? Maybe, but I'm not sure.
Perhaps this removes the need to expand city by city?
If it were just flat tires, it would probably be feasible to do something like this, but in practice I suspect it'll be more than that. Other situations where a vehicle might need a rescue or maintenance: some other kind of mechanical failure, mud blocks a vehicle's camera, a passenger vomits in the backseat, a car needs a recharge while it's on the other side of town from its owner. And that's to say nothing of cases where the self-driving software itself malfunctions.
For any one of these issues, you could probably come up with a way to deal with it without dedicated staff, but add it all up and I think they're gonna have to have a footprint in each city.
And that's to say nothing of keeping supply and demand balanced on the ride-hailing network in each city. Uber and Lyft spend heavily to recruit drivers and advertise for passengers. Probably Tesla can get away with less of this, especially if a lack of a driver allows it to undercut Uber and Lyft on price. But still it's gonna take some work on Tesla's part to make sure that there are cars available when customers pull out the app (and conversely that people who buy cars to rent out are able to get a decent return).
Good points, it seems like you'd at least need a staffed, central garage that could quickly repair and wash them and get them back on the road.
Tesla could easily compare their current software's predictions with what drivers actually do, whether the system is disengaged, the driver manually overrides system control, the system self-disengages, or the system fails while engaged. Or even when FSD hasn't been purchased! The software can ALWAYS run whenever the hardware is powered, independent of its purchase state, engagement status, or control mode.
In this way, *ALL* Teslas can provide experiential data at all times, not limited by factors such as Waymo's comparatively tiny fleet size.
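A minimal sketch of what that kind of shadow-mode comparison could look like; the `model`, `sensors`, and `can_bus` objects here are hypothetical stand-ins, not anything from Tesla's actual software:

```python
# Hypothetical shadow-mode loop: the driving model runs on every frame and its
# proposed controls are compared against what the human driver actually did,
# regardless of whether the system is engaged (or even purchased).
import time

DEVIATION_THRESHOLD = 0.15  # illustrative threshold, not a real calibration

def shadow_mode_step(model, sensors, can_bus, log):
    frame = sensors.capture()                 # cameras, speed, etc.
    predicted = model.predict(frame)          # controls the model *would* apply
    actual = can_bus.read_driver_controls()   # controls the human actually applied

    deviation = (abs(predicted.steering - actual.steering)
                 + abs(predicted.accel - actual.accel))

    # Only frames where model and driver meaningfully disagree are worth
    # uploading; those are the "free" training examples.
    if deviation > DEVIATION_THRESHOLD:
        log.append({
            "time": time.time(),
            "frame_id": frame.id,
            "predicted": predicted,
            "actual": actual,
            "deviation": deviation,
        })
```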
I can definitely see this kind of data having some value. It seems a bit tricky to use this kind of data because it won't be obvious whether a small deviation reflects a mistake or just a different driving style.
That can be distinguished by having thousands of vehicles traveling that same route, essentially averaging out driving styles and isolating mistakes.
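As a toy illustration of that averaging-out idea, with made-up numbers:

```python
# Toy example: steering angles (degrees) chosen by many drivers at the same
# spot on the same route. The median acts as a consensus "ground truth";
# individual outliers look like mistakes, and the model is scored against the
# consensus rather than against any single driver.
import statistics

driver_steering = [2.1, 2.3, 1.9, 2.0, 2.2, 7.5, 2.1, 2.4]  # 7.5 looks like a mistake
consensus = statistics.median(driver_steering)
spread = statistics.pstdev(driver_steering)

likely_mistakes = [s for s in driver_steering if abs(s - consensus) > 2 * spread]
model_prediction = 4.0
model_error = abs(model_prediction - consensus)

print(f"consensus={consensus:.2f}, likely mistakes={likely_mistakes}, model error={model_error:.2f}")
```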
If you're trying to train a self-driving system to stay in its lane and stop for a stop sign, the approach you describe works great. But what if you're trying to train the software to follow a police officer's hand gestures? Each training example is going to involve a different configuration of vehicles and a different cop making different gestures. I don't see how an "averaging out" approach will work.
Which goes back to Ethan Mollick's point about the jagged frontier. I have no doubt that Tesla's big data approach will make good progress on some aspects of autonomous driving. But I think there are likely to be others where it just doesn't work, just as there are some tasks where LLMs remain quite bad. Tesla doesn't seem to have any plan B for these situations, and a driverless car that works flawlessly 98 percent of the time is worse than useless.
The key is to identify anomalous conditions/situations, which will be an extreme minority of the data, then use those for a more focused training round. The anomaly identification can easily be fully automated, but the interpretation/labeling may initially need manual input once for each novel class of situation, such as "hand signals", where the average of thousands of driver responses is taken as ground truth for the proper response.
Eventually, with tens or hundreds of millions of recorded miles driven, and enough raw compute, there will be no need for any manual intervention whatsoever, as even low-level (rare) signals having consistent driver reactions will be automatically detected, classified and trained.
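One way such a pipeline could be wired up, sketched with generic off-the-shelf tools (scikit-learn); the features, thresholds, and counts are purely illustrative and not anything Tesla has described:

```python
# Sketch: automatically flag rare driving segments, then group them so a human
# only needs to label each *class* of novel situation (e.g. "hand signals") once.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Stand-in feature vectors per driving segment (speed profile stats, brake
# events, model-vs-driver deviation, ...). Real features would come from fleet logs.
segments = rng.normal(size=(10_000, 8))

# Step 1: fully automated anomaly detection isolates the rare tail of segments.
detector = IsolationForest(contamination=0.01, random_state=0).fit(segments)
anomalous = segments[detector.predict(segments) == -1]

# Step 2: cluster the anomalies so each novel class of situation surfaces as a
# group needing at most one round of manual labeling.
labels = DBSCAN(eps=3.0, min_samples=3).fit_predict(anomalous)
n_classes = len(set(labels) - {-1})
print(f"{len(anomalous)} anomalous segments, {n_classes} candidate classes to review")
```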
Tesla knows both data and compute at this scale well. Who else has a Top-100 supercomputer dedicated to the task? Now, saying that Tesla "can" do this with their massive resources does not mean they "will" do it correctly or optimally. In particular, they seem to evolve their model architectures very slowly, and they use FP32 training elements.
A counterexample is Comma.AI, who Consumer Reports rated as having the best Level 2+ ADAS in 2020. Comma.AI tweaks their model architectures frequently, with major changes every 6-9 months. Every Comma 3X hardware unit in the field sends a recording of EVERY drive (including all the delicious CAN data), even if the user hasn't subscribed to the Comma Prime service (opt-out is supported). They built a supercomputer roughly equivalent to Dojo in application-level power, though based on FP16 rather than FP32, which drastically reduces compute, memory, cooling and electrical demands while increasing throughput.
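For context on the FP16-vs-FP32 point, this is roughly what generic mixed-precision training looks like in PyTorch; the model and data are placeholders, and this reflects neither company's actual training stack:

```python
# Generic mixed-precision training loop: master weights stay FP32, but most of
# the math runs in FP16 on supported GPUs, cutting memory and bandwidth per step.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 2)).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()  # keeps tiny FP16 gradients from underflowing

inputs = torch.randn(64, 512, device="cuda")   # placeholder batch
targets = torch.randn(64, 2, device="cuda")

for step in range(100):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():            # FP16 where safe, FP32 elsewhere
        loss = nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```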
Their ADAS software, OpenPilot, is Open Source, and releases always freely include the trained model weights. The Comma 3X hardware uses a phone processor, the Qualcomm Snapdragon 845. The inputs are three cameras (forward wide, forward narrow, and driver IR monitoring), GPS, and the content of the vehicle CAN/CAN-FD busses.
On my 2021 Nissan LEAF, despite having limited Comma support (due to Nissan idiosyncrasies), I get hands-off driving for about 90% of my miles driven. All for a one-time lifetime cost of $1450 ($1250 Comma 3X unit + $200 vehicle-specific wiring harness).
Remember, to use FSD, you must FIRST buy a Tesla vehicle, THEN pay $12,000 for FSD, yielding a total starting cost of at least $50K. My used Leaf cost me $12K, and I can easily move my Comma 3X to my next vehicle for the cost of a $200 vehicle harness. Can you move your Tesla FSD to your next vehicle? I worry about Tesla's business model for developing and deploying FSD.
OpenPilot has been forked multiple times by the community, and forks that meet Comma.AI's strict standards are allowed to submit their changes back to the mainline OpenPilot repo. The trusted FrogPilot and SunnyPilot forks enable my Leaf to do things the mainline OpenPilot software does not yet allow.
All this, including hardware designed and manufactured in-house, from a company with about 50 employees that is PROFITABLE.
Comma explicitly calls their system "Level 2" with enforced driver attention, as that allows them to support what the vast majority of drivers need/want today. However, if you enable Experimental Mode (and click through all the warnings and waive all liability), you get a mix of advanced capabilities ranging from Level 2+ to Level 4. Comma presently states they have no intention of pursuing autonomous driving, but that's clearly to keep regulatory agencies off their back, and to avoid lawyer and court costs.
Yes, Tesla has pulled ahead since 2020, as that's what spending literally billions of dollars will get you, but not nearly as far ahead as you'd think that money should have taken them.
The Comma.AI folks are wicked-smart. Keeping their teams tiny and nimble is key to their steady (and profitable) progress. Tesla's early FSD and Autopilot progress was accomplished with similarly small teams, but their progress stalled for several years until their size had mushroomed to the point where brute force and massive money flows enabled their recent advances.
When it comes to fully automated driving, my money is on Comma.AI to do it right. Last year, Comma.AI signed their first vehicle OEM, Aptera, who will use Comma.AI as their ADAS vendor.
So, Comma.AI and Tesla FSD have the same number of OEMs using their respective systems! (Yes, yes, there is a minor scale difference. Pffft.)
Closing the loop back to Waymo, their system appears to rely on lots of hand-programming and manual classification, meaning their system is likely fragile and difficult to scale. Both Tesla and Comma.AI used this approach starting out, and both shifted away from it the moment they had enough data and compute to support training and deploying end-to-end networks. The most immediate result was that the system a) had minimal trouble supporting left-hand driving, and b) had very little trouble with signage in other countries. Where is Waymo in these respects? Who will be the first to achieve "Global Autonomous Driving"?
The availability of Waymo remote drivers means they are a "remote Level 2" system, where the remote driver must still take control (or at least provide input) whenever the automation fails. When can/should/will the human be removed from the loop, and which system will do it first?
I'd recommend watching the blog posts and videos released by Comma.AI. In particular, their "CommaCon" videos cover their technical approaches in depth. They are far more open about what they do and how they do it than either Tesla or Waymo.
"Their system appears to rely on lots of hand-programming and manual classification." What are you basing this on?
Specifically, the types of situations in which their system fails. It seems unable to generalize very well, which is the hallmark of end-to-end trained systems.
Edit: How will a Waymo vehicle perform if you take it from their Phoenix, AZ fleet and drop it in Anchorage, AK? I suspect a Tesla or Comma.AI system would not care about such a relocation. Would Waymo?
When you compare Tesla having a couple interventions in a few drives and Waymo doing 2 hours without one, you vastly understate the difference. Waymo is doing 50,000 drives/week with nobody behind the wheel. While they do have issues from time to time, they are rare. People don't understand where the bar is. It's not doing 1, 2, or even 100 drives in a row without a problem. It's doing tens of thousands, and a lot more than that.
100 percent agree!
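A quick back-of-envelope on where that bar sits, with purely illustrative numbers:

```python
# If each drive independently goes problem-free with probability p, the chance
# of N consecutive clean drives is p**N.
N = 50_000  # roughly one week of rides at the volume cited above
for p in (0.999, 0.9999, 0.99999):
    print(f"per-drive reliability {p}: P(problem-free week) ~= {p**N:.2g}")

# Per-drive reliability needed for a 90% chance of a fully problem-free week:
print(f"needed: {0.9 ** (1 / N):.7f}")
```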
Can Waymo go "from anywhere to anywhere", or is it geofenced? Is it limited to previously mapped areas? These "details" are important and should be made explicit.
The smaller the scope, the easier the problem.
The point of the article is that Tesla will learn the hard way as well that, in the short term, it will have to narrow the areas it covers in order to gain reliability when driving unsupervised. Otherwise you get a flood of a trillion problems at once.
So, at Waymo's present rate of adding cities, it will take them decades to cover the US. Can they go faster? If Waymo can, they are intentionally not doing so.
How advanced do you think Tesla FSD and Comma.AI will be by then, given that they already do 90% autonomy in ALL US cities today?
Waymo's own numbers and performance are stacked against them. Being perfect within a postage stamp is NOT equivalent to covering coast-to-coast.
I think that Waymo's rate of deployment will increase as they gain experience with and confidence in their technology. They spent 3 years in (a part of) the Phoenix metro area before expanding to SF in 2023. Then they spent about a year in SF before expanding to Austin and LA this year. If those expansions go well, I expect their rate of expansion will continue to increase over the next few years, allowing them to reach dozens of metro areas later in the decade.
BTW, I think one reason Waymo hasn't expanded faster so far is that they haven't quite figured out freeways. This makes the service non-competitive in a lot of areas since most trips take twice as long on surface streets. Waymo started driverless testing on freeways in Phoenix in January, which suggests they are fairly close to figuring out freeways. Once they enable freeway driving in their commercial fleet, it'll be a much more compelling service and the business case for expansion will be much stronger.
Tesla's "90% autonomy in ALL US cities today!" is not worth anything. To go from 90% to 99.999% is where the hard part will be. And 90% means 10% of the time the car fails on you, which is just a sure way to get one killed.
This is correct. Anybody who has been working on AI and ML for the last decade will tell you that ML has a "last mile" problem - in this case, getting from 90% to 99.999%. The majority of your effort in training and tuning your model is on the "last mile" of accuracy, not on the first 90%.
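To put rough, purely illustrative numbers on that gap, reading the percentage as a per-mile success rate:

```python
# Illustrative only: translate a per-mile success rate into the expected
# number of miles between failures.
for accuracy in (0.90, 0.99, 0.999, 0.99999):
    miles_between_failures = 1 / (1 - accuracy)
    print(f"{accuracy:.5f} per-mile -> a failure roughly every {miles_between_failures:,.0f} miles")
```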
Great insights- however, annoyingly long read with much redundancy.
Thanks for the feedback! I'd love to know which parts you found redundant or not interesting.
I found the article to be clearly organized by subtopic, with quite little overlap. One can always go to the next section if the current one seems to go too deep. I found the depth just right as well.
Thanks for your excellently researched article (which I did not find annoying, long, or redundant).
One thing you didn’t talk about is hardware. Waymo’s Lidar is significantly more precise than Tesla’s cameras. Do you think this will hold Tesla back?
I find it hard to see how Tesla can ever get quality data with its low quality hardware.
Yes, I think the lack of lidar is another factor holding Tesla back.
Tesla seems to be using lidar as ground truth internally to train its neural network. Theoretically, if humans can drive with vision alone, FSD should also be able to do it at some point. I can see Tesla providing Level 3 capabilities in maybe 10 years, but it can never achieve Level 4 with its current hardware and company structure.
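If that's true, the setup would presumably resemble supervising a camera-only depth network with lidar-derived targets. A generic, hypothetical sketch (nothing here reflects Tesla's actual pipeline):

```python
# Hypothetical: train a camera-only depth network against depth maps projected
# from a lidar-equipped data-collection fleet. Only pixels with a lidar return
# contribute to the loss.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, image):
        return self.net(image)  # one predicted depth value per pixel

model = TinyDepthNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(4, 3, 64, 64)            # placeholder camera frames
lidar_depth = torch.rand(4, 1, 64, 64)       # placeholder projected lidar depth
valid = torch.rand(4, 1, 64, 64) > 0.7       # mask: pixels that got a lidar return

optimizer.zero_grad()
pred = model(images)
loss = nn.functional.l1_loss(pred[valid], lidar_depth[valid])
loss.backward()
optimizer.step()
```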
What evidence do you have for that? There are no Tesla mapping vehicles driving around with lidar.
From Luminar's fiscal report: https://www.theverge.com/2024/5/7/24151497/tesla-lidar-luminar-elon-musk-sensor-autonomous
I would love to see Tesla roll out one metro at a time but I suspect they won’t for the same reason Musk would keep announcing FSD with way more capability than it really has - it’s his style and he has fired anyone who might disagree.
Years back I talked to a policy person at one of the companies doing self driving and he expressed major fear that Tesla would move too fast and be reckless and invite backlash. Fortunately, I guess, Tesla hasn’t made so much progress and so the companies (those that remain anyway) have probably had a chance to build up relationships and a reputation.
Yes, that is very possible and I really really hope Waymo gets to enough cities before then that people have stopped thinking "self-driving=Tesla."
Instead it is Cruise that was reckless and invited backlash with their recent dragging a pedestrian under the car and then lying about it.
Interestingly enough, FSD v12.4 will apparently include hand gesture recognition so hopefully the example of a first responder waving cars past won’t be a problem.
Also, FSD v12 is already navigating temporary construction diversions with aplomb. Do we know if Waymo is able to handle such temporary diversions without extensive mapping of the new/changing conditions?
Cruise previously indicated it was seeing thousands of miles between remote operator take-overs; then, after that recent incident, they admitted that their autonomous vehicles trigger a request for human help every four to five miles.
Makes one wonder what Waymo’s remote operator intervention rate is in the real world.
> FSD v12.4 will apparently include hand gesture recognition
Wow. Would this mean anyone could remotely stop an FSD car using the right hand signals?
I love these deep dives into the state of self-driving. Waymo's patient approach is impressive. I look forward to the day when they're available where I live, which might take a while.
Another important difference is that Waymo uses Lidar whereas Tesla refuses to. That goes beyond the software and business model differences: Lidar gives superior depth and motion perception compared to what human eyes or conventional cameras offer.
Agree.
Waymo's openness builds trust, while Tesla's occasional hype can leave some wary. It's a clear reminder: in marketing, honesty and transparency aren't just buzzwords, they're the foundation of lasting consumer relationships.
What about Tesla’s Full Self Driving (FSD) feature? It claims to "offer an unproven new path for autonomous driving"!
The next phase of these articles can even discuss the "flavor" of the driving they have. Waymo drives like a confident, reasonable human. Tesla's I haven't found as clear.