Discussion about this post

BobC:

I like how Comma.AI's add-on system layers its smarts atop existing vehicle systems, using the certified vendor systems to support and validate its own top-level decision-making.

For example, Comma (well, actually, the open-source OpenPilot software) has its own LKAS capability that provides lane keeping on vehicles lacking it. When a vehicle has its own LKAS, the Comma LKAS becomes secondary, providing redundancy.
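A minimal sketch of what that kind of layering might look like, purely for illustration: the stock, certified LKAS stays primary whenever it is present and healthy, and the add-on controller is used as the fallback. All names, types, and values here are hypothetical; this is not OpenPilot's actual arbitration code.

```python
# Illustrative sketch only -- not OpenPilot's actual arbitration logic.
# Shows the general idea of layering an add-on LKAS on top of a stock system:
# the stock (certified) controller stays primary when present and healthy,
# and the add-on controller is used otherwise. All names are hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class LkasCommand:
    steering_torque: float  # normalized torque request, -1.0 .. 1.0


def arbitrate_lkas(stock_cmd: Optional[LkasCommand],
                   addon_cmd: Optional[LkasCommand],
                   stock_healthy: bool) -> Optional[LkasCommand]:
    """Prefer the vehicle's own LKAS; fall back to the add-on system."""
    if stock_cmd is not None and stock_healthy:
        return stock_cmd   # certified vendor system remains primary
    return addon_cmd       # add-on provides LKAS where the car has none


# Example: a car without factory LKAS uses the add-on command.
cmd = arbitrate_lkas(stock_cmd=None,
                     addon_cmd=LkasCommand(steering_torque=0.12),
                     stock_healthy=False)
print(cmd)
```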

Comma also uses end-to-end training for its camera-based system, with a separate safety processor ("Panda") validating ("sanity checking") and managing the CAN data flowing between Comma and the vehicle. Such safety processors are common throughout many industries, including the electric power grid, nuclear reactors, aircraft avionics, spacecraft, and so on.
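For flavor, here is a hedged sketch of the general idea of a separate safety gate that sanity-checks actuator commands before they reach the CAN bus. The real Panda safety rules are vehicle-specific firmware written in C; the limits, names, and message model below are invented for illustration only.

```python
# Hedged sketch of a "safety gate" between a planner and the vehicle CAN bus,
# in the spirit of a separate watchdog that sanity-checks commands.
# Limits and message layout here are made up for illustration; the real
# Panda safety rules live in firmware and are vehicle-specific.

MAX_TORQUE = 300        # hypothetical absolute torque limit (vehicle units)
MAX_TORQUE_DELTA = 50   # hypothetical max change per message


class SafetyGate:
    def __init__(self):
        self.last_torque = 0

    def check(self, requested_torque: int, driver_override: bool) -> bool:
        """Return True only if the command is within safe bounds."""
        if driver_override:
            return False                             # driver always wins
        if abs(requested_torque) > MAX_TORQUE:
            return False                             # absolute limit
        if abs(requested_torque - self.last_torque) > MAX_TORQUE_DELTA:
            return False                             # rate limit
        self.last_torque = requested_torque
        return True


gate = SafetyGate()
print(gate.check(requested_torque=40, driver_override=False))   # True
print(gate.check(requested_torque=400, driver_override=False))  # False: over limit
```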

Comma does this despite being strictly a Level 2 system (at least presently). I hope all other self-driving companies use equivalent safety hardware that runs independently of the host.

This safety module provides a key side benefit: It is the ONLY part of the Comma system that must be validated at the hardware level, allowing the main system hardware to be designed to commercial standards, precisely like a smartphone.

Oleg Alexandrov:

The difference is that Waymo is relaxing the separation between its modules in a deliberate fashion, with rigorous validation to ensure it doesn't just become all mush.

Tesla seems to think that if they provide the inputs, meaning images, the machine will sort it all out in a giant neural net.

Waymo, by contrast, provides its AI with richer data and priors: lidar and maps.

There's a vast amount of engineering and research into how to do these things properly, and two different end-to-end implementations can be as different as night and day in practice.
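To illustrate that last point, here is a purely hypothetical sketch of how two "end-to-end" policies can have very different contracts even though both map sensor inputs to driving commands: one consumes camera frames alone, the other is also conditioned on lidar returns and map priors. None of these names reflect any company's actual API.

```python
# Purely illustrative: two "end-to-end" driving policies with different input
# contracts. Class and field names are hypothetical, not any company's API.

from dataclasses import dataclass
from typing import List


@dataclass
class CameraOnlyInput:
    images: List[bytes]                 # raw camera frames


@dataclass
class RichInput:
    images: List[bytes]
    lidar_points: List[tuple]           # (x, y, z) returns
    map_lane_centerlines: List[list]    # prior geometry from an HD map


def drive_camera_only(obs: CameraOnlyInput) -> dict:
    # The network must infer geometry, agents, and lanes from pixels alone.
    return {"steer": 0.0, "accel": 0.0}


def drive_with_priors(obs: RichInput) -> dict:
    # Same output interface, but the model is conditioned on measured depth
    # and mapped lane geometry rather than having to infer them.
    return {"steer": 0.0, "accel": 0.0}
```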

