How are the models different despite their similarity? I would imagine the transformer becomes highly specialized in its internal representation, right? But it's still fundamentally about predicting symbols.
The difference is that Waymo is relaxing the separation between its modules in a deliberate fashion, with rigorous validation to ensure it doesn't just become all mush.
Tesla seems to think that if they provide the inputs, i.e. images, the machine will sort it all out in a giant neural net.
Waymo, by contrast, provides its AI with richer data and priors, via lidar and maps.
There's a vast amount of engineering and research into how to do these things properly, and two different end-to-end implementations can be as different as night and day in practice.
I like how Comma.AI's add-on system layers its smarts atop existing vehicle systems, making use of certified vendor systems to support and validate its own top-level logic.
For example, Comma (well, actually, the open-source openpilot software) has its own LKAS capability that provides lane keeping on vehicles lacking it. When a vehicle has its own LKAS, however, the Comma LKAS capability becomes secondary, providing redundancy.
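As a hedged illustration of that layering (not openpilot's actual code; the function name and inputs here are hypothetical assumptions), the selection logic amounts to preferring the certified OEM system when it is present and healthy:

```python
# Illustrative sketch only: prefer the vehicle's certified LKAS when
# available, keeping the add-on's lane keeping as an independent backup.
# Function name and inputs are hypothetical, not from openpilot.

def select_lkas_source(vehicle_has_lkas: bool, vehicle_lkas_healthy: bool) -> str:
    """Pick which system actively steers; the other acts as redundancy."""
    if vehicle_has_lkas and vehicle_lkas_healthy:
        return "vehicle"    # certified vendor system leads
    return "openpilot"      # add-on fills in where LKAS is absent or faulted
```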
Comma also uses end-to-end training for their camera-based system, with a separate safety processor ("Panda") validating ("sanity checking") and managing CAN data flowing between Comma and the vehicle. Such safety processors are common throughout many industries, including the electric power grid, nuclear reactors, aircraft avionics, spacecraft and so on.
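To make the "sanity checking" idea concrete, here is a minimal Python sketch of the kind of gate such a safety processor can apply to actuation commands before they reach the vehicle's CAN bus. The specific limits and checks are assumptions made up for illustration; Panda's real firmware is C with per-vehicle safety models.

```python
# Hypothetical sanity-check gate for steering commands headed to the CAN bus.
# All numeric limits below are invented for illustration.

MAX_STEER_TORQUE = 300   # assumed absolute torque authority limit
MAX_TORQUE_DELTA = 50    # assumed per-message rate-of-change limit

last_torque = 0

def allow_steer_command(requested_torque: int, controls_engaged: bool) -> bool:
    """Return True only if the command passes every safety check."""
    global last_torque
    if not controls_engaged:                        # never actuate when disengaged
        return False
    if abs(requested_torque) > MAX_STEER_TORQUE:    # cap absolute authority
        return False
    if abs(requested_torque - last_torque) > MAX_TORQUE_DELTA:  # cap rate of change
        return False
    last_torque = requested_torque
    return True
```

The key property is that this gate runs on its own hardware, so a fault or runaway process in the main computer cannot bypass it.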
Comma does this despite openpilot strictly being a Level 2 system (at least presently). I hope all other self-driving companies are using equivalent safety hardware that runs independently of the host.
This safety module provides a key side benefit: it is the ONLY part of the Comma system that must be validated at the hardware level, allowing the main system hardware to be built to commercial standards, much like a smartphone.
All future vehicles need radar-based collision avoidance with autonomous braking, and it should override every other system in the vehicle.
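A sketch of what "override all other systems" could mean as arbitration logic; the types, names, and convention here are illustrative assumptions, not from any real vehicle stack:

```python
# Illustrative priority arbiter: an independent AEB channel wins over
# whatever the driving stack requests. Negative accel means braking.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LongitudinalCommand:
    accel_mps2: float   # requested acceleration in m/s^2
    source: str

def arbitrate(stack_cmd: LongitudinalCommand,
              aeb_cmd: Optional[LongitudinalCommand]) -> LongitudinalCommand:
    """When AEB is active, take the harder braking of the two commands."""
    if aeb_cmd is None:
        return stack_cmd
    # Never let the driving stack soften an emergency brake request.
    if stack_cmd.accel_mps2 < aeb_cmd.accel_mps2:
        return stack_cmd
    return aeb_cmd
```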
In addition, the only successful self-driving vehicles will combine radar-based autonomous braking with cameras plus some combination of lidar, radar, or other sensors. The idea that camera-only self-driving cars are possible vastly underestimates the complexity of the brain (which, granted, can fail when compromised), and such an idea stems from a kind of "fixed thinking" mental block.