Tesla's robotaxi thesis rests on a camera-only perception stack achieving Level 4 autonomy — something no company has yet commercially deployed without LiDAR. FSD v14 reaches 9,200 miles between critical interventions, a dramatic improvement but still far from L4 requirements of millions of miles between disengagements.
NHTSA Investigation EA26002
On March 18, 2026, NHTSA escalated its FSD camera degradation investigation to Engineering Analysis covering 3.2 million vehicles. This is the most advanced stage before a potential recall.
The camera-only approach has a fundamental cost advantage — no LiDAR ($267/unit at scale), lower compute requirements — but a fundamental data disadvantage: no direct depth sensing. NVIDIA research confirmed log-linear data scaling laws for end-to-end autonomous driving, validating Tesla's data-centric approach, but the question remains whether camera-only can reach the safety bar.
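The practical meaning of a log-linear scaling law is that each order of magnitude of additional data buys a roughly constant multiplicative gain in performance. A minimal sketch of that shape, using hypothetical constants (the 3x-per-decade gain and the starting point are illustrative assumptions, not fitted to Tesla or NVIDIA data):

```python
import math

# Illustrative only: constants below are NOT fitted to real FSD data.
def miles_between_interventions(fleet_miles, base=1_500, gain_per_decade=3.0,
                                ref_miles=8.4e9):
    """Hypothetical log-linear model: every 10x of fleet data beyond
    `ref_miles` multiplies miles-between-interventions by `gain_per_decade`."""
    decades = math.log10(fleet_miles / ref_miles)
    return base * gain_per_decade ** decades

# Under these assumptions, how many orders of magnitude more data would
# ~1M miles between interventions require, starting from ~1,500 miles?
decades_needed = math.log(1_000_000 / 1_500) / math.log(3.0)
print(f"orders of magnitude more data needed: {decades_needed:.1f}")
```

Even if the scaling law holds, the sketch shows why "more data" alone is a slow road to an L4-grade safety bar: multiplicative gains require exponential data growth.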
Can end-to-end neural networks fully compensate for the lack of active depth sensing (LiDAR/radar)?
Tesla Is the Only Company Pursuing Camera-Only L4
Every deployed L4 system -- Waymo, Zoox, Pony.ai, and Baidu Apollo Go -- uses LiDAR. The camera-only approach faces three empirical challenges: NHTSA investigations into visibility-related failures, cameras retaining only 15-43% of their accuracy in adverse weather, and Tesla's Austin fleet crashing at roughly 4x the human rate.
| System | Cameras | LiDAR units | Radar units | Other sensors |
|---|---|---|---|---|
| Tesla FSD | 8 (HW4) | None | None | Ultrasonic (some models) |
| Waymo 6th-gen | 13 | 4 | 6 | Audio receivers |
| Zoox | Yes | Yes | Yes | Thermal cameras (FLIR) |
| Pony.ai 7th-gen | 14 | 9 | 4 | Thermal, audio |
| Baidu Apollo Go | Multi | Multi | Multi | 360-degree fusion |
The Core Bet
If camera-only works, Tesla wins on cost and scale. If it doesn't, Tesla is years behind competitors who chose LiDAR. This is the single most consequential technical question in the Tesla thesis.
HW4 Is 7-10x Below NVIDIA's L4 Reference Spec
Tesla's HW4 delivers an estimated 100-150 INT8 TOPS. NVIDIA's DRIVE AGX Thor delivers 1,000 INT8 TOPS. Tesla's AI5 chip (2,000-2,500 TOPS) has been delayed to late 2026 / mid-2027, meaning the Cybercab will launch on HW4.
| Spec | HW3 | HW4 | AI5 | NVIDIA Thor |
|---|---|---|---|---|
| INT8 TOPS | 36/SoC | 100-150 | 2,000-2,500 | 1,000 |
| Process node | Samsung 14nm | Samsung 7nm | TSMC 3nm | TSMC 5nm |
| Memory | LPDDR4, 96 GB/s | 16GB GDDR6, 384 GB/s | TBD | HBM |
| Status | In fleet (4M+ cars) | Current production | Delayed to late 2026+ | Shipping to partners |
| Can run latest FSD? | No | Yes | N/A | Yes |
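The "7-10x" figure in the heading is simple arithmetic on the estimates above; a quick check:

```python
# Quick check of the TOPS ratios cited above (estimates, not benchmarks).
hw4_tops_range = (100, 150)  # Tesla HW4, estimated INT8 TOPS
thor_tops = 1_000            # NVIDIA DRIVE AGX Thor, INT8 TOPS

gap_low = thor_tops / hw4_tops_range[1]   # best case for HW4
gap_high = thor_tops / hw4_tops_range[0]  # worst case for HW4
print(f"Thor vs HW4: {gap_low:.1f}x to {gap_high:.1f}x")  # 6.7x to 10.0x
```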
Efficiency vs Raw Power
Tesla argues its end-to-end neural network is more computationally efficient than traditional modular stacks, achieving comparable real-world driving performance with fewer TOPS. This is plausible but unproven at L4. The Cybercab launching on HW4 rather than AI5 is a real constraint that limits the neural network models it can run.
Tesla's FSD fleet has surpassed 8.4 billion cumulative miles as of February 2026, growing exponentially -- from 6M in 2021 to 2.25B in 2024 to 4.25B in 2025. This dwarfs Waymo's 170.7 million rider-only miles by roughly 50x. Tesla collects more data per day than all competitors combined.
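The "roughly 50x" claim is a one-line ratio of the figures above:

```python
# Ratio of Tesla's cumulative FSD miles to Waymo's rider-only miles,
# using the figures cited in the text.
tesla_fsd_miles = 8.4e9      # cumulative FSD miles, as of Feb 2026
waymo_rider_only = 170.7e6   # Waymo rider-only miles
ratio = tesla_fsd_miles / waymo_rider_only
print(f"~{ratio:.0f}x")  # ~49x, i.e. roughly 50x
```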
Quantity vs Quality
Tesla has overwhelming data volume but all of it is camera-only, L2 supervised driving on mapped roads. Waymo has less data but it is L4 autonomous driving across diverse conditions with LiDAR ground truth. The question is whether Tesla's volume compensates for Waymo's quality and diversity.
Improving But Far From Waymo's Safety Benchmarks
FSD v14 achieved approximately 1,454 miles between critical disengagements overall (834 in city), a 3-4x improvement over v13.2. But crowdsourced data shows sharp regressions between minor versions, and the system remains far from the safety threshold needed for unsupervised commercial operation.
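A back-of-envelope extrapolation makes the remaining gap concrete. It assumes, hypothetically, a ~1M-mile unsupervised-operation bar and that each major release sustains the 3-4x gain seen from v13.2 to v14 (both assumptions are illustrative, not sourced):

```python
import math

current = 1_454          # FSD v14 miles between critical disengagements
threshold = 1_000_000    # assumed L4 bar: ~1M miles between disengagements
gain_per_version = 3.5   # midpoint of the 3-4x v13.2 -> v14 improvement

versions_needed = math.log(threshold / current) / math.log(gain_per_version)
print(f"major versions at 3-4x each: ~{versions_needed:.1f}")
```

Even under these generous assumptions, roughly five more v13-to-v14-scale jumps would be required, with no regressions between them.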