Autonomous vehicle testing has long been constrained by the limitations of traditional closed-course facilities. This week, we feature insights from Radical portfolio company Waabi, whose CEO and Founder, Raquel Urtasun, reveals how Mixed Reality Testing is transforming AV safety validation by seamlessly blending physical test tracks with AI-powered virtual scenarios, enabling testing capabilities without real-world risks.
Closed-course testing has been a staple of vehicle safety testing for over 100 years. As the automotive industry has evolved toward automation, closed-course testing has been adopted by autonomous vehicle (AV) developers working on everything from basic driver assistance systems to self-driving technology spanning SAE levels two through five.
Though the practice is valuable for testing basic autonomy capabilities, this approach is resource-intensive and lacks repeatability, diversity, and the ability to test safety-critical scenarios.
Introducing Mixed Reality Testing
Waabi’s Mixed Reality Testing (MRT) offers a revolutionary alternative that completely transforms what’s possible on a closed-course track.
In the same way that augmented reality goggles blend the physical world with a virtual world, MRT enables the Waabi Driver to drive autonomously down a physical test track while simultaneously experiencing numerous intelligent, simulated actors that coexist in this hybrid reality and react to each other and to the physical world in naturalistic ways.
All this is possible by leveraging Onboard Waabi World, a version of Waabi’s neural simulator that runs in a few milliseconds on the onboard compute. As Onboard Waabi World generates new scenarios, the real physical sensor readings are modified instantaneously so the Waabi Driver can react to the blend of real and virtual elements while driving in the physical world. This fusion creates a first-of-its-kind reality that unlocks unlimited testing possibilities previously impossible to achieve safely or practically.
A Dynamic, Intelligent Alternate Reality
All of the virtual elements in MRT exist within a sophisticated 4D neural digital replica of the real world, where AI-powered actors behave with human-like unpredictability and intelligence, mirroring the chaotic nature of the real world. They can replicate anything from aggressive drivers weaving through traffic to distracted pedestrians or impaired motorists. Infrastructure can be created or modified on demand. Traffic lights can change states, and construction zones can shift layouts instantly.
Mixed Reality Testing (MRT).
Achieving Unprecedented Safety Testing at Scale
Waabi has been utilizing MRT as a central testing approach for more than two years. It has been a key innovation that has allowed Waabi to advance faster, safer and with unparalleled capital efficiency, enabling us to achieve feature-complete autonomous driving capabilities at the beginning of this year and paving the way to our imminent driverless launch.
We have always been driven to establish what would be the ultimate “driving test” for an AI system, and with MRT, we are delivering on that vision. For the first time, we can test as aggressively and comprehensively as the real world demands, without exposing anyone to the real world’s risks. This breakthrough allows us to build an AV that we can truly trust and accelerates the path to safer roads for everyone.
Read more about Waabi’s Mixed Reality Testing approach in their full blog post. Bloomberg Businessweek also published a feature story on this breakthrough.
AI News This Week
-
To Survive the AI Age, the Web Needs a New Business Model (The Economist)
AI-powered search engines are disrupting the internet’s economic foundation by reducing website traffic and ad revenue. Monthly search traffic has fallen 15% as users turn to chatbots for answers rather than visiting sites directly. Science, education, and health websites have lost 10-31% of visitors, threatening content creation incentives. Proposed solutions include pay-as-you-crawl systems for AI bots and attribution-based compensation for content sources.
-
Trump Hails $92 Billion in Investments for AI, Energy Projects (Bloomberg)
President Trump announced more than $92 billion in AI and energy infrastructure investments during the inaugural Pennsylvania Energy and Innovation Summit. The investments span new data centers, power generation, grid infrastructure upgrades, and AI training programs as part of the administration’s strategy to maintain US dominance over China in AI development. With data centers projected to account for 8.6% of US electricity demand by 2035, the administration emphasizes energy production as critical to maintaining America’s competitive edge in the global AI race.
-
China Is Spending Billions to Become an A.I. Superpower (NYT)
Beijing is implementing an industrial policy approach to help Chinese AI companies close the gap with American competitors. Since OpenAI blocked China’s access last July, Chinese companies like DeepSeek and Alibaba have developed top-performing open-source AI systems. The government has spent nearly $100 billion on semiconductor development since 2014, with an additional $8.5 billion allocated for AI startups in April.
-
These Four Charts Show Where AI Companies Could Go Next in the US (MIT Technology Review)
A new Brookings Institution report reveals AI development remains highly concentrated in tech hubs, with the San Francisco Bay Area and major metros like Boston containing nearly two-thirds of AI workers and over 75% of startups. However, 14 “emerging AI centers”, including university towns like Madison and regional centers like Pittsburgh, show promise despite lacking talent, innovation, or adoption readiness. These emerging areas typically specialize in either federal contracts or attracting venture capital funding, suggesting growth opportunities through academic-industry partnerships.
-
Research: A New and Fragile Opportunity for AI Safety (Multi-institutional collaboration including UK AI Security Institute, Apollo Research, OpenAI, Anthropic, Google DeepMind)
Researchers identify a critical but potentially closing window for AI safety: monitoring reasoning models’ “chains of thought” to detect harmful behaviours. Complex tasks force Transformers to externalize reasoning in human-readable language, enabling oversight of AI decision-making. However, scaling reinforcement learning, process supervision, and novel architectures could degrade monitorability. Early applications successfully detect model misbehaviour and alignment issues. The authors urge immediate investment in CoT monitoring research and recommend that developers track monitorability when making training decisions.
Radical Reads is edited by Ebin Tomy.