Radical Reads: Solving Self-Driving with Waabi World

Raquel Urtasun, Founder and CEO of Waabi


Image Source: Waabi

Editor’s Note:

This week we are cross-posting a blog from Radical Ventures portfolio company Waabi. World-leading machine learning and computer vision expert Raquel Urtasun founded the company to pioneer self-driving technology. Her goal is to unleash the full power of AI to “drive” safely in the real world.

Have you ever thought about what it takes to drive? When you get behind the wheel, a mix of intuition, instinct and learned skills helps you process what’s happening and make instantaneous decisions about how to navigate obstacles, when to slow down, speed up, stop, and much more. The human brain’s ability to do all this is remarkable.

Realizing the promise of self-driving technology requires us to teach the “brain” of self-driving vehicles to do exactly the same — while eliminating the risks of distraction, fatigue, and other human-specific vulnerabilities.

At Waabi, we have known from day one that this cannot happen with on-the-road testing alone. Beyond the fact that in some cases it can be risky, it would take thousands of self-driving vehicles driving millions of miles for thousands of years to experience everything necessary to learn to drive safely in every possible circumstance. After all, there are many things that happen out on the road, most of them very rarely.

Self-driving is one of the most exciting and vital opportunities in technology today. Once realized and scaled, it will change life as we know it—how we operate businesses, power industries, build cities and move goods and people. But if we want to see this realized in our lifetimes, we need to embrace a new approach.

Welcome to Waabi World, a scalable, high-fidelity, closed-loop simulator and a key to unlocking the potential of self-driving technology (read The Verge’s take here). Think of it as the ultimate school for self-driving vehicles. Powered by AI, it is an immersive and reactive environment that can design tests, assess skills, and teach the self-driving “brain” (what we call the Waabi Driver) to learn to drive on its own.

Four core capabilities define Waabi World:

  1. Builds realistic virtual worlds from data, automatically and at scale;
  2. Performs realistic sensor simulation, enabling testing of the entire software stack in a reactive manner;
  3. Creates diverse scenarios automatically and stress-tests our autonomous Waabi Driver;
  4. Trains the Waabi Driver to learn automatically from its mistakes.
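
To make the “closed-loop” idea concrete, here is a toy sketch of a simulate, evaluate, learn cycle: the simulator proposes scenarios (biased toward past failures), the driver acts, the simulated world reacts to those actions, and failures feed the next round of training. All names and dynamics are hypothetical; this is not Waabi’s system or API.

```python
# Toy sketch of a closed-loop "simulate -> evaluate -> learn" cycle.
# All names and dynamics here are hypothetical and illustrative only;
# this is not Waabi's system or API.
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    difficulty: float  # stand-in for traffic density, weather, occlusion, etc.

class DriverModel:
    def act(self, observation):
        # A real driver model would output controls from a learned policy/planner.
        return {"steer": 0.0, "throttle": 0.5}

    def learn(self, failed_scenarios):
        # A real driver model would update its parameters from these failures.
        pass

def run_closed_loop(driver, rounds=10):
    failures = []
    for _ in range(rounds):
        # 1. The simulator proposes a scenario, biased toward past failures.
        scenario = Scenario(difficulty=random.random() + 0.1 * len(failures))
        # 2. The driver acts and the simulated world reacts to those actions --
        #    this reactivity is what makes the loop "closed" rather than log replay.
        action = driver.act({"difficulty": scenario.difficulty})
        crashed = scenario.difficulty - 0.2 * action["throttle"] > 0.7  # stand-in rollout
        # 3. Failures are collected and fed back to train the driver.
        if crashed:
            failures.append(scenario)
            driver.learn(failures)
    return failures

print(len(run_closed_loop(DriverModel())), "failure cases found")
```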

This revolutionary simulator exposes the Waabi Driver to the vast diversity of experiences needed to hone its driving skills, including both common driving scenarios and safety-critical edge cases, reducing the need to drive testing miles in the real world. Waabi World represents the next step in our journey to rapidly and safely commercialize self-driving vehicles. 

5 Noteworthy AI and Deep Tech Articles: week of February 14, 2022

1) Machine learning outperforms clinical experts in classification of hip fractures (Nature)
Two convolutional neural networks (CNNs) developed at the University of Bath were able to classify hip fractures from X-rays with 19% greater accuracy than the hospital-based clinicians in the trial. The CNNs may be a tool to help standardize the fracture classification process and alleviate a bottleneck in the treatment process. Getting the classification right is a critical decision that directly influences patient outcomes and treatment costs. The CNN was trained on 3,659 hip radiographs, each classified by at least two expert clinicians, and achieved an overall accuracy of 92%. This is a step forward in a highly discussed and anticipated AI application: diagnostics automation and outcome prediction.
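
For readers curious what such a pipeline looks like in code, the sketch below shows a generic transfer-learning setup for classifying labeled radiographs. It is not the Bath team’s architecture; the class grouping, hyperparameters, and dummy data are all assumptions for illustration.

```python
# Illustrative transfer-learning sketch for radiograph classification.
# Not the published study's architecture; classes, hyperparameters and the
# dummy tensors below are placeholders for a real, clinician-labeled dataset.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed fracture-type grouping, for illustration only

# Standard ImageNet-style CNN backbone with the classification head replaced.
# In practice you would load pretrained weights (API varies by torchvision version).
backbone = models.resnet18()
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of labeled radiographs."""
    optimizer.zero_grad()
    logits = backbone(images)           # shape: (batch, NUM_CLASSES)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors standing in for preprocessed X-ray images and expert labels.
dummy_images = torch.randn(4, 3, 224, 224)
dummy_labels = torch.randint(0, NUM_CLASSES, (4,))
print(train_step(dummy_images, dummy_labels))
```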

2) Stanford researchers propose ‘jury learning’ in AI (Arxiv)
Supervised learning is a subcategory of machine learning and AI that relies on labeled datasets to train algorithms. Some labels seem factual (e.g. dog or cat), but in areas such as social media applications, different groups may have irreconcilable disagreements about ground truth labels. This leads some developers to ask, “whose labels should a model learn to emulate?” In a new paper, Stanford researchers introduce “jury learning,” an approach that resolves these disagreements explicitly through the metaphor of a jury: defining which people or groups, in what proportion, determine the classifier’s prediction. For example, a jury learning model for online toxicity might centrally feature women and Black jurors, who are commonly targets of online harassment. Jury learning tries to capture differences in opinion by modeling every individual annotator and predicting each annotator’s label before outputting a joint annotator prediction. 
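
As a rough illustration of the aggregation step, the toy sketch below takes hypothetical per-annotator predictions (which the real system would produce with a learned model), samples a jury with chosen group proportions, and returns the majority verdict. It is not the Stanford authors’ code, and all annotators and groups are invented.

```python
# Toy sketch of the jury-learning aggregation step: given predicted labels
# for individual annotators, sample a jury with chosen group proportions
# and take a majority vote. Illustrative only; not the paper's implementation.
import random
from collections import Counter

# Hypothetical per-annotator predictions for a single comment. In the real
# system, a model predicts what each individual annotator would say.
predicted_labels = {
    "a1": "toxic", "a2": "toxic", "a3": "not_toxic",
    "a4": "toxic", "a5": "not_toxic", "a6": "toxic",
}
annotator_groups = {
    "a1": "women", "a2": "women", "a3": "men",
    "a4": "Black", "a5": "men", "a6": "Black",
}

def jury_verdict(composition, jury_size=12):
    """Sample a jury matching the requested group proportions and majority-vote."""
    jury = []
    for group, fraction in composition.items():
        members = [a for a, g in annotator_groups.items() if g == group]
        jury += random.choices(members, k=round(fraction * jury_size))
    votes = Counter(predicted_labels[a] for a in jury)
    return votes.most_common(1)[0][0]

# A jury composition that centrally features women and Black jurors.
print(jury_verdict({"women": 0.5, "Black": 0.5}))
```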

3) AI drives a race car like a champ (Wired)
AI is outracing human drivers in a head-to-head virtual competition. Racing simulations, such as the PlayStation game Gran Turismo, are an AI application that hinges on real-time decisions in physical systems while interacting with humans. Unlike chess or Go, drivers must execute complex tactical maneuvers to pass or block opponents while operating their vehicles at their traction limits. Video games have become an important sandbox for AI research in recent years. In this case, the game faithfully reproduces “the non-linear control challenges of real race cars while also encapsulating the complex multi-agent interactions.” The research was published in Nature last week.

4) imodels: leveraging the unreasonable effectiveness of rules (Berkeley AI Research)
Berkeley AI Research (BAIR) released imodels, a Python package offering users an interface for fitting and using state-of-the-art interpretable modeling techniques. The new toolkit offers an alternative to common black-box modeling and claims that efficiency and transparency can be improved without sacrificing performance. Transparency in AI modeling can help mitigate issues of fairness, discrimination, and trust — all of which have received increased attention. But it is becoming clear that disclosures about AI pose their own risks: explanations can be hacked, and releasing additional information may make AI more vulnerable to attacks. These risks are known as the “AI Transparency Paradox.” Given this trade-off, the toolkit may be most useful in high-stakes applications such as medicine, automation and energy infrastructure.
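
To give a quick sense of the interface described above, here is a minimal sketch of fitting one of the package’s rule-based models with a scikit-learn-style fit/predict workflow. The specific estimator name is taken from the imodels documentation but should be treated as an assumption; check the package for the exact classes available in your version.

```python
# Minimal sketch of fitting an interpretable rule-based model with imodels.
# The sklearn-style fit/predict workflow follows the BAIR post; the exact
# estimator name (RuleFitClassifier) is assumed and may vary by version.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from imodels import RuleFitClassifier  # pip install imodels

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RuleFitClassifier()      # learns a sparse, human-readable set of rules
model.fit(X_train, y_train)

preds = model.predict(X_test)
print("test accuracy:", accuracy_score(y_test, preds))
# Unlike a black box, the fitted rules themselves can be printed and audited.
```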

5) An ancient language has defied decryption for 100 years. Can AI crack the code? (Rest of World) 
AI has already unraveled mysteries of the ancient world, including the authorship of the oldest known manuscripts of the Hebrew Bible. Now researchers are looking to use the technology to crack undeciphered scripts that have resisted scholars’ attempts at translation. When tested on two already deciphered ancient scripts, the algorithm took less than three hours, cutting out the days or weeks — or months or years — that it might take to manually test a theory by translating symbols one by one. The research significantly improves on previous work in automatic decipherment and may be a useful tool to generate a list of candidates and reduce the “brute force” hours experts put in comparing scripts.
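
The decipherment models in question learn character-level correspondences between scripts, which is well beyond a short sketch. Purely to illustrate the simpler idea of automatically generating and ranking candidate matches against a known lexicon (the kind of brute-force comparison the system accelerates), here is a toy example using plain string similarity; all tokens are invented and this is not the researchers’ method.

```python
# Toy illustration of ranking candidate matches between an undeciphered token
# and a known-language lexicon using surface similarity. A stand-in for (not
# the actual method of) the decipherment models discussed above.
from difflib import SequenceMatcher

known_lexicon = ["wanax", "kerasos", "elaiwon", "tiripode"]  # invented examples

def rank_candidates(unknown_token, lexicon, top_k=3):
    """Score each known word by string similarity and return the best matches."""
    scored = [(word, SequenceMatcher(None, unknown_token, word).ratio())
              for word in lexicon]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

print(rank_candidates("wanaka", known_lexicon))
```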


5 AI Predictions For The Year 2030

Forecasting the near-term future presents its own set of challenges, but extending our foresight to a five-year horizon amplifies these difficulties. This week, we feature excerpts from Radical Partner Rob Toews’ latest article in Forbes…


© 2023 Radical Ventures Investments Inc.