Image Source: “AI Goldilocks Zone”, Barney Pell, 2021
To kick off the new year, Radical hosted a workshop focused on a framework I developed around the crucial questions founders must ask themselves to ensure AI application success. The program is part of Radical’s AI Founders Master Class series, and participants included researchers from the Vector Institute for Artificial Intelligence, the Alberta Machine Intelligence Institute, Mila (the Quebec Institute for Machine Intelligence), Stanford University, and Oxford University.
I have seen many high-calibre technology teams with strong academic research pedigrees run aground in the face of market realities or the challenges of production implementations. For researchers, building a successful AI business requires shifting from an academic to an “art of the applied” mindset.
To help make this shift, I developed a framework to help AI teams navigate the subtleties and complexities of applying new technology. It poses a series of crucial questions in ten categories, including:
- The AI Goldilocks Zone – what can you do that others cannot, that customers need?
- Good enough AI – what capabilities, as well as technology and quality thresholds, do you need to achieve to satisfy customers? And what are the competitive alternatives?
- Human in the loop – what is the optimal mix of human input required to make this application work?
These kinds of questions (along with others focused on areas such as usability, trust and bias) can serve as gating factors to success. Asking these key questions early in the product development cycle can dramatically increase the chances of success, lead to unique designs and approaches, and enhance the value of any AI project. I have found that teams capable of successfully answering these questions are better positioned to make that perilous journey across the chasm into the mass market. Ultimately, successful AI founders must take the application of AI as seriously as the algorithms that underpin their technology.
5 Noteworthy AI and Deep Tech Articles: week of January 31, 2022
1) Listen: Yinhan Liu/CTO of BirchAI about applying ML in the healthcare industry (The Sequence)
Yinhan Liu, Co-founder and CTO of Radical portfolio company Birch AI and a former Facebook AI researcher, sits down with The Sequence to share her insights on machine learning and company building. In this interview, Yinhan, who was the lead researcher behind the pioneering natural language processing paper RoBERTa, reveals the company’s founding story, how the team uses AI to solve problems, and how to overcome the challenges of deploying technology in real-world healthcare scenarios.
2) The advantages of at-memory compute for AI inference (EE Times)
As traditional processor architectures struggle to keep up with the exploding compute demands of AI workloads, innovative “at-memory” chip design is providing compute density for a variety of intensive AI tasks, including vision, natural language processing, and recommendation engines. This article explores the chip architectures best suited to supporting AI inference workloads and details the efficiency of Untether AI’s at-memory computation. Untether AI, a Radical portfolio company, places compute elements directly next to the memory array, reducing power consumption and increasing throughput thanks to the architecture’s massive parallelism and memory bandwidth.
3) Everyday objects can run AI software (Science)
“Everything can be a computer.” Researchers are exploring physical neural networks (PNNs) to enable any physical object to power the computational layers inside a neural network. Most objects compute in a “passive way, merely by responding to stimuli. Canyons, for example, add echoes to voices without the use of soundboards.” Using an ordinary object as the central processor in a neural network opens opportunities for functional machines where smart sensors can perform computations on the fly.
4) AI drug discovery startup BenchSci scores $63 million Series C round led by Inovia, TCV (BetaKit)
Radical Ventures portfolio company BenchSci uses machine learning to expedite drug discovery by targeting inefficiencies and unnecessary spending in preclinical scientific research. The company’s ultimate goal is to bring novel medicines to patients 50 percent faster by 2025. This week, BenchSci announced that it secured US$50 million (CAD$63 million) in Series C capital to expand the reach of its technology around the world. Radical Ventures founders Jordan Jacobs and Tomi Poutanen first invested in the company in 2016 as very early angel investors, and Radical Ventures Fund 1 subsequently used Jordan’s and Tomi’s pro-rata rights to invest in 2017. Five years later, sixteen of the top twenty pharmaceutical companies use BenchSci’s technology, including Eli Lilly, Sanofi, Moderna, and AstraZeneca. This week, the Harvard Business Review also commented on BenchSci’s work helping pharmaceutical companies improve their response to customers: BenchSci worked with an AI conversational intelligence platform to privately and securely capture signals from emails, support tickets, and surveys, enabling customer service representatives to better respond to customer service tickets.
5) Transformers see in 3D (The Batch)
Visual robots often rely on 2D images to perceive the world, and this reliance can lead to mistakes. In one case, Tesla’s self-driving system mistook a full moon for a traffic light. Aljaž Božič and colleagues at the Technical University of Munich have released TransformerFusion to address this problem: a Transformer-based 3D scene reconstruction approach that derives 3D scenes from 2D video. The system falls short of human perception but demonstrates that Transformers excel at learning which features are most relevant to completing a particular task. The researchers showed that the shortcomings of Transformers, in this case identifying points in space, can be addressed by refining the representations with 3D convolutional neural networks.