Radical Reads: A Wave Of Billion-Dollar Language AI Startups Is Coming

Rob Toews, Partner


      


Image source: Cohere founding team (from left to right: Ivan Zhang, Nick Frosst, Aidan Gomez), Globe and Mail 

 

In his latest Forbes column, Radical Ventures Partner Rob Toews posits that language is the next great frontier in AI and surveys the landscape of emerging natural language processing (NLP) startups, which he believes are poised to create many billions of dollars of value. The column walks through several categories of NLP startups, from search to language translation to call centers to healthcare. It features Radical portfolio companies Cohere, BirchAI and Twelve Labs, and includes insights from leading NLP researchers such as Richard Socher, founder of You.com. We are sharing an excerpt of Rob’s article here:

Language is at the heart of human intelligence. It therefore is and must be at the heart of our efforts to build artificial intelligence. No sophisticated AI can exist without mastery of language.

The field of language AI—also referred to as natural language processing, or NLP—has undergone breathtaking, unprecedented advances over the past few years. Two related technology breakthroughs have driven this remarkable recent progress: self-supervised learning and a powerful new deep learning architecture known as the transformer.

We now stand at an exhilarating inflection point. Next-generation language AI is poised to make the leap from academic research to widespread real-world adoption, generating many billions of dollars of value and transforming entire industries in the years ahead.

A nascent ecosystem of startups is at the vanguard of this technology revolution. These companies have begun to apply cutting-edge NLP across sectors with a wide range of different product visions and business models. Given language’s foundational importance throughout society and the economy, few areas of technology will have a more far-reaching impact in the years ahead.

***

Cohere is a fast-growing startup based in Toronto that, like OpenAI, develops cutting-edge NLP technology and makes it commercially available via API for use across industries. Cohere’s founding team is highly pedigreed: CEO Aidan Gomez is one of the co-inventors of the transformer, and co-founder Nick Frosst is a Geoff Hinton protégé. The company recently announced a large Series B fundraise led by Tiger Global less than a year after emerging from stealth.

While Cohere does produce generative models along the lines of GPT-3, the company is increasingly focused on models that analyze existing text rather than generate novel text. These classification models have myriad commercial use cases: from customer support to content moderation, from market analysis to search.
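To make these classification use cases concrete, here is a minimal sketch of transformer-based text classification. It uses the open-source Hugging Face transformers library and a zero-shot classifier rather than Cohere’s commercial API; the example texts, model name and labels are illustrative assumptions only.

```python
# Illustrative sketch only: zero-shot text classification with the
# open-source Hugging Face transformers library (not Cohere's API).
from transformers import pipeline

# A general-purpose zero-shot classifier built on an NLI model.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

texts = [
    "My order arrived damaged and I would like a refund.",
    "This post contains abusive language targeting another user.",
]
labels = ["customer support", "content moderation", "market analysis"]

for text in texts:
    result = classifier(text, candidate_labels=labels)
    # The highest-scoring label comes first in the returned lists.
    print(f"{result['labels'][0]:<20} <- {text}")
```

In practice, a hosted API such as Cohere’s would replace the local pipeline call, but the workflow is the same kind of task: text in, label out.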

***

BirchAI has built a cutting-edge NLP solution focused on contact centers in healthcare. The company’s target customers include health insurers, pharmaceutical companies and medical device companies.

As Birch co-Founder/CEO Kevin Terrell put it, “Transformer-based NLP can now automate complex dialogue and document workflows that used to require highly trained employees. Healthcare, with its lagging productivity and aging workforce, is one sector where the need for this technology is particularly pronounced.”

***

One exciting startup building next-generation video search capabilities is Twelve Labs, which announced its seed financing earlier this month. Twelve Labs fuses cutting-edge NLP and computer vision to enable precise semantic search within videos. “Multimodal AI” like this—that is, AI that ingests and synthesizes data from multiple informational modalities at once, like image and audio—will play a central role in AI’s future.

“Large language models are accomplishing incredible things today. We think large multimodal neural networks for video are the obvious next step,” said Twelve Labs co-Founder/CEO Jae Lee. “Video embeddings generated by these networks will supercharge current and future video-driven applications with an intelligence that we’ve never seen before.”
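To illustrate what embedding-based semantic search over video might look like, here is a deliberately simplified sketch: text queries and video segments live in a shared embedding space and are matched by cosine similarity. The random vectors below stand in for a real multimodal encoder; this is not Twelve Labs’ actual system.

```python
# Conceptual sketch of embedding-based semantic video search. Random
# vectors stand in for a real multimodal encoder that would map both
# text queries and video segments into one shared embedding space.
import numpy as np

rng = np.random.default_rng(0)
dim = 512

# Pretend each video segment has already been encoded into a vector.
segment_ids = ["intro_00:00", "goal_12:34", "interview_45:10"]
segment_embeddings = rng.normal(size=(len(segment_ids), dim))

# Pretend the text query was encoded with the same shared encoder.
query_embedding = rng.normal(size=dim)

def cosine_similarity(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine_similarity(query_embedding, e) for e in segment_embeddings]
best = segment_ids[int(np.argmax(scores))]
print("Best-matching segment:", best)
```

The design choice that matters here is the shared embedding space: once queries and video segments are vectors produced by the same model, search reduces to nearest-neighbor lookup.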

***

This article is the second part of a series examining the role large language models will play in society and the economy going forward. You can read the first part of Rob’s series here.

5 Noteworthy AI and Deep Tech Articles: week of April 11, 2022

1) AI-directed algae blooms boost biofuel prospects (IEEE Spectrum)

The sugars and oils produced by algae can be turned into ethanol, diesel, and jet fuel, but for the past decade no one has found a way to produce algae at a profit. A new machine learning technique may revitalize this alternative energy source. A Texas A&M University team achieved the highest reported outdoor algae growth rate and productivity by optimizing the harvesting process so that each cell receives as much sunlight as possible. Their outdoor farm produced 43.3 grams of biomass per square meter per day, almost twice as much as existing methods. If the findings prove scalable, this could be a promising application of AI to energy production and climate change.

 

2) ACM Prize in Computing recognizes pioneer of robot learning (Association for Computing Machinery)

Pieter Abbeel has received the 2021 Association for Computing Machinery (ACM) Prize in Computing for contributions to robot learning, including learning from demonstrations and deep reinforcement learning for robotic control. Pieter pioneered teaching robots to learn both from human demonstrations (apprenticeship learning) and from their own trial and error (reinforcement learning), work that forms the foundation for the next generation of robotics. He is a Professor at UC Berkeley and the co-Founder, President and Chief Scientist at Radical Ventures portfolio company Covariant, which is building a universal AI that enables robots to see, reason and act autonomously in the real world. Pieter also hosts The Robot Brains Podcast, where he interviews leading experts in AI and robotics from around the world.

 

3) This horse-riding astronaut is a milestone in AI’s journey to make sense of the world (MIT Technology Review)

First it was an avocado chair; a year later, an astronaut riding a horse marks a new milestone in teaching machines to perceive the world the way humans do. San Francisco-based AI research lab OpenAI has announced DALL-E 2. The updated model has learned the relationship between images and the text used to describe them. It uses a process called “diffusion,” which starts with a pattern of random dots and gradually alters that pattern toward an image as it recognizes specific aspects of that image. While DALL-E 2 does not bring us to AGI, its images bring us closer to solving a significant hurdle in AI: understanding similarities. AI models have struggled to generalize broader concepts from a particular instance of an object. The capacity to reason and identify similarities is an underappreciated skill in humans, a key ingredient for creativity, and an elusive problem for computers. DALL-E 2 successfully matches human expectations in this area – for example, appropriately placing a “horse-riding astronaut” on a horse’s back, with the reins in the astronaut’s hands.
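As a rough intuition for the diffusion loop described above, the toy sketch below starts from random noise and repeatedly nudges it toward an image. The denoise_step function is only a placeholder that pulls pixels toward a fixed target pattern; a real diffusion model such as DALL-E 2 learns this step from data and conditions it on a text prompt.

```python
# Toy sketch of the reverse-diffusion idea: start from random noise and
# iteratively refine it toward an image. The "denoiser" here is a
# placeholder that blends in a fixed target pattern; it is not how
# DALL-E 2 is actually implemented.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for "the image the model is aiming for" (a bright square).
target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0

image = rng.normal(size=(8, 8))  # start from pure noise

def denoise_step(x, step, total_steps):
    # Placeholder denoiser: blend a little of the target in and add a
    # shrinking amount of fresh noise, mimicking gradual refinement.
    noise = rng.normal(size=x.shape) * 0.1 * (1 - step / total_steps)
    return 0.9 * x + 0.1 * target + noise

steps = 50
for t in range(steps):
    image = denoise_step(image, t, steps)

print("Mean distance from target:", float(np.abs(image - target).mean()))
```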

 

4) AI predicts side effects of combination therapies (BioTechniques)

Researchers at Cancer Center Amsterdam (Netherlands) used AI to predict the adverse effects of combination therapies, which are commonly used to treat various cancers. Their preliminary findings show that the model could predict the side-effect profiles of widely used combination therapies. The researchers are now developing a statistical method to quantify the model’s accuracy before it can be used in clinical practice. Given that drug interactions are highly complex and involve many molecular, macromolecular, cellular, and organ processes, the researchers remark that “it is unlikely that [the] approach will lead to black-and-white decisions.” The research surfaced clear snapshots of the interplay of drugs, diseases, and the human body as described by millions of patients. While the tool is still in the proof-of-concept phase, it could become an essential aid to decision-making for patients and healthcare workers.

 

5) Just for fun: Solar powered dawn poems: progress report (Allison Parrish via Import AI)

Computer programmer, poet, and game designer Allison Parrish shares a delightful DIY project: a tiny solar-powered poem generator. She generates the poems with a Markov chain text generator and supporting scripts run over a dataset she assembled. “What’s nice about this is the message that you can have fun building little AI-esque things without needing to boot up a gigantic supercomputer.”
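For readers curious about the core technique, here is a minimal word-level Markov chain text generator. The tiny corpus below is a stand-in, not Parrish’s dataset, and this is not her actual code.

```python
# Minimal word-level Markov chain text generator, in the spirit of the
# project described above. The corpus is a tiny illustrative stand-in.
import random
from collections import defaultdict

corpus = (
    "the sun rises over the quiet hill and the light spills "
    "over the water and the morning begins again"
)

# Build a table mapping each word to the words observed to follow it.
words = corpus.split()
transitions = defaultdict(list)
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

def generate(start, length=12, seed=None):
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        choices = transitions.get(word)
        if not choices:  # dead end: no observed successor, stop early
            break
        word = rng.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("the", seed=1))
```

Because each next word is sampled only from words that actually followed the current word in the source text, the output echoes the voice of the dataset without any neural network or supercomputer involved.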

 

