Radical Reads: The Best of NeurIPS 2021

Daniel Mulet, Investor



Image Source: Microsoft; NeurIPS

The Conference on Neural Information Processing Systems (NeurIPS), widely regarded as the world’s leading conference for artificial intelligence researchers, wrapped up early last week. A record 2,344 papers were accepted at the conference, covering research on reinforcement learning, transformers, graph neural networks, differential privacy, generative models, and federated learning, among many other topics. Below we share some of the notable topics, talks and themes to emerge from NeurIPS 2021.

Machine Learning and Biology

The intersection of machine learning and biology continues to accelerate, with many papers applying machine learning to molecular structure prediction for protein and small-molecule drug design. One of this year’s best paper awards went to the creators of a group of benchmark datasets. Such benchmarks often set the stage for a flurry of activity as researchers develop new methods to meet or exceed them. It is also a signal that we are still near the start of what machine learning will be able to accomplish in the life sciences.

Neuroscience

A wide-ranging conversation between Daniel Kahneman and Professor Josh Tenenbaum (MIT) explores the intersection of judgment and computational neuroscience, examining how the way humans think influences the development of AI systems. The discussion delves into areas where human thinking is a model for AI, but also cases where human thinking is limited or flawed. They also discuss how to better design the ways AIs and humans interact.

Collective Intelligence

Professor Radhika Nagpal (Harvard) discusses a fascinating project – Eciton Robotica – in which researchers in her lab created a self-assembling swarm of climbing robots inspired by the living architectures of army ants.

AI for Education

In a featured talk, Luis von Ahn (CMU), the CEO and Co-founder of Duolingo, describes how Duolingo uses AI to assess, engage, and teach better. Duolingo is a language-learning platform where students around the world complete over 600 million exercises every day, providing an invaluable resource for the company’s data science team. This enables the company to create lessons tailored to each student, and to improve the ordering of the curriculum by running A/B tests that optimize for learning outcomes.

5 Noteworthy AI and Deep Tech Articles: week of December 20, 2021

1) 2022: A major revolution in robotics (ZDNet) 
Researchers at the intersection of AI, machine vision, and machine learning are changing the foundations of robotics. By combining machine vision with learning capabilities, roboticists are uncovering a wide range of new innovations like vision-based drones, robotic harvesting, robotic sorting in recycling, and warehouse pick and place. These research advances represent an inflection point, where AI-powered applications are good enough to provide real value in environments where traditional robots could never succeed. This article is a conversation with Pieter Abbeel, co-founder and Chief Scientist of Radical Ventures portfolio company Covariant and a professor of electrical engineering and computer science at the University of California, Berkeley. Pieter is also the director of the Berkeley Robot Learning Lab and co-director of the Berkeley AI Research Lab, and host of The Robot Brains podcast.

2) New model improves accuracy of machine learning in COVID-19 diagnosis while preserving privacy (Nature)
An international team used federated learning to diagnose COVID-19 in patients while preserving privacy. Federated learning is a method for building AI models privately: for example, several hospitals or government agencies can jointly train and validate a model without any party ever viewing another’s raw data. While the model performed comparably to a panel of radiologists, the diagnosis requires CT scans, which will likely hinder widespread adoption. Still, the work contributes to a growing body of research applying federated learning to privacy-preserving AI in digital health.
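To make the idea concrete, here is a minimal sketch of federated averaging, the simplest federated learning scheme: each client fits a shared model on its own private data, and only the updated weights – never the data – are pooled. The one-parameter model and the synthetic "hospital" datasets are purely illustrative, not the study's actual setup.

```python
import random

def local_train(w, data, lr=0.1):
    """One pass of gradient descent on a client's private data.
    Model: a single-parameter linear predictor y ≈ w * x (illustrative)."""
    for x, y in data:
        grad = 2 * (w * x - y) * x  # derivative of squared error w.r.t. w
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets):
    """One round of federated averaging: each client trains locally,
    then only the resulting weights are shared and averaged."""
    client_weights = [local_train(global_w, d) for d in client_datasets]
    return sum(client_weights) / len(client_weights)

# Three "hospitals", each holding private data drawn from y = 3x.
random.seed(0)
clients = [[(x, 3 * x) for x in (random.random() for _ in range(20))]
           for _ in range(3)]

w = 0.0
for _ in range(50):
    w = federated_average(w, clients)
print(w)  # converges toward the true slope of 3
```

The server only ever sees weights, not patient records; in real deployments this is combined with secure aggregation and differential privacy for stronger guarantees.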

3) MLCommons releases open-source datasets for training speech recognition models (SiliconAngle) 
MLCommons, the non-profit consortium dedicated to creating open AI development tools and resources, announced the release of two datasets: the People’s Speech Dataset and the Multilingual Spoken Words Corpus (MSWC). The first is among the world’s most comprehensive English speech datasets licensed for academic and commercial usage, with tens of thousands of hours of recordings. The MSWC is one of the largest audio speech datasets with keywords in 50 languages. Standardized datasets such as these create opportunities for performance benchmarks. A well-known example is ImageNet, first created by AI computer-vision pioneer Fei-Fei Li. Competitions, such as the ImageNet Challenge, form around these datasets and have led to breakthroughs such as AlexNet – considered one of the most influential papers published in computer vision.

4) Modeling Uncertainty in AI (VentureBeat) 
Machine learning uses probabilities to make predictions that inform decisions taken by machines or humans, and any action based on those predictions carries some uncertainty. Probabilistic models represent an event or phenomenon using random variables and probability distributions, and return a distribution rather than a single answer – making explicit how much uncertainty there is in any given prediction. Meta, previously known as Facebook, has developed a probabilistic programming system called Bean Machine to represent and learn about the uncertainties in its AI models. The system is intended to discover unobserved properties of a model via automatic, “uncertainty-aware” learning algorithms. Compared with other machine learning approaches, probabilistic modeling offers benefits such as uncertainty estimation, expressivity, and interpretability.
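As a toy illustration of returning a distribution rather than a point estimate (written in plain Python, not Bean Machine's API): estimating the bias of a coin that landed heads 7 times in 10 flips. A grid approximation of the posterior yields both a best guess and a quantified spread around it.

```python
# Grid approximation of a posterior: how biased is a coin that showed
# 7 heads in 10 flips? The answer is a distribution, not a single number.
heads, flips = 7, 10
grid = [i / 100 for i in range(1, 100)]  # candidate bias values
likelihood = [p**heads * (1 - p) ** (flips - heads) for p in grid]
total = sum(likelihood)
posterior = [l / total for l in likelihood]  # uniform prior assumed

# Summarize the distribution: a point estimate plus its uncertainty.
mean = sum(p * w for p, w in zip(grid, posterior))
var = sum((p - mean) ** 2 * w for p, w in zip(grid, posterior))
print(f"estimate {mean:.2f} ± {var ** 0.5:.2f}")
```

The spread shrinks as more flips are observed – exactly the "uncertainty-aware" behaviour that systems like Bean Machine automate for far richer models.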

5) The future of Artificial Intelligence is self-organizing and self-assembling (Sebastian Risi)
In this article, AI researcher Sebastian Risi discusses the principles of self-assembly and their potential impact on future AI systems. Self-assembly is how ants join to create bridges or rafts to navigate difficult terrain, termites build nests several meters high without an externally imposed plan, and thousands of bees work together as an integrated whole to decide when to search for food or a new nest. Surprisingly, these incredible abilities result from relatively simple behavioural rules and a process of self-organization. In nature, groups with millions or even trillions of elements can self-assemble into complex forms based only on local interactions, displaying a collective type of intelligence. Exploiting ideas from self-organization has already allowed artificial neural networks to create images and robot morphologies.
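The core principle – global order emerging from purely local rules – can be shown in a few lines. In this illustrative sketch (not from the article), each cell on a ring repeatedly adopts the majority state of its three-cell neighbourhood; a random configuration organizes itself into stable blocks with no global coordinator.

```python
import random

def step(cells):
    """Each cell looks only at itself and its two neighbours and adopts
    the local majority state. No cell ever sees the global pattern."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

def boundaries(cells):
    """Count 0/1 boundaries around the ring -- a rough measure of disorder."""
    n = len(cells)
    return sum(cells[i] != cells[(i + 1) % n] for i in range(n))

random.seed(1)
cells = [random.randint(0, 1) for _ in range(60)]
before = boundaries(cells)
for _ in range(30):
    cells = step(cells)
after = boundaries(cells)
print(before, "->", after)  # disorder drops as local rules organize the ring
```

Scaling this idea up – richer local rules, learned rather than hand-written – is the research direction Risi describes for self-organizing AI systems.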

Radical Thinking: AI’s Next Wave

At Radical we have seen hundreds of startups looking to shape the future of AI technologies. From this experience, I developed a framework for categorizing different waves of AI adoption. Understanding the changes that have come before situates the current moment and helps companies plan for the future.

Read More »

© 2022 Radical Ventures Investments Inc.