Radical Reads

Can you hear me now?

By Daniel Mulet, Investor


Photo credit: Jason Rosewell, Unsplash

Last week Microsoft announced the $16Bn acquisition of Nuance Communications, a company whose wide range of products includes AI speech technology. This will be Microsoft’s second-largest acquisition after its $26.2Bn purchase of LinkedIn in 2016, and a complement to its Microsoft Cloud for Healthcare offering.

While there has been much analysis of the strategy behind this deal, we believe the acquisition signals a significant opportunity for Natural Language Understanding (NLU) across all industries. Today, most use cases for advanced language modelling centre on text and content. We expect industry applications to explode over the next five years, alongside a growing deployment in voice systems that enable more human-like interaction.

Frontier language models (including those by one of our portfolio companies currently in stealth) will revolutionize machine understanding of language. We are excited to see how they create better human-computer interactions everywhere.

AI News This Week

  • A new era of innovation: Moore’s Law is not dead and AI is ready to explode   (SiliconANGLE)

    It is true that chip densities are no longer doubling every two years. But the focus on density becomes irrelevant as processing power accelerates at unprecedented rates. Cheap processing makes networks and storage the expensive bottlenecks, pushing computation to the edge, where the historical outcome of Moore’s Law is actually accelerating. Untether AI, an ML semiconductor startup and Radical portfolio company, has developed a groundbreaking technology that accelerates AI inference workloads using at-memory computing. Its commercially available chip delivers over 100 times the throughput of CPUs and up to 9 times that of expensive cloud GPU instances. Combining advanced software with this massive increase in processing power will allow machine intelligence to take on new problems and transform every industry.

  • AI teases out socioeconomic factors as personalized predictors of health   (AI in Healthcare)

    Where we live, learn, work, and play affects our well-being and a wide range of health outcomes. A group of researchers has used AI to examine these social determinants and pinpoint the factors that best predict our future well-being. In the study, perception of purpose in the workplace and financial anxiety emerged as significant factors. A better understanding of socioeconomic variables will help predict physical well-being, particularly for vulnerable groups.

  • The brain ‘rotates’ memories to save them from new sensations   (Quanta Magazine)

    This neuroscience breakthrough in human memory could have substantial consequences for how information is encoded and for the design of AI architectures. A pair of Princeton University researchers found that the brain essentially “rotates” sensory information to encode it as a memory, allowing current and past stimuli to coexist without mutual interference. In short, our ability to make sense of our surroundings, to learn, to act, and to think depends on constant, nimble interactions between perception and memory. The rotation keeps memory and perception distinct and prevents important contextual information from being misinterpreted. The researchers believe this “rotation” of information is relevant to AI systems required to multitask and may influence future AI frameworks.
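The core idea can be illustrated with a toy sketch (a hypothetical illustration, not the study’s actual model): storing a representation along an axis orthogonal to the “perception” axis means new stimuli cannot corrupt it.

```python
import numpy as np

def rotate_90(v):
    """Rotate a 2-D representation by 90 degrees into the orthogonal axis."""
    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
    return R @ v

# Current stimulus lives on the "perception" axis.
sensory = np.array([1.0, 0.0])

# The stored copy is rotated onto the orthogonal "memory" axis.
memory = rotate_90(sensory)

# A new stimulus on the perception axis has zero overlap with the memory,
# so reading one does not interfere with the other.
new_stimulus = np.array([0.7, 0.0])
print(np.dot(memory, new_stimulus))  # -> 0.0, no interference
```

The geometry is the point: orthogonal subspaces let the same population of units carry perception and memory simultaneously.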

  • Forget Boston Dynamics. This robot taught itself to walk with AI   (MIT Tech Review)

    A pair of robot legs called Cassie has been taught to walk using reinforcement learning. The Berkeley team enabled the robot to learn a range of movements from scratch, including walking in a crouch and carrying an unexpected load. While viral videos of robots performing human-like movements are now familiar, the reality is that roboticists often rely on hard-coded, pre-choreographed routines. Reinforcement learning instead allows AI systems to learn through trial and error, dispensing with manual fine-tuning by engineers. As the technology develops, it will help robots navigate new and complex environments without human support.
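Trial-and-error learning can be shown in miniature. The sketch below is a minimal tabular Q-learner on a toy 1-D walking task, not the Berkeley team’s actual method (which trains a neural-network policy in simulation): the agent discovers through repeated episodes that stepping right reaches the goal, with no hand-coded routine.

```python
import random

N_STATES = 5          # positions 0..4; the goal is position 4
ACTIONS = [-1, 1]     # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

def greedy(s):
    """Pick the best-valued action, breaking ties at random."""
    best = max(q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(s, a)] == best])

random.seed(0)
for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what we know, occasionally explore.
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update from the observed outcome of this trial.
        best_next = max(q[(s2, x)] for x in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, every state prefers stepping right.
policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)
```

Scaling the same loop from a five-cell line to a simulated bipedal robot is, at heart, what makes Cassie’s result possible.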

  • Detecting Unknown Anomalies: Artificial intelligence for space   (SciTech Daily)

    Discoveries in space depend on anomaly detection. But defining what is “normal” in the observable universe is challenging. Zooniverse has gamified anomaly detection, encouraging people to sift through images in their spare time and combining human and machine classifications to find anomalies. The result is a training set for an AI system that will improve future detection and may one day power exploratory satellites.
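The underlying statistical idea is simple to sketch: model what “normal” looks like, then flag observations that deviate strongly from it. The example below uses a z-score over hypothetical brightness readings (illustrative numbers only, not Zooniverse’s actual pipeline).

```python
import statistics

def find_anomalies(values, threshold=2.5):
    """Return the indices of values whose z-score exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Mostly steady readings, with one outlier at index 6.
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 42.0, 10.0, 9.9, 10.1]
print(find_anomalies(readings))  # -> [6], the 42.0 reading stands out
```

In practice the human classifications gathered by Zooniverse replace this crude mean-and-deviation model of “normal” with a learned one, but the flag-what-deviates logic is the same.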

Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).