Radical Reads

NLP’s hardest problems – fireside chat with Richard Socher and Rob Toews


Radical Ventures Partner Rob Toews hosted a talk with Richard Socher at the RE•WORK Deep Learning Hybrid Summit in San Francisco. Richard was previously the chief scientist at Salesforce and an adjunct professor at Stanford’s computer science department. He obtained his Ph.D. working on deep learning with Chris Manning and Andrew Ng. Richard founded MetaMind, which Salesforce acquired in 2016. Propelled by the pace of technological advancement, Richard is also the founder of You.com, an ad-free, privacy-preserving search engine.

In the interview, Richard discusses the challenges of starting a company that disrupts big players, how he thinks about the right balance between research and building a product, and big picture trends and challenges ahead in natural language processing (NLP).

We have included an abbreviated version of the interview below.  

 

Rob Toews (RT): How do you think about building a company to challenge incumbents and win over users? What is the viability of taking on such a large existing incumbent?

Richard Socher (RS): It’s really hard. There are many ways to answer this question, but from a technical perspective this would have been impossible five to ten years ago; NLP simply wasn’t far enough along. We can now rank hundreds of applications, and rank the content within most of them. Without unsupervised learning, large language models, word vectors, contextual vectors, transfer learning, and supervised fine-tuning, it would have been technically impossible for a small team to rank arbitrary queries on anything someone might want. Today, a small team can build a very general-purpose technology quickly.

RT: Given your previous experience with startups, how do you think about the right balance between research, productization, and operationalization in a startup?

RS: It’s important to have a research mindset. While it does not make sense for such a small company to focus on publishing papers at this stage, there are applied research problems that we are actively working on, such as learning to parse natural language into the structured language of APIs. When a user searches for “the best fast Chinese restaurant near me,” a traditional backend will return, “there is no restaurant called ‘Near Me.’” So the mindset for such an AI-heavy company matters a lot.
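To make the parsing problem concrete, here is a minimal Python sketch of how such a query might be mapped into structured API parameters rather than matched as a literal string. The QueryIntent fields and the rule-based parse_query function are hypothetical illustrations only; a production system like the one Richard describes would rely on learned models rather than hand-written rules.

```python
# Hypothetical sketch: turning a free-text search into structured API
# parameters, so that "near me" becomes a geolocation flag rather than a
# restaurant name. Not You.com's actual implementation.
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class QueryIntent:
    category: str                  # e.g. "restaurant"
    cuisine: Optional[str] = None
    sort_by: Optional[str] = None  # "best" implies sorting by rating
    speed: Optional[str] = None    # "fast" implies quick service
    use_geolocation: bool = False  # "near me" resolved via user location

def parse_query(text: str) -> QueryIntent:
    """Toy rule-based parser for illustration; real systems learn this mapping."""
    t = text.lower()
    intent = QueryIntent(category="restaurant")
    if "near me" in t:
        intent.use_geolocation = True
    if "best" in t:
        intent.sort_by = "rating"
    if "fast" in t:
        intent.speed = "fast"
    match = re.search(r"\b(chinese|italian|mexican|thai)\b", t)
    if match:
        intent.cuisine = match.group(1)
    return intent

print(parse_query("the best fast Chinese restaurant near me"))
# QueryIntent(category='restaurant', cuisine='chinese', sort_by='rating',
#             speed='fast', use_geolocation=True)
```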

In the 2010s, so much foundational work needed to be done, and we focused much more on how to build very accurate AI systems. Those ideas are now well established. In the last couple of years, the work has been about applying those ideas and scaling them up, for example with bigger training datasets. But fundamentally, research-wise, those ideas have not changed that much. I think there is more interesting work to be done, and more positive impact to be had for humanity, by thinking through how we apply all of those research ideas. Of course there are some exceptions, but the ideas at the fundamental AI or algorithmic level have been similar for a while now, and the breakthroughs are mostly scaling and engineering breakthroughs.

RT: Zooming out beyond the increased size of models, are there any other big-picture trends or developments in NLP over the next 3-5 years that you are focused on, or that you think will be particularly influential?

RS: In NLP, one of the most interesting and challenging tasks right now is summarization. By contrast, translation has tons of training data, so we see it working better and better. Summarization is incredibly hard because there is no good training data at a massive scale. Not only that, but good summarization has to be very personalized. For example, if you ask, “What is the BERT model?” the answer, or summary, could be extremely short and highly technical if you already know a lot about NLP. But if you don’t know what a word vector or a neural network is, the summary may have to explain those concepts, and it could end up longer and written quite differently from the original content. Aligning summaries to the user is something you simply couldn’t do previously, because it is a huge data problem. You need training data to make AI systems better while also preserving privacy.
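As a purely illustrative sketch of the personalization point, the snippet below builds different summarization requests for the same source text depending on the reader’s assumed background. The prompt wording, the expertise levels, and the build_summary_prompt helper are assumptions for illustration, not a description of any production system.

```python
# Hypothetical sketch: the same source text yields different summarization
# instructions depending on the reader's background, mirroring the idea that
# a good summary of "What is the BERT model?" depends on who is asking.
def build_summary_prompt(source_text: str, reader_expertise: str) -> str:
    styles = {
        "expert": "Summarize in two sentences using standard NLP terminology.",
        "novice": (
            "Summarize for a reader who does not know what a word vector or "
            "a neural network is; define those terms first, even if the "
            "summary ends up longer than the usual one."
        ),
    }
    instruction = styles.get(reader_expertise, styles["novice"])
    return f"{instruction}\n\nText:\n{source_text}"

bert_intro = "BERT is a bidirectional Transformer pretrained with masked language modeling..."
print(build_summary_prompt(bert_intro, "expert"))
print(build_summary_prompt(bert_intro, "novice"))
```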

Watch the full interview with Richard Socher.

AI News This Week

  • Top 10 tech trends for the next 10 years  (World Economic Forum)

    Over the next ten years, we can expect to experience more technological progress than we did in the preceding 100 years put together. There are 10 tech trends expected to dominate this shifting landscape. Our portfolio companies represent six of these significant trends: automation (RPA), cloud and edge computing, quantum computing, applied AI, drug discovery, and the future of clean technologies (our investment in this space will be announced soon).

  • First wholly AI-developed drug enters Phase 1 trials  (Forbes)

    For the first time, a drug developed wholly by AI has been tested on humans at the dosage level required for therapeutic effect in Phase 1 trials. The trials are a step towards realizing AI-driven drug discovery and development. The drug targets Idiopathic Pulmonary Fibrosis (IPF), a condition of unknown cause that stiffens lung tissue in older people and affects five million people each year. The milestone comes one decade after “the Big Bang in AI,” when Geoffrey Hinton and his students famously achieved a breakthrough in image recognition using a convolutional neural network (CNN). Machine learning, especially deep learning, is responsible for the innovations of the past decade, including interactive maps, near-omniscient search engines, and machines that can write, draw, and create music. This is another milestone for AI as we head into its next decade.

  • Stanford University uses AI computing to cut DNA sequencing down to five hours  (ZDNet)

    A Stanford University-led research team has set a new Guinness World Record for the fastest DNA sequencing technique, using AI computing to accelerate the workflow. The study, published in The New England Journal of Medicine, sped up every step of the genome-sequencing workflow with new technology. The researchers believe that reducing sequencing time will allow clinicians to diagnose patients and provide tailored treatments faster. They are now looking to shorten the time further, believing it could be halved again.

  • Big tech is spending billions on AI research  (Wall Street Journal – subscription required)

    If the past is any guide, the immense investment in AI research will spawn new products across medicine, new materials, climate, and other areas. “The amounts of money being spent are massive. At Alphabet, R&D expenses rose to $31.562 billion last year from $27.573 billion in 2020; Meta spent $24.655 billion on R&D last year, rising from $18.447 billion the previous year, company filings show.” Canada is home to the AI labs of some of the largest tech companies, including Samsung, Meta, Alphabet, and Nvidia. These multinational AI research labs, alongside Canadian research institutes, play an essential role in attracting and retaining Canada’s outsized tech talent pool.

  • Drones and AI help find pebble-sized meteorite that landed in 2021  (New Scientist)

    Locating meteorites on Earth’s surface is like looking for a needle in a haystack, but AI analysis of drone-captured images pinpointed this tiny space rock in just four days. Usually, when a meteorite falls to Earth, a team of five or more people must physically search the ground around the fall zone to find it. While that process can be successful, it is costly, time-consuming, and requires prior knowledge of where the rock may have fallen. In a similar vein, we recently noted an AI system that predicted the locations of frozen meteorites in Antarctica.

Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).