By The Editorial Team
Over 700 AI researchers from around the globe have registered for Radical Ventures’ AI Founders Master Class, featuring conversations with AI luminaries about what it takes to make the leap from researcher to entrepreneur. This week, NLP researcher Richard Socher, CEO and Co-founder of You.com, founder of MetaMind, and former Chief Scientist at Salesforce, shared his insights on building a company and balancing research and commercialization. He spoke with Radical Ventures Co-Founder and Partner Tomi Poutanen. The following excerpt from their discussion is edited for length and clarity.
Tomi Poutanen (TP): You made a decision early on in life to go from being a researcher at Stanford to become an entrepreneur. That’s a big decision. A lot of researchers don’t make that leap. Can you tell us about that journey and what got you comfortable with entrepreneurial risk?
Richard Socher (RS): One of my first principles is impact – how can I maximize my impact in the world given the skills that I have, that I want to learn, my surroundings, and in some cases, the luck that I’ve had in life to get to those places? When I started my Ph.D. in 2008, I thought that the world needed much better AI systems. The path to maximizing impact was to do research and a Ph.D.
Now, we’ve identified how to make a ton of progress on a lot of real AI systems by scaling large neural networks. Towards the end of my Ph.D., I felt that I could have more impact by scaling those deep learning ideas and bringing them into the real world, rather than working on more algorithmic novelty. You can have a ton of impact by bringing electricity into different industries without innovating on electricity.
TP: Can you tell us more about the balance between deep learning research, novel frameworks, and what it takes to productize that research? Is this an engineering problem or is this a deep learning research problem when it comes to commercializing AI?
RS: It’s a really interesting and subtle question because you could argue that for a lot of problems out in the world, for a long time, the right answer is often just to collect a specific dataset, train a large supervised model for it, and then the rest is an engineering problem.
That being said, we’ve now seen that large unsupervised pre-training models, which some people call foundational models or large language models, enable you to build quite robust AI systems. Previously, that was a research problem. And now, with some of the research resolved, there are a ton of applications to be built.
That’s in part what led to You.com having a natural language processing system that is almost as powerful as Google. Thanks to all of the progress we’ve seen in deep learning and NLP, we can build this extremely powerful search engine without needing tons of data. This is one of the most exciting things – we’re at a point where the research has been sufficiently figured out and now you can start to apply it.
If you are an AI researcher interested in entrepreneurship, the AI Founders Master Class offers participants an opportunity to join conversations with AI pioneers, along with access to practical seminars and resources designed to support entrepreneurs looking to commercialize their research.
5 Noteworthy AI and Deep Tech Articles: week of October 30, 2022
1) Artificial intelligence for strengthening healthcare systems in low and middle-income countries: A systematic scoping review (Nature)
A review of global research on AI healthcare applications in low and middle-income countries suggests AI has the potential to substantially change how medical care and public health programmes are implemented in the near future, especially in health systems where access to care has been challenging.
2) AI predicts what chemicals will smell like to a human (Scientific American)
The average human nose contains about 350 types of olfactory receptors, which can bind to a potentially enormous number of airborne molecules. Researchers have trained a type of AI known as a graph neural network to predict what a compound will smell like to a person — rose, medicinal, earthy, and so on — based on the chemical features of odour molecules.
3) Using AI in cancer genome research (Forbes)
Researchers from the Institute of Medical Science at the University of Tokyo (IMSUT) and Fujitsu Research are using AI-powered language processing technology to create a knowledge database of 860,000 medical papers. The research team says this technology reduces the time to develop genetic mutation treatment plans by more than 50%.
4) A good chess cheater might never be caught (The Atlantic – subscription may be required)
The line between human and computer play is hard to find. Ever since Hans Niemann beat World Champion Magnus Carlsen, Niemann has been called a cheat. Officials do not know whether Niemann cheated. More importantly, they do not know how to determine whether he did, or whether such a determination could ever be made. In an era when chess engines entirely dominate the game — when they have so thoroughly shaped human players’ thinking, strategy, and preparation that chess has, in certain ways, come to resemble poker — it seems impossible to disentangle creativity from computing.
5) Music, science and healing intersect in an AI Opera (New York Times)
K Allado-McDowell, who leads the Artists and Machine Intelligence initiative at Google, is staging an opera that brings together natural language processing, image generation and neuroscience.