Radical Reads: Demystifying Healthcare AI with Rahul Krishnan

Editor’s Note
This edition of Radical Reads is part of our "Demystifying Healthcare AI" series, in which Radical Ventures investor Sanjana Basu speaks with leading founders, researchers, and clinicians working at the intersection of AI and healthcare. At Radical Ventures, we have seen a new generation of startups emerge under leaders who combine health, machine learning, and commercial expertise. These versatile teams are successfully deploying healthcare innovations in the real world, and we want to share their stories and spur more innovation in the space. This week we are sharing an excerpt of Sanjana's conversation with Rahul Krishnan, an Assistant Professor at the University of Toronto, a Canada CIFAR AI Chair, and a Faculty Member at the Vector Institute for Artificial Intelligence.

Sanjana Basu (SB): Healthcare remains the top-funded sector in AI. What is it about machine learning that continues to drive innovation in this sector? 

Rahul Krishnan (RK): Beyond the opportunity to create value – healthcare spending comprises nearly 10% of GDP in Canada and 15% in the US – I think what motivates entrepreneurs to explore the application of AI to healthcare is that, in its best form, it is technology that can help people. AI, machine learning, and medicine have a long history. From educational tools that help train doctors, to software that improves the efficiency and throughput of hospitals, to clinical decision support tools, I think what's unique is the sheer breadth of places where machine learning can have an impact in healthcare.

SB: Tell us more about your research on machine learning methods to augment clinical decision-making. 

RK: Risk scores are numbers used to warn clinicians of impending adverse outcomes for patients. They're used across the healthcare system – from monitoring patients for sepsis in intensive care units to deciding who should get a new liver. At their core, they're often simple equations based on a handful of clinical biomarkers whose form is decided by expert consensus. But automatically building functions that are predictive of a future outcome (using data rather than expert knowledge) is one of the strengths of machine learning. My lab is collaborating with clinicians at the University Health Network in Toronto to design risk scores using retrospective patient data. Along the way we've come across some fascinating research questions, such as how to ensure that the resulting score is fair across patient subgroups and how to retain interpretability.
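To make the general idea concrete, a data-driven risk score can be as simple as a sparse logistic regression fit to retrospective outcomes, with discrimination checked separately for each patient subgroup. The sketch below is illustrative only, not the lab's actual method; the dataset, biomarker names, and subgroup column are hypothetical placeholders.

```python
# A minimal sketch of learning an interpretable risk score from retrospective data.
# The file, feature names, outcome, and subgroup column are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("retrospective_cohort.csv")          # hypothetical extract
features = ["creatinine", "bilirubin", "inr", "age"]  # example biomarkers
X, y, group = df[features], df["adverse_outcome"], df["subgroup"]

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, stratify=y, random_state=0
)

# An L1 penalty keeps the score sparse, so it stays readable like a hand-built score.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
risk = model.predict_proba(X_te)[:, 1]
print("overall AUROC:", roc_auc_score(y_te, risk))

# A first-pass fairness check: compare discrimination across patient subgroups.
for g in g_te.unique():
    mask = (g_te == g).to_numpy()
    if y_te[mask].nunique() > 1:
        print(g, "AUROC:", roc_auc_score(y_te[mask], risk[mask]))
```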

Looking ahead, let's say we succeed and deploy our machine learning-based risk score in clinical practice. Can we trust it? We know that the model will work on data similar to what it was trained on, but we also know the model can fail in unexpected ways when patterns in patient data change. We're developing mechanisms to identify when a machine learning model is susceptible to failure, to ensure its safe use in deployment.
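One simple way to flag this kind of failure risk (a sketch of the general idea, not the lab's specific mechanism) is to train a "domain classifier" that tries to tell training-era data apart from newly arriving data; if it separates them much better than chance, the input distribution has likely shifted and the risk score's predictions deserve extra scrutiny.

```python
# A minimal sketch of detecting distribution shift with a domain classifier.
# X_train is the data the risk model was fit on; X_new is recent deployment data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def shift_score(X_train: np.ndarray, X_new: np.ndarray) -> float:
    """AUROC of a classifier separating old from new data; ~0.5 means no detectable shift."""
    X = np.vstack([X_train, X_new])
    y = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_new))])
    clf = GradientBoostingClassifier(random_state=0)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

# Hypothetical usage: warn when deployment data drifts away from training data.
# if shift_score(X_train, X_new) > 0.75:
#     print("Warning: patient data looks different from training data; review the model.")
```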

In the future, we'd like to create software tools that empower oncologists to ask questions like: for my patient, how does their progression-free survival change depending on whether I decide to perform radiation therapy after relapse? To that end, we're building new machine learning methods for causal inference that leverage both observational data from the hospital and interventional data from trials. Our hope is that research such as this can inform decision support software for clinicians that simulates and predicts different hypothetical outcomes for patients.
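As a toy illustration of the kind of question such tools try to answer, the sketch below uses a generic "T-learner": fit separate outcome models for treated and untreated patients, then compare their predictions for one individual. This is not the lab's method, it ignores the confounding adjustment that makes this hard in practice, and the dataset and column names are hypothetical.

```python
# A minimal T-learner sketch for an individual-level "what if" comparison.
# Columns and the dataset are hypothetical; real use needs careful confounding control.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("oncology_cohort.csv")       # hypothetical observational extract
covariates = ["age", "stage", "prior_lines"]  # hypothetical covariates
treated = df["radiation_after_relapse"] == 1

# One outcome model per treatment arm.
m1 = RandomForestRegressor(random_state=0).fit(df.loc[treated, covariates],
                                               df.loc[treated, "pfs_months"])
m0 = RandomForestRegressor(random_state=0).fit(df.loc[~treated, covariates],
                                               df.loc[~treated, "pfs_months"])

patient = df[covariates].iloc[[0]]            # one patient's covariates
effect = m1.predict(patient)[0] - m0.predict(patient)[0]
print(f"Estimated change in progression-free survival with radiation: {effect:.1f} months")
```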

SB: What are some of the real-world challenges in applying machine learning to healthcare? Can you talk about some of the work being done to overcome these challenges? 

RK: Learning useful predictive models from healthcare data can be challenging when using data from a single hospital. The data reflects the prevailing operational norms of the hospital system and the socioeconomic factors that modulate the access to and quality of care that people receive. In addition, the vast majority of people do not go to the doctor unless they are ill – so one only sees vignettes of a patient's timeline through the course of the disease: the moments when they come into the hospital. These facts have important statistical implications for the clinical data that we use to train models: we are often working with data that is missing or subject to selection bias. Practitioners often require a deep understanding of the processes that generate the data in order to get value out of it using machine learning.

There is a dearth of good software and hardware infrastructure to pull data from clinical silos, learn models and deploy them into sandbox environments. A key challenge in deploying these models in practice is doing so in a manner that minimizes friction with care delivery. It doesn’t matter how technically sophisticated the machine learning model is; if it distracts clinicians more than it helps them, they will not be inclined to use it. Put differently, machine learning is only one piece of the puzzle. A good engineering backend, an assistive human-computer interface and a tightly integrated model update loop are all important pieces of deploying this technology in practice. 

SB: You are an advisor to several AI startups. What makes a healthcare AI start-up successful? Do you have any advice for healthcare AI researchers or teams looking to commercialize their work?

RK: Successful startups in this space need patience, connections, and a deep understanding of how data can bring value through machine learning in the medical ecosystem. 

The advice that I would have for healthcare AI researchers or teams is to create meaningful partnerships with clinicians across different geographic regions who are as excited about the technology as you are. The process isn't easy; successful entrepreneurs need to grapple with the technical challenges, ensure that their solutions address a real clinical pain point, and find the right financial and clinical partners who will work with them through the regulatory process.

Finally, the healthcare ecosystem is vast and opaque to the general public. It is worth investing in understanding the incentives that drive decision-making in the system, since that understanding can be invaluable when delivering the right pitch to investors.

SB: Which applications of machine learning in health are you currently most excited about?  

RK: An area of research that I'm very excited about is leveraging large language models (LLMs) in healthcare. LLMs are models trained to predict the next word from the previous few; although such models have existed for decades, what's changed in recent years is the discovery that scaling them up to billions of parameters gives them a remarkable ability to respond to general-purpose queries. Although their applications to problems in healthcare are just beginning, I think this will be one of the areas where we see a lot of exciting development in the next few years.
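For readers unfamiliar with the underlying objective, the sketch below shows next-word prediction in its simplest possible form: a bigram model that counts which word tends to follow which. Modern LLMs replace the counting with billions of learned parameters, but the training signal is the same. The tiny corpus here is purely illustrative.

```python
# A minimal sketch of next-word prediction: a bigram model built from word counts.
# Modern LLMs learn the same conditional distribution with neural networks at vast scale.
from collections import Counter, defaultdict

corpus = "the patient was admitted . the patient was discharged".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most likely next word observed after `word` in the corpus."""
    return counts[word].most_common(1)[0][0] if counts[word] else "<unknown>"

print(predict_next("patient"))  # -> "was"
```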

That said, I think progress in this vein will need to be measured, benchmarked and carefully tracked. We need to ensure that the predictions of such models have equitable quality across patient subgroups to ensure that human biases in clinical data do not adversely affect the model’s output and, consequently, the quality of care provided. 


5 Noteworthy AI and Deep Tech Articles: week of July 10, 2022

1) A ‘digital twin’ of Earth will soon monitor the planet’s health from space (Entrepreneur)

A virtual replica of the Earth, including its chemical makeup, will enable scientists to study our planet in entirely new ways. Radical Ventures portfolio company Pixxel achieves this with hyperspectral imaging from space. The technique can help address pressing issues that are invisible to today's satellites. As Awais Ahmed, who founded Pixxel, puts it: "With hyperspectral imaging, we capture light in multiple narrow wavelengths, which allows us to dabble into the chemistry of the image that we're capturing. So instead of telling you whether a piece of the farm is green or not, we can identify what type of crop grows there, the chlorophyll content in the leaves, the moisture in the soil, the stage of growth, etc." Pixxel's first satellite launched via SpaceX in April. The goal is a constellation of over 30 Earth observation micro-satellites to detect, monitor, and predict global phenomena in real time.
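To make the quote concrete, the sketch below computes one classic spectral index, NDVI, from red and near-infrared reflectance. A hyperspectral sensor records many narrow bands rather than these two broad ones, which is what enables finer distinctions such as crop type or chlorophyll content. The arrays here are synthetic stand-ins, not Pixxel data.

```python
# A minimal sketch: compute NDVI (a standard vegetation index) per pixel from
# red and near-infrared reflectance. Hyperspectral sensors capture many narrow
# bands, enabling richer indices than this two-band example. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.30, size=(4, 4))   # synthetic red-band reflectance
nir = rng.uniform(0.20, 0.60, size=(4, 4))   # synthetic near-infrared reflectance

ndvi = (nir - red) / (nir + red + 1e-9)      # values near 1 suggest dense, healthy vegetation
print(ndvi.round(2))
```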

2) These simple changes can make AI research much more energy efficient (MIT Technology Review)

Several factors impact the carbon emitted by neural network computations, including the location of the server used for training, the energy grid it plugs into, the size of the dataset, and the hardware where the training takes place. Tools are available to help individuals calculate their machine learning carbon impact, but this is difficult when it comes to cloud providers. A collaboration across industry and academic institutions developed a tool to measure the electricity usage of any machine-learning program that runs on Azure, Microsoft’s cloud service. The study found that emissions can be significantly reduced if researchers use servers in specific geographic locations and at certain times of the day. Settings to minimize environmental impact, for example, starting training when more renewable electricity is available on the grid, are readily available through major cloud providers.
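The arithmetic behind these savings is simple: emissions are roughly the energy consumed multiplied by the grid's carbon intensity at that place and time, so moving a job to a cleaner region or a cleaner hour lowers the product. The sketch below illustrates only this back-of-the-envelope calculation, not the Azure tool described in the article; the intensity numbers are illustrative placeholders, not measured values.

```python
# A minimal sketch of estimating training emissions: energy (kWh) x grid carbon
# intensity (kg CO2 per kWh). The intensity values below are illustrative only.
GRID_INTENSITY = {
    ("hydro_region", "off_peak"): 0.01,
    ("hydro_region", "peak"): 0.03,
    ("coal_heavy_region", "off_peak"): 0.45,
    ("coal_heavy_region", "peak"): 0.60,
}

def training_emissions_kg(energy_kwh: float, region: str, period: str) -> float:
    """Estimated kg of CO2 emitted by a training run consuming `energy_kwh` kilowatt-hours."""
    return energy_kwh * GRID_INTENSITY[(region, period)]

# The same 1,000 kWh job emits very different amounts depending on where and when it runs.
for region, period in GRID_INTENSITY:
    print(region, period, round(training_emissions_kg(1000, region, period), 1), "kg CO2")
```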

3) Why Eric Schmidt believes bioscience will change the world (Financial Times – subscription required)

Growing our consumer products might be the next big thing. Former Google CEO Eric Schmidt shares his thoughts on the emerging "bioeconomy" – the use of biological processes to make the things that we consume and manufacture. This includes plastics that naturally degrade and, potentially, growing our computers. Discoveries in the underlying science have stoked new excitement in molecular biology. The field is further fuelled by advances in AI that enable new techniques and the ability to grow new things. "This stuff has been kicked up a notch because scientists have now realized that they can not only read the code [of DNA] but edit it too." Eric Schmidt discussed re-imagining the economy with Radical Ventures' Co-Founder Jordan Jacobs in an episode of our Radical Talks podcast.

4) Words matter: AI can predict salaries based on the text of online job postings (VentureBeat)

Sarah Bana, an economist at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), used AI to predict the salaries of job postings. The AI model was trained on the text of 800,000 job postings and their salaries. When tested on 200,000 separate job listings, it correctly predicted the salary 87% of the time. In follow-up work, Bana will attempt to characterize the contribution of various words to the salary prediction. “Amid this workplace turmoil, workers, as well as employers and policymakers, could benefit from understanding which job characteristics lead to higher wages and mobility.” The research could also help improve job application transparency and inform workforce training and education approaches.
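As a hedged sketch of the general approach (not Bana's actual model), one can turn each posting's text into features and regress salary on them; inspecting the learned weights then hints at which words push predicted pay up or down. The file and column names below are hypothetical.

```python
# A minimal sketch of predicting salary from job-posting text with TF-IDF + ridge
# regression. The dataset and column names are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("job_postings.csv")           # hypothetical: columns "text", "salary"
X_tr, X_te, y_tr, y_te = train_test_split(df["text"], df["salary"], random_state=0)

model = make_pipeline(TfidfVectorizer(max_features=20000, ngram_range=(1, 2)), Ridge())
model.fit(X_tr, y_tr)
print("R^2 on held-out postings:", model.score(X_te, y_te))

# Words with the largest positive weights are associated with higher predicted salaries.
vec, reg = model.named_steps["tfidfvectorizer"], model.named_steps["ridge"]
top = reg.coef_.argsort()[-10:]
print([vec.get_feature_names_out()[i] for i in top])
```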

5) Japan deploys AI to detect rip currents as beach season heats up (The Guardian)

A beach in central Japan is using an AI system to identify dangerous rip currents and alert lifeguards. The AI deployed at Yuigahama beach, just south of Tokyo, comes from Chuo University and the Japan Lifesaving Association. Lifeguards are alerted on a smart watch when anyone swims in the vicinity of rip currents identified by web cameras mounted on poles. 

5 AI Predictions For The Year 2030

Forecasting the near-term future presents its own set of challenges, but extending our foresight to a five-year horizon amplifies these difficulties. This week, we feature excerpts from Radical Partner Rob Toews’ latest article in Forbes…


© 2023 Radical Ventures Investments Inc.