Unlearn AI, a Radical Ventures portfolio company, is at the forefront of developing technologies aimed at streamlining clinical research and drug trials, on which pharmaceutical companies spend roughly $100 billion annually. By creating patient-specific digital twins (computational models of a patient's biological systems) prior to randomization into the experimental or control group, the company predicts each participant's health outcomes under placebo conditions, regardless of the treatment actually received. To achieve this, Unlearn AI has developed disease-specific Digital Twin Generators (DTGs), trained on extensive datasets. Recently, Unlearn AI launched new DTGs for psychiatric disorders, including schizophrenia and major depressive disorder. These new tools promise smaller control groups and quicker access to treatment. This week, we share a summary of the release and an overview of this cutting-edge technology.
In our continuous effort to advance AI in medicine, we at Unlearn AI have expanded our Digital Twin Generators (DTGs) into the realm of psychiatric disorders with the release of two groundbreaking models: one for schizophrenia and one for major depressive disorder (MDD). This marks a significant step forward in addressing the pressing need to accelerate clinical research in these complex indications.
Schizophrenia and MDD are two of the most debilitating psychiatric conditions affecting millions worldwide. Schizophrenia, characterized by distortions in thinking, perception, emotions, language, and behavior, can lead to significant social and occupational dysfunction. MDD, marked by persistent sadness, loss of interest or pleasure, and various physical and cognitive symptoms, is a leading cause of disability globally. The complexity and heterogeneity of these disorders pose substantial challenges in treatment, necessitating personalized and precise therapeutic approaches.
Schizophrenia DTG (SCZ DTG 1.0)
Our Schizophrenia DTG is designed for use in acute trials targeting Positive and Negative Syndrome Scale (PANSS) Total Score or Clinical Global Impression (CGI) Severity endpoints from one to three months post-randomization or post-treatment initiation. The model supports trials in which standard of care involves first- or second-generation antipsychotics, and can be used in adjunctive therapy trials or in active-comparator trials against other antipsychotics. The model was trained on data from roughly two thousand patients. Evaluation on held-out data indicates that the largest expected variance reduction is realized within 6 weeks of an acute event.
Major Depressive Disorder DTG (MDD DTG 1.0)
Our MDD DTG targets the Hamilton Depression Rating Scale (HAM-D) Total Score as the primary endpoint in acute trials from one to three months after treatment initiation. The model supports trials in which participants receive standard of care involving first-line antidepressants such as selective serotonin reuptake inhibitors (SSRIs) and serotonin and norepinephrine reuptake inhibitors (SNRIs). Our training set includes five trials with over five thousand patients. Evaluation on held-out data indicates the model delivers substantial variance reduction across all common MDD trial durations.
Looking Forward
The release of these DTGs for schizophrenia and MDD represents a pivotal advancement in our quest to harness AI for precision medicine. Given the common challenges in recruiting patients for psychiatric disorder trials, our DTGs offer a more comprehensive understanding of individual patient responses, supporting more efficient clinical trials. These DTGs have the potential to transform the treatment landscape for psychiatric disorders.
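The statistical intuition behind smaller control groups can be illustrated with covariate adjustment: if a prognostic model (here, a stand-in for a digital twin) predicts each patient's outcome under control conditions well, including that prediction as a covariate in the trial analysis shrinks the standard error of the estimated treatment effect, so fewer control patients are needed for the same power. Below is a minimal, purely illustrative sketch on simulated data; the linear model and all numbers are assumptions for demonstration, not Unlearn's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Simulated "digital twin" prediction: a prognostic score that
# correlates with each patient's outcome under control conditions.
prognosis = rng.normal(0.0, 1.0, n)

# Randomize patients 1:1 to treatment (1) or control (0).
treat = rng.integers(0, 2, n)

true_effect = -2.0  # hypothetical improvement on the endpoint
outcome = 3.0 * prognosis + true_effect * treat + rng.normal(0.0, 1.0, n)

def ols_se(X, y):
    """Return (coefficients, standard errors) for OLS of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

ones = np.ones(n)

# Unadjusted analysis: outcome ~ treatment
b0, se0 = ols_se(np.column_stack([ones, treat]), outcome)

# Adjusted analysis: outcome ~ treatment + prognostic score
b1, se1 = ols_se(np.column_stack([ones, treat, prognosis]), outcome)

print(f"unadjusted effect {b0[1]:+.2f} (SE {se0[1]:.2f})")
print(f"adjusted effect   {b1[1]:+.2f} (SE {se1[1]:.2f})")
```

Because the prognostic score explains most of the outcome variance, the adjusted analysis recovers the same treatment effect with a much smaller standard error, which is the mechanism that lets trials run with fewer control patients.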
AI News This Week
- The EU’s AI Act is now in force (TechCrunch)
The EU AI Act’s risk-based framework for AI applications came into effect on August 1, 2024, starting the clock on a series of staggered compliance deadlines for different types of AI developers and applications; full compliance is required by mid-2026. AI systems are categorized as low/no-risk, limited-risk, or high-risk, each with specific obligations. High-risk systems, such as biometrics and medical AI, must meet stringent requirements, including assessments and registration. Limited-risk tools, such as chatbots, must ensure transparency. Non-compliance can result in fines of up to 7% of global turnover. Developers of general-purpose AI face transparency obligations focused on mitigating systemic risks. Detailed compliance standards are still being developed and are expected by April 2025. There is widespread concern in the AI founder community that such frameworks will stifle innovation and lead the best founders to establish their companies outside the EU.
- Long after surviving the Nazis, they use AI to remind the world (New York Times)
The Museum of Jewish Heritage is utilizing AI to develop an installation where visitors can engage with the firsthand testimonies of Holocaust survivors. As the number of survivors dwindles, this initiative preserves their stories by allowing visitors to interact with video clips of survivors answering questions, providing a comprehensive view of their diverse experiences. In collaboration with the USC Shoah Foundation, the project seeks to combat Holocaust denial by presenting authentic accounts, ensuring that individual stories continue to educate future generations.
- ‘Robot dentist’ completes first dental procedure (Dentistry UK)
An AI-driven robotics system, called the “robot dentist,” has successfully performed one of the first fully automated dental procedures on a human. The technology combines AI-driven 3D imaging software with a robotic arm to execute restorative procedures. Capable of completing tasks such as crown placements in approximately 15 minutes, the system has the potential to transform dental care by improving precision and efficiency.
- Chatbots aren't sentient, but you should be nice to them anyway (Scientific American)
The benefits of being polite to AI may include prompting better chatbot replies—and nurturing our humanity. A Google DeepMind paper found that supportive prompts, such as “Take a deep breath and work on this problem step-by-step,” boost an LLM’s ability to solve grade school math problems that require basic reasoning skills. Another preprint found that polite prompts can produce higher-quality responses from an LLM, but there is a point of diminishing returns: excessive flattery can cause a model’s performance to deteriorate. The findings underline that the behavior of LLMs reflects human social etiquette.
- Research: The future of large language model pre-training is federated (Cambridge University/Flower Labs)
Researchers demonstrated the feasibility of training a 1.3-billion-parameter language model using affordable, distributed training techniques such as federated learning. This method, which distributes computation across geographically dispersed servers, allows for cost-effective training without relying on centralized data centers. While smaller models show more sensitivity to the distributed process, larger models like the 1.3B-parameter model performed comparably to those trained centrally. This development suggests a future where AI model training can be democratized, giving individuals and collectives more control and customization capabilities. However, while these methods are promising for smaller models, their effectiveness for training the largest models remains uncertain.
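The federated averaging idea underlying this work can be sketched at toy scale: each client trains locally on its own private data shard, and a central server averages only the resulting model weights, so raw data never leaves the clients. Below is a minimal illustration using linear regression as a stand-in for a language model; the shard sizes, learning rate, and round counts are assumptions for demonstration, and real federated LLM pre-training involves far more machinery.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: linear regression with 3 weights. Each client holds a
# private data shard; only weights are exchanged, never the data.
true_w = np.array([2.0, -1.0, 0.5])

def make_shard(n):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

shards = [make_shard(200) for _ in range(4)]  # 4 simulated clients

def local_update(w, X, y, lr=0.05, steps=20):
    """Run a few gradient-descent steps on one client's shard."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w = w - lr * grad
    return w

w = np.zeros(3)  # global model held by the server
for _ in range(10):  # communication rounds
    # Each client trains locally from the current global weights ...
    client_ws = [local_update(w, X, y) for X, y in shards]
    # ... and the server averages them (federated averaging).
    w = np.mean(client_ws, axis=0)

print(w)  # approaches true_w after a few rounds
```

The trade-off the paper examines shows up even here: communication happens only once per round rather than once per gradient step, which is what makes geographically distributed training affordable, at the cost of local models drifting apart between averaging steps.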
Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).