Radical Reads

How Muon Space is Transforming Wildfire Detection with FireSat

By Rob Toews, Partner

In this week’s Radical Reads, we highlight the crucial role Muon Space, a Radical Ventures portfolio company, is playing in combating wildfires through innovative satellite technology. Muon is leading the development of FireSat, a groundbreaking satellite constellation built in partnership with the Environmental Defense Fund (EDF) and Google Research, with guidance from more than 200 members of the federal, state, and local fire community. Designed to detect wildfires as small as 5×5 meters, FireSat aims to revolutionize early fire detection by capturing high-resolution data every 20 minutes, far surpassing the revisit rates of current satellites. Set to launch in early 2025, FireSat promises to provide vital, real-time data to emergency responders, helping protect communities and ecosystems.


Learn more at muonspace.com

AI News This Week

  • Is nuclear energy the zero-carbon answer to powering AI?  (Financial Times)

    The rising energy demands of AI have led major tech companies like Google, Amazon, Meta, and Microsoft to explore nuclear energy as a zero-carbon power source for their data centers. Microsoft has struck a deal to help revive the Three Mile Island nuclear plant, and Amazon invested $650 million to place a data center near the Susquehanna Steam Electric nuclear plant. With interest in nuclear power rising, companies are also pursuing long-term power purchase agreements (PPAs) with nuclear operators.

  • California’s Gavin Newsom Vetoes Controversial AI Safety Bill  (Wall Street Journal)

    Last week, Governor Gavin Newsom vetoed California’s first-of-its-kind AI safety bill, S.B. 1047, which would have required safety testing of large AI models before their public release. The bill would also have given the state’s attorney general the right to sue companies over serious harms caused by their technologies, such as death or property damage, and would have mandated a kill switch to shut down AI systems in the event of potential biowarfare, mass casualties, or property damage. Newsom argued the bill focused too narrowly on frontier models and lacked a comprehensive risk assessment; he plans to work with experts to develop revised regulations.

  • AI offers an intriguing new way to diagnose mental-health conditions  (The Economist)

    Researchers are developing AI models that improve mental-health diagnosis by analyzing the acoustic properties of speech to detect conditions like depression and anxiety. These models focus on how words are spoken rather than on what is said. A model developed by researchers at South-Central Minzu University achieved nearly 96% accuracy in detecting depression and 95% accuracy in categorizing its severity. Another method, from Sorbonne University, uses spectrogram analysis delivered via smartphone apps to screen for a range of mental-health conditions. Radical Ventures portfolio company Ubenwa has developed an audio foundation model that interprets infants’ needs and health by analyzing biomarkers in their cry sounds. Leveraging AI to identify audio biomarkers could lead to faster, more accurate diagnoses that rely less on subjective assessments, enabling more personalized treatment.
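
    For readers curious what this kind of acoustic analysis involves, the sketch below is a minimal, illustrative example rather than any research group’s actual pipeline; it assumes the open-source librosa library and a hypothetical local recording, and simply converts a voice clip into the log-mel spectrogram features a diagnostic classifier might consume.

        # Illustrative sketch only: turn a short speech recording into log-mel
        # spectrogram features, the kind of acoustic representation such models analyze.
        # Assumes `pip install librosa` and a hypothetical file "voice_sample.wav".
        import librosa
        import numpy as np

        y, sr = librosa.load("voice_sample.wav", sr=16000)           # load audio, resampled to 16 kHz
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)  # mel-scaled power spectrogram
        log_mel = librosa.power_to_db(mel, ref=np.max)               # convert power values to decibels
        print(log_mel.shape)  # (64 mel bands, time frames): features a downstream classifier could ingest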

  • Rethinking ‘checks and balances’ for the AI age  (The New York Times)

    Stanford University’s Digitalist Papers project, led by Erik Brynjolfsson and Condoleezza Rice, explores the impact of artificial intelligence (AI) on democracy. The initiative features 12 essays addressing AI’s potential to both threaten and enhance democratic governance. The project emphasizes the need for greater citizen involvement in AI policy, citing Taiwan’s use of digitally enabled citizen assemblies as a model. Some contributors warn that over-regulation of AI could lead to censorship, advocating instead for regulation only after clear abuses occur. In this spirit, Radical Ventures has launched the open-source Responsible AI for Startups (RAIS) framework for evaluating early-stage startups, with the aim of fostering the safe and responsible application of AI.

  • Research: Reliability in larger language models  (Valencian Research Institute for Artificial Intelligence/University of Cambridge)

    Researchers have analyzed the relationship between the reliability of large language models (LLMs) and their increasing size and instructability. As models scale up and incorporate human feedback, they tend to exhibit more unpredictable errors. The study highlights three core findings: difficulty concordance (scaled-up models still make errors on tasks humans find simple), task avoidance (larger models increasingly give plausible but wrong answers rather than declining to respond), and prompt stability (sensitivity to prompt phrasing has improved, but variability remains). These findings suggest a need to rethink how LLMs are developed to ensure reliable performance, particularly in critical, high-stakes applications.

Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).