Curated deep tech and AI content that humans at Radical are reading and thinking about. Sign up here to have Radical Reads delivered directly to your inbox every week.
Editor’s Note: As we began last week, we will continue to use this platform to share, without commentary, some articles focused on data and its use to illustrate and illuminate racial injustice. Why? Because you cannot fix problems you cannot see or understand, and we want to work on solutions to create a better future.
1) Former US Chief Data Scientist: “Rage, fear, and confusion” (DJ Patil on LinkedIn)
“There was a young data scientist that pulled me aside before a meeting on how data and technology might be able to help on community policing. He showed me data that he had cobbled together from multiple systems where he showed the rate over time of how police stopped white (sic) vs blacks and it was about equal. And then he showed me the search rates after they were stopped and they were staggering. It showed how much more likely you were to be searched if you were black. As we dug in, he showed how you could tell which officers contributed to doing the most searches and matching it to those that had the most complaints. When I showed his results to the other police chiefs they all asked how they could get that kind of data so get those officers off the streets. The chiefs didn’t have the basic data systems to know which officers were a problem.
What gets measured gets fixed. Yet, we won’t track even the basics of policing.”
2) AI & Material Science: AI identifies change in microstructure in ageing materials (Phys.org)
“Lawrence Livermore National Laboratory (LLNL) scientists have taken a step forward in the design of future materials with improved performance by analyzing its microstructure using AI.
Technological progress in materials science applications spanning electronic, biomedical, alternate energy, electrolyte, catalyst design and beyond is often hindered by a lack of understanding of complex relationships between the underlying material microstructure and device performance. But AI-driven data analytics provide opportunities that can accelerate materials design and optimization by elucidating processing-performance correlations in a mathematically tractable way.”
Radical Commentary: Taking X-ray CT scans of the 3-D microstructure of a novel material yields very detailed material characterization, but can often only be done for small numbers of samples. Traditional deep learning approaches, however, are hard to apply when few samples of an experimental material are available for imaging.
To solve this problem, LLNL materials scientists used novel methods in scalar-field topology to extract useful features from the images that allowed them to identify microstructures, track their evolution as a material ages, and then combine these to create a model of the performance of a material over time.
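The LLNL work relies on far richer scalar-field topology (persistence-based descriptors) than anything we can reproduce here, but the general idea of summarizing a microstructure image as a small set of numerical features can be sketched loosely. In this hypothetical example, the "features" are simply the connected components of a thresholded 2-D scalar field and their sizes:

```python
import numpy as np
from scipy import ndimage

def microstructure_features(image, threshold=0.5):
    """Extract crude topological descriptors from a 2-D scalar field.

    This is a stand-in for the scalar-field-topology features used by
    LLNL: we only count connected components of the superlevel set
    (pixels above `threshold`) and measure their sizes.
    """
    mask = image > threshold                    # superlevel set
    labels, n_components = ndimage.label(mask)  # connected "grains"
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n_components + 1))
    return {
        "n_components": n_components,
        "mean_size": float(np.mean(sizes)) if n_components else 0.0,
        "volume_fraction": float(mask.mean()),
    }

# Toy "microstructure": two bright grains on a dark background.
img = np.zeros((32, 32))
img[4:10, 4:10] = 1.0    # 6x6 grain
img[20:28, 20:28] = 1.0  # 8x8 grain
feats = microstructure_features(img)
```

Tracking how such feature vectors drift over repeated scans is, in spirit, how one could follow a material's microstructure as it ages.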
These novel techniques subsequently allow researchers to accelerate the development of new material compositions, with exciting applications across many industries.
AI-based phenotyping is already widely used in biology, where analyzing images of organisms and cells supports many applications. Phenomic.ai, a Radical Fund I portfolio company, is using this approach to analyze complex, physiologically relevant disease models, with the goal of bringing ground-breaking drugs to the clinic for diseases like colorectal cancer.
3) Predicting the Impact of Government Policy using AI Simulations: Supercharging sustainable development (The Alan Turing Institute)
“A bold new approach to development economics, championed by the Turing and the United Nations Development Programme, could boost government-backed sustainable development all over the world.”
Radical Commentary: The Turing Institute (the UK’s national institute for data science and artificial intelligence) has teamed up with the United Nations to create a computer simulation tool called the Policy Priority Inference Kit. The tool uses agent-based modelling, which simulates the actions and interactions of autonomous agents (such as different government departments) in order to assess their effects on the system as a whole. The aim is to allow governments to better prioritize policy issues across their departments, thereby improving their Development Indicators (measured by 231 data points) and moving countries closer to achieving the 17 Sustainable Development Goals.
As an example, the software could allow governments to better understand the interlinkages between different government department policies, such as the effects of industrialization on the environment, and the effects of investing in public transport on school attendance.
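The actual Policy Priority Inference tool is far more elaborate, but the agent-based idea can be sketched with a hypothetical two-department model in which budget allocations drive indicators forward, with a negative spillover from industrial progress onto the environmental indicator (all names, rates, and spillover values here are invented for illustration):

```python
import random

def simulate(budget_shares, spillover, steps=50, seed=0):
    """Toy agent-based policy simulation (NOT the Turing/UNDP model).

    Each department i receives a fixed budget share; each indicator
    drifts upward at a rate proportional to spending, plus spillovers
    from the other indicators, with some random variation per step.
    """
    rng = random.Random(seed)
    n = len(budget_shares)
    indicators = [0.0] * n  # progress toward each goal, clamped to [0, 1]
    for _ in range(steps):
        for i in range(n):
            boost = budget_shares[i]  # direct effect of spending
            for j in range(n):        # spillover from linked indicators
                boost += spillover[j][i] * indicators[j] * 0.01
            step = boost * 0.02 * rng.uniform(0.5, 1.5)
            indicators[i] = max(0.0, min(1.0, indicators[i] + step))
    return indicators

# Two "departments": industry (heavily funded) and environment, where
# industrial progress slightly drags on the environmental indicator.
spill = [[0.0, -0.5],
         [0.0,  0.0]]
result = simulate([0.7, 0.3], spill)
```

Running many such simulations under different budget allocations, and comparing the resulting indicator trajectories, is the basic loop a policymaker-facing tool of this kind would automate.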
The models incorporate data and theory from many disciplines, including economic theory, behavioural economics, and network science. However, as with all models, they are not perfect: they are constrained by the volume and quality of the data ingested. For instance, many governments do not measure all 231 Development Indicators, while some governments simply do not share this data. Nevertheless, the tool may enable policymakers to understand the potential effects of their decisions on society before those decisions are finalized and carried out in the real world.
4) COVID accelerates Digitization of Healthcare: Telehealth — A quarter-trillion-dollar post-COVID-19 reality (McKinsey)
“COVID-19 has caused a massive acceleration in the use of telehealth. Consumer adoption has skyrocketed, from 11 percent of US consumers using telehealth in 2019 to 46 percent of consumers now using telehealth to replace cancelled healthcare visits. Providers have rapidly scaled offerings and are seeing 50 to 175 times the number of patients via telehealth than they did before. Pre-COVID-19, the total annual revenues of US telehealth players were an estimated $3 billion, with the largest vendors focused in the “virtual urgent care” segment: helping consumers get on-demand instant telehealth visits with physicians (most likely, with a physician they have no relationship with). With the acceleration of consumer and provider adoption of telehealth and extension of telehealth beyond virtual urgent care, up to $250 billion of current US healthcare spend could potentially be virtualized.
Our claims-based analysis suggests that approximately 20 percent of all emergency room visits could potentially be avoided via virtual urgent care offerings, 24 percent of healthcare office visits and outpatient volume could be delivered virtually, and an additional 9 percent “near-virtually.” Furthermore, up to 35 percent of regular home health attendant services could be virtualized, and 2 percent of all outpatient volume could be shifted to the home setting, with tech-enabled medication administration. Overall, these changes add up to $250 billion in healthcare spending in 2020 that could be shifted to virtual or near-virtual care, or 20 percent of all office, outpatient, and home health spend across Medicare, Medicaid, and commercially insured populations.”
Radical Commentary: McKinsey reinforces the massive opportunity for virtual care in a post-COVID world. Every day that passes without a viable vaccine increases the adoption of, and comfort with, digital health amongst providers and patients. McKinsey’s estimate that $250 billion, or 20% of healthcare spend, could potentially be virtualized is broken down by models of care. The article highlights actions that payers, health systems, investors, and health IT service providers can take to capitalize on the current window of opportunity. We are watching this space very carefully and believe that the rapid acceleration in virtual care delivery will set the foundation for deep tech innovations, including the widespread use of artificial intelligence, in the near future.
5) AI Eye Exams: Eye, robot: Artificial intelligence dramatically improves accuracy of classic eye exam (Science)
“Researchers have developed an online vision test — fueled by artificial intelligence (AI) — that produces much more accurate diagnoses than the sheet of capital letters we’ve been staring at since the 19th century. If perfected, the test could also help patients with eye diseases track their vision at home…
When the researchers ran their “Stanford acuity test” (StAT) through 1000 computer simulations mimicking real patients, the diagnostic reduced error by 74% compared with the Snellen test, the team reports this month in the Proceedings of the AAAI Conference on Artificial Intelligence.”
Radical Commentary: The Snellen chart (the alphabetic eye test) was developed in 1862 and exemplifies the healthcare sector’s slow pace in adopting digital solutions. Eye exams are one of many at-home solutions gaining traction during the pandemic, including telemedicine and mental health applications, as discussed in the McKinsey article referenced above. These services can be reimagined in part due to the exceptional removal of regulatory barriers in 2020.
The efficacy of at-home solutions has also improved thanks to artificial intelligence. As Wang notes, online eye tests are not new, but the Stanford acuity test significantly improves the accuracy of diagnosis prediction by using AI. We anticipate the adoption of digital technology to accelerate across all touchpoints within the healthcare sector. Across hospitals, doctors, researchers, and patients, we expect to see digital technology integration expand until it nears equilibrium with adoption rates in the consumer sector.
6) Weather Modelling: Improving Atmospheric Forecasts with Machine Learning (Eos)
“Weather forecasting has improved significantly in recent decades. Thanks to advances in monitoring and computing technology, today’s 5-day forecasts are as accurate as 1-day forecasts were in 1980. Artificial intelligence could revolutionize weather forecasts again. In a new study, Arcomano et al. present a machine learning model that forecasts weather in the same format as classic numerical weather prediction models.”
Radical Commentary: The researchers trained a global weather forecasting model and prepared 171 separate 20-day forecasts. Each forecast took about a minute to generate, and the ML model accurately predicted 3-day global weather patterns. While the model performed less accurately in the tropics, it is computationally efficient overall and well suited to rapid, short-term forecasts.
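The model by Arcomano et al. is vastly more sophisticated, but the core pattern of ML-based forecasting (learn a next-step predictor from past observations, then roll it forward) can be sketched on a toy signal. Here a linear autoregressive model is fit to a synthetic periodic "weather" series with least squares, then iterated to produce a 20-step forecast:

```python
import numpy as np

# Synthetic periodic "weather" signal with a little noise.
rng = np.random.default_rng(0)
t = np.arange(500)
series = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(500)

# Fit a linear autoregressive model: predict the next value from
# the previous `order` observations, via ordinary least squares.
order = 10
X = np.stack([series[i:i + order] for i in range(len(series) - order)])
y = series[order:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Roll the model forward 20 steps, feeding its own predictions back
# in, just as NWP-style models step the atmosphere forward in time.
window = list(series[-order:])
forecast = []
for _ in range(20):
    nxt = float(np.dot(coef, window))
    forecast.append(nxt)
    window = window[1:] + [nxt]
```

The speed advantage reported in the study comes from exactly this structure: once trained, stepping a learned model forward is far cheaper than integrating the physical equations a classic numerical weather prediction model solves.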
This is an example of how machine learning is having an impact even on problems that are commonly believed to have already been solved, such as weather modelling.
— R —