Curated deep tech and AI content that humans at Radical are reading and thinking about. Sign up here to have Radical Reads delivered directly to your inbox every week.

1) Brain + Machine + AI: The Brain Implants That Could Change Humanity (New York Times)

“For decades, we’ve communicated with computers mostly by using our fingers and our eyes, by interfacing via keyboards and screens. These tools and the bony digits we prod them with provide a natural limit to the speed of communication between human brain and machine. We can convey information only as quickly (and accurately) as we can type or click.

Voice recognition, like that used by Apple’s Siri or Amazon’s Alexa, is a step toward more seamless integration of human and machine. The next step, one that scientists around the world are pursuing, is technology that allows people to control computers — and everything connected to them, including cars, robotic arms and drones — merely by thinking.”

Radical Commentary: Like the origins of many scientific advancements, the possibility of directly augmenting and enhancing the human brain with implants or other outside stimulation has long been a fascination of science fiction. We are now reaching the early stages of this technology becoming reality. This article surveys a number of scientific experiments underway in both research labs and companies that aim to enable us to ‘read and write’ directly to the brain, including brain implants of the kind Elon Musk’s Neuralink discussed at its public event this week. Other efforts at brain interfaces include non-invasive machines, infrared light stimulation, implants of “neural dust” throughout the nervous system, and “‘neurograins’ — tiny, implantable silicon sensors no larger than a handful of neurons. They’re too small to have batteries, so they’re powered by microwaves beamed in from outside the skull.”

What most of these approaches have in common is an attempt both to understand thoughts and to augment or alter them from outside the brain. Some efforts have already enabled people with various forms of paralysis to begin using their paralyzed limbs. Others aim to help people with severe addictions, depression, ‘locked-in’ diseases such as ALS, and other medical maladies that have been difficult to overcome with current treatments.

Artificial intelligence plays a crucial role in these efforts by translating thoughts — manifested as brain activity patterns — into images or language we can understand. As described in the article: “This progress isn’t solely driven by advances in brain-sensing technology — by the physical meeting point of flesh and machine. The AI matters as much, if not more. Trying to understand the brain from outside the skull is like trying to make sense of a conversation taking place two rooms away. The signal is often messy, hard to decipher. So it’s the same types of algorithms that now allow speech-recognition software to do a decent job of understanding spoken speech — including individual idiosyncrasies of pronunciation and regional accents — that may now enable brain-reading technology.”
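
At its core, the decoding problem the article describes is supervised pattern recognition: learn a mapping from noisy multi-channel recordings to intended commands. A minimal sketch on synthetic data (all signals, channel counts and noise levels invented; a real brain-computer interface pipeline is far more involved):

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_channels = 300, 100, 64

# Two hypothetical "intended commands", each a faint pattern across channels.
templates = rng.normal(size=(2, n_channels))

def record(n):
    """Simulate noisy recordings: command pattern plus strong Gaussian noise."""
    y = rng.integers(0, 2, size=n)
    x = templates[y] + rng.normal(scale=3.0, size=(n, n_channels))
    return x, y

X_train, y_train = record(n_train)
X_test, y_test = record(n_test)

# Nearest-centroid decoder: average the training recordings per command,
# then assign each test recording to the closer centroid.
centroids = np.stack([X_train[y_train == k].mean(axis=0) for k in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y_test).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

Even with per-channel noise several times stronger than the underlying pattern, aggregating evidence across many channels lets this simple decoder recover the "command" well above the 50% chance level, the same pooling-over-noisy-signals principle behind modern speech-recognition and brain-reading models.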

The extraordinary opportunity to help people with medical needs through brain/machine/AI interfaces is balanced against the risks of technology that would augment brain activity and, in effect, evolve our species beyond its organic form. While such developments are not on the short-term horizon, it will be important to confront concerns about third parties implanting thoughts for commercial or more nefarious purposes. We must also consider who should have access to technologies that presumably will not come cheaply: should financial means determine who has access to brain augmentation?

The overarching question of whether the benefits of developing such technologies outweigh the risks is set against a deepening technology cold war between the US and China, and the fear that if the US/West does not develop this technology first, China will. The consequence may be that we have entered what the article describes as a “brain arms race”.

2) US/China AI Technology Cold War: China’s New AI Export Curbs Threaten TikTok’s US Sale (Nikkei Asian Review)

“The Ministry of Commerce released an updated export control list including “personalized content recommendations based on data analysis” and several other technologies under the category of dual use, implying they could potentially have both civilian and military use and that their sale overseas will be restricted.”

Radical Commentary: This is yet more government action deepening a technology cold war between the US and China, one focused on artificial intelligence and other critical technologies of the future, including quantum computing. Besides export restrictions, other actions taken include limits on who can invest in sensitive US technologies without a CFIUS review (investments by Canadian, UK and Australian funds are expressly exempt from review), restrictions on visas for researchers from certain countries, and bans on technology from specific companies (e.g. Huawei).

Leaving aside the political motivations of both countries, it is important to understand that, as this rift deepens, we seem to be heading for a full-stack technology bifurcation: a China-led technology stack and a separate US-led technology stack. This would mean separate and likely non-interoperable chips, hardware and software, and effectively two separate internets. Not only would this reduce the possibility of direct collaboration and commerce between those in the US and China; it would also force companies and countries around the world to choose which side to align with and buy products from, raising the prospect of being unable to transact or collaborate easily, or in some cases at all, with those using the other side’s technology. These are very worrisome developments that could have drastic, long-lasting and unexpected consequences, including a global political and/or commercial realignment.

3) AI & Quantum Research Funding: NSF advances artificial intelligence research with new nationwide institutes (NSF)

“With an investment of over $100 million over the next five years, NSF’s Artificial Intelligence Institutes represent the nation’s most significant federal investment in AI research and workforce development to date. The $20 million investment in each of five NSF AI institutes is just the beginning, with more institute announcements anticipated in the coming years.”

Radical Commentary: This week the US National Science Foundation (NSF) established funding of $1 billion over five years for 12 new artificial intelligence (AI) and quantum information science (QIS) research institutes at universities and existing research centres. While the commitment is admirable, it is long overdue. And with most of the money going toward quantum research, the resulting commitment of $4 million per year for each university-based AI institute over five years is surprisingly small.
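
The per-institute arithmetic behind that figure, using the numbers in the NSF release ($20 million per AI institute, spread over five years):

```python
# Per-institute annual funding implied by the NSF announcement:
# $20M per AI institute, disbursed over five years.
total_per_institute = 20_000_000
years = 5
per_year = total_per_institute // years
print(f"${per_year:,} per institute per year")
```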

By way of comparison, Canada’s Pan-Canadian AI Strategy (co-written by Radical’s founders Tomi Poutanen and Jordan Jacobs along with others from Canada’s three national AI institutes, and adopted by the federal government 3.5 years ago as the first such national AI policy in the world) provides more funding per year to each of the three AI institutes: the Vector Institute for AI in Toronto, MILA in Montreal, and the Alberta Machine Intelligence Institute (AMII) in Edmonton.

Meanwhile, China is said to be investing $100 billion in AI over the same period, with the stated intention of using AI as the lever to become the world’s economic superpower. Clearly, Western governments can and should be doing much more to support these critical technologies of the future.

4) AI and Digital Health: Accenture Healthtech Vision 2020 (Accenture)

“Health care organizations have plugged AI and other tech tools into existing workflows, focusing on automation and execution. Consider how AI systems are powering chatbots that help health providers screen and triage patients, or how they are enabling the rapid reconfiguration of supply chains impacted by COVID-19. AI has helped to eliminate rote work for clinicians. But simply using AI to make an organization run faster and cheaper is limiting its impact. Instead, AI can become an agent of change, transforming not just how organizations do work — but also what they actually do.

AI offers a unique advantage that has helped startups disrupt decades-old incumbents: the technology doesn’t approach a problem based on years of experience or inherent human biases. It hasn’t yet learned what not to try. This blank slate offers fertile ground for transformation in health care.”

Radical Commentary: This year’s Accenture Digital Health Technology Vision report highlights AI as one of five key trends driving transformation in the healthcare sector. The report discusses how human-AI collaboration is now finding its place across the healthcare value chain, spanning prevention, treatment, monitoring and diagnosis of health conditions. It also highlights the role of AI in improving efficiency and reducing costs in back-office functions. Critically, it flags the need for appropriate training for practitioners, who are increasingly finding themselves interacting with AI-enabled technologies.

As we have noted in our healthcare thesis, COVID-19 has forced healthcare providers to accelerate digital adoption. Trends that we expected to play out over three to five years are now happening in real time. This acceleration of digitization and the resulting data generated from these applications will set the foundation for a second wave of AI healthcare solutions.

5) AI for Astronomy: 50 New Planets Confirmed in Machine Learning First — AI Distinguishes Between Real and “Fake” Planets (University of Warwick)

“Researchers from Warwick’s Departments of Physics and Computer Science, as well as The Alan Turing Institute, built a machine learning based algorithm that can separate out real planets from fake ones in the large samples of thousands of candidates found by telescope missions such as NASA’s Kepler and TESS.”

Radical Commentary: There are more than 4,000 confirmed planets outside our Solar System, known as exoplanets. Over 95% of them have been discovered using one of the following indirect detection techniques:

  • Transiting: using telescopic data to capture signs of an exoplanet passing between the telescope and its host star; and/or
  • Measuring radial velocity: a star with at least one planet moves within its own small orbit in response to the planet’s gravity. This technique uses variations in the speed at which the star moves towards or away from an observer to confirm the presence of a planet.
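
The transit technique in the first bullet reduces to spotting small periodic dips in a star’s measured brightness. A toy sketch on a synthetic light curve (transit depth, period and noise level all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic light curve: 1000 brightness measurements with small noise.
t = np.arange(1000)
flux = 1.0 + rng.normal(scale=0.001, size=t.size)

# Inject transits: brightness drops ~1% for 5 samples every 200 samples.
in_transit = (t % 200) < 5
flux[in_transit] -= 0.01

# Detect: flag samples well below the median brightness.
dips = flux < np.median(flux) - 0.005
n_dip_points = int(dips.sum())

# Recover the orbital period from the spacing between transit starts.
dip_times = t[dips]
starts = dip_times[np.concatenate(([True], np.diff(dip_times) > 1))]
period_est = int(np.median(np.diff(starts)))
print(f"{n_dip_points} in-transit samples, period = {period_est}")
```

Real pipelines such as Kepler’s must additionally separate dips like these from instrument artifacts and eclipsing binaries, which is where the false positives come from.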

Transiting techniques are prone to high false-positive rates, caused by a number of factors such as camera errors. The researchers trained a machine learning algorithm on labelled data sorted into two pools: confirmed planets and false positives. The algorithm was then applied to a dataset of still-unconfirmed planetary candidates, confirming the existence of 50 planets, each with a less than 1% chance of being a false positive.

This is the first time machine learning has been used to validate the existence of a planet using a probabilistic framework, and could allow astronomers to validate planets faster and more accurately in the future.
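
Framed this way, probabilistic validation is calibrated binary classification: estimate the probability that a candidate is a false positive and accept only those below the 1% threshold. A toy sketch with invented two-dimensional features and a simple Gaussian class-conditional model (the Warwick team’s actual features and model differ):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Synthetic stand-ins for candidate features (e.g. transit depth and
# shape statistics); illustrative only, not the actual feature set.
def sample(label, size):
    center = np.array([1.0, 1.0]) if label else np.array([-1.0, -1.0])
    return center + rng.normal(scale=0.8, size=(size, 2))

X = np.vstack([sample(1, n), sample(0, n)])    # candidate features
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = planet, 0 = false positive

# Fit a diagonal Gaussian to each labelled pool.
stats = {k: (X[y == k].mean(axis=0), X[y == k].var(axis=0)) for k in (0.0, 1.0)}

def planet_posterior(x):
    """P(real planet | features), assuming equal class priors."""
    lik = {
        k: np.exp(-0.5 * ((x - mu) ** 2 / var).sum()) / np.sqrt(var.prod())
        for k, (mu, var) in stats.items()
    }
    return lik[1.0] / (lik[0.0] + lik[1.0])

# "Validate" an unconfirmed candidate only if its false-positive
# probability falls below 1%, mirroring the threshold in the article.
candidate = np.array([1.8, 1.6])
p_planet = planet_posterior(candidate)
validated = (1 - p_planet) < 0.01
print(f"P(planet) = {p_planet:.4f}, validated: {validated}")
```

The key design point is that the model outputs a probability rather than a hard yes/no label, so astronomers can set the acceptance threshold to match how much false-positive risk a survey can tolerate.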

***
Editor’s Note: We will continue to use this platform to share, without commentary, articles focused on data and its use to illustrate and illuminate racial injustice. Because you cannot fix problems you cannot see or understand.

6) How Decades of Racist Housing Policy Left Neighborhoods Sweltering (New York Times)

“In the 1930s, federal officials redlined these neighborhoods in Richmond, Va., marking them as risky investments because residents were Black. Today, they are some of the hottest parts of town in the summer, with few trees and an abundance of heat-trapping pavement. White neighborhoods that weren’t redlined tend to be much cooler today — a pattern that repeats nationwide.

To understand why many cities have such large heat disparities, researchers are looking closer at historical practices like redlining.

In the 1930s, the federal government created maps of hundreds of cities, rating the riskiness of different neighborhoods for real estate investment by grading them “best,” “still desirable,” “declining” or “hazardous.” Race played a defining role: Black and immigrant neighborhoods were typically rated “hazardous” and outlined in red, denoting a perilous place to lend money. For decades, people in redlined areas were denied access to federally backed mortgages and other credit, fueling a cycle of disinvestment.”

— R —