Radical Reads

By Radical Editorial

Curated deep tech and AI content that humans at Radical are reading and thinking about. Sign up here to have Radical Reads delivered directly to your inbox every week.

AI News This Week

  • Technology Bifurcation Between China and the West: Tencent to invest US$70 billion in new digital infrastructure, backing Beijing’s economic stimulus efforts  (South China Morning Post)

    “Chinese internet giant Tencent Holdings plans to invest 500 billion yuan (US$70 billion) over the next five years in new digital infrastructure, a major hi-tech initiative that would bolster Beijing’s efforts to drive economic recovery in the post-coronavirus era.

    That massive investment will focus on fields that include cloud computing, artificial intelligence (AI), blockchain technology and Internet of Things, as well as the infrastructure to support them like advanced servers, supercomputers, data centres and 5G mobile networks…”

    Radical Commentary: Following last week’s announcement that China will invest US$1.4 trillion to overtake the US in tech, Tencent is committing US$70 billion to its internal digital infrastructure, including a new network of large-scale data centres with a million servers deployed at each site.

    Why does this matter?

    First, this supports our view that technology is bifurcating into two technical stacks, one built in China and one built in the West (led by the US). This bifurcation will have long-term, wide-sweeping technological, economic and geopolitical ramifications. Second, the scale of investment is significant. By comparison, Google committed $10 billion to new offices and data centers for 2020.

  • AI Regulation: U.S. Will Join G-7 AI Pact, Citing Threat From China  (Bloomberg)

    “The U.S. plans to join an international organization that will advise companies and governments on the responsible development of artificial intelligence, becoming the last of the Group of Seven countries to sign on.

    The group, called the Global Partnership on AI, will study and provide recommendations to encourage the creation of AI technologies that respect privacy and civil liberties, said Michael Kratsios, the chief technology officer of the U.S.”

    Radical Commentary: This is an important development because the US was noticeably absent from this effort to regulate and ensure the ethical uses of AI. We view an international regulatory framework for AI as important and inevitable, especially in the context of unequal development and deployment of AI technology across countries and regions.

  • Creating AI: Is the Brain a Useful Model for Artificial Intelligence?  (Wired)

    “Even if neuroscientists can re-create intelligence by faithfully simulating every molecule in the brain, they won’t have found the underlying principles of cognition.”

    Radical Commentary: This article asks the fundamental question of whether we need to replicate the human brain to achieve artificial intelligence.

    Artificial neural networks (NNs) are inspired by the biological neural networks that constitute the human brain. For example, the human brain contains brain cells (neurons) connected by synapses, while an NN imitates this with nodes (artificial neurons) and weighted connections (the toy sketch at the end of this item illustrates the analogy).

    The author, a researcher at DeepMind, suggests that NNs are actually “un-brainlike” due to their use of mathematical tricks that the human brain simply cannot perform. The article concludes that AI models may not need to mimic the brain at all — just as airplanes don’t mimic birds, yet can still fly — and that understanding intelligence may come down to learning the principles of biology beyond the brain.

    Since deep learning is still in its infancy, it is too early to say whether it can transition from performing narrow tasks to achieving Artificial General Intelligence (AGI). Further research across biology, neuroscience, software engineering and hardware engineering will bring us closer to an answer to whether there is a universal theory of intelligence.
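
    As a purely illustrative aside, the node-and-connection analogy above can be made concrete in a few lines of Python: the “neurons” are just numbers held in arrays, and the “synapses” are weight matrices connecting them. Layer sizes and weights here are arbitrary assumptions, not a model of any real system.

    ```python
    # Toy two-layer neural network: arrays of numbers stand in for neurons,
    # and weight matrices stand in for the synapses connecting them.
    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_hidden, n_outputs = 4, 8, 2   # arbitrary, illustrative sizes

    # Weighted connections (the rough analogue of synapses).
    W1 = rng.normal(size=(n_inputs, n_hidden))
    W2 = rng.normal(size=(n_hidden, n_outputs))

    def forward(x):
        """Propagate a signal through the network, layer by layer."""
        hidden = np.maximum(0, x @ W1)  # ReLU: a mathematical trick with no direct biological counterpart
        return hidden @ W2              # activations of the output "neurons"

    print(forward(rng.normal(size=n_inputs)))
    ```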

  • Computer Vision & Healthcare: Computer vision’s potential to improve health care  (The Lancet)

    “In the language of AI scientists, blindness to clinically important physical actions in hospitals, clinics, and homes represents a major gap in the ground truth required to characterize and improve clinically effective health care. To fill the gap, AI research teams are beginning to use video and other types of cameras to continuously collect data and train interpretative computer algorithms — known as computer vision — to discern clinician actions such as hand hygiene. The accuracy of AI in detecting intended clinical actions such as the use of alcohol gel dispensers by intensive care unit (ICU) clinicians at the doorways of patients’ rooms exceeds 90% in the few hospitals where research teams are developing this computer vision capability. Results from research to quantify the clinical benefit of algorithm-based clinician prompts are likely to become available over the next 2–4 years.

    The eventual adoption of AI-computer vision assistance could help reduce medical error. Despite 20 years of hospital focus on improving patient safety through management tools such as pre-procedure checklists and establishing a safety culture, harm in hospitals at moments of extreme patient vulnerability to human error remains severe and costly.”

    Radical Commentary: Dr. Arnold Milstein, Professor of Medicine and Director of the Clinical Excellence Research Center at Stanford University, discusses the existing gaps in AI for medical applications that can be filled by AI computer-vision assistance.

    The major issue is the “blindness to clinically important physical actions” at places of care. An example is existing EHRs, where much of the data being mined is self-recorded by clinicians and lacks a patient-maintained counterpart. He highlights the cost of errors at moments of extreme patient vulnerability and cites a 2010 US Congressional report estimating that preventable hospital harm affected 27% of Medicare patients, impacting both human life and Medicare spending.

    To solve this, he proposes deploying sensors that continuously monitor patients to detect trends and raise alerts. An early application being tested is detecting the use of alcohol gel dispensers by ICU clinicians at the doorways of patients’ rooms. Dr. Milstein acknowledges that such technology can cause alert fatigue and raise concerns about privacy. However, he also recognizes major tailwinds that could lead to wide-scale adoption, including the use of edge computing on hardware that eliminates the need to store or transmit data.
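
    A purely hypothetical sketch of such an event-detection loop is below. The detect_gel_use function is a stand-in for a trained vision model (not a real API), the frame stream is simulated so the snippet runs on its own, and the cooldown window is an assumed value meant to limit the alert fatigue mentioned above; frames are processed on-device in the spirit of edge computing, so no video is stored or transmitted.

    ```python
    # Hypothetical event-detection loop for doorway-camera frames.
    import random

    random.seed(7)

    ALERT_COOLDOWN_S = 30  # assumed debounce window (seconds) to limit alert fatigue

    def detect_gel_use(frame) -> bool:
        """Stand-in for a trained vision model scoring a single frame."""
        return random.random() > 0.95  # simulated, infrequent detections

    def monitor(frames):
        """Scan a frame stream and record at most one event per cooldown window."""
        last_event = -ALERT_COOLDOWN_S
        events = []
        for t, frame in enumerate(frames):  # treat t as seconds into the stream
            if detect_gel_use(frame) and t - last_event >= ALERT_COOLDOWN_S:
                events.append(t)            # in practice, this could trigger a clinician prompt
                last_event = t
        return events

    # Frames stay on the device ("edge computing"); nothing is stored or transmitted.
    print(monitor(frames=[None] * 300))
    ```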

  • AI & Cybersecurity: Half of employees admit they are cutting corners when working from home  (ZDNet)

    “Half of employees are cutting corners with regards to cybersecurity while working from home — and could be putting their organisation at risk of cyberattacks or data breaches as a result.

    The coronavirus pandemic has forced both employers and employees to quickly adjust to remote working — and, often without the watchful eyes of IT and information security teams, workers are taking more risks online and with data than they would at the office.”

    Radical Commentary: Facing the pressure to digitalize, many companies rapidly implemented IT solutions to meet remote work and consumer needs, which has increased cyber risk exposure.

    Digital adoption levels predicted (pre-crisis) to take five years have been reached within the last eight weeks. It is not clear that this trend will fade when economies re-open, as large tech companies such as Shopify and Facebook are developing permanent remote-work arrangements that will require extended security measures.

    All of this comes amid a growing consensus that the increased demand for cybersecurity talent, already in short supply, will outlast COVID-19 social distancing measures. The European Union, for example, is prioritizing cybersecurity as part of its digital transition strategy.

    We believe there is a large role for machine learning tools to help automate the detection, triage, and mitigation of cyber risks.
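
    As a minimal sketch of that idea, assuming synthetic remote-login telemetry (login hour, megabytes transferred, failed attempts) rather than any real product or dataset, an off-the-shelf unsupervised anomaly detector can flag unusual sessions for human triage:

    ```python
    # Unsupervised anomaly detection on synthetic remote-work session data.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Mostly "normal" sessions: office-hours logins, modest transfers, few failed attempts.
    normal = np.column_stack([
        rng.normal(11, 2, 500),   # login hour
        rng.normal(50, 15, 500),  # MB transferred
        rng.poisson(0.2, 500),    # failed login attempts
    ])
    # A few risky-looking sessions: late-night logins, large transfers, many failures.
    risky = np.column_stack([
        rng.normal(3, 1, 5),
        rng.normal(900, 100, 5),
        rng.poisson(6, 5),
    ])
    sessions = np.vstack([normal, risky])

    # Flag the most unusual ~1% of sessions for human triage.
    model = IsolationForest(contamination=0.01, random_state=0).fit(sessions)
    flags = model.predict(sessions)  # -1 = anomalous, 1 = normal
    print(f"Flagged {(flags == -1).sum()} of {len(sessions)} sessions for triage.")
    ```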

Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).