Radical Reads

By Radical Editorial

Curated deep tech and AI content that humans at Radical are reading and thinking about. Sign up here to have Radical Reads delivered directly to your inbox every week.

AI News This Week

  • AI and Quantum Computing: Xanadu launches quantum cloud platform, plans to double qubits every 6 months  (VentureBeat)

    This week Xanadu, one of Radical’s portfolio companies, announced the release of a publicly available photonic quantum cloud platform. The platform will allow businesses to build solutions to problems in finance, chemistry, machine learning, and analytics. Most quantum computers in development require enormous chambers in which specialized chips are cooled to just above absolute zero. Xanadu’s computers use photons, which means they can operate at room temperature and integrate into existing fiber optic-based telecommunications infrastructure.

  • AI in Transportation: Traffic prediction with advanced Graph Neural Networks  (DeepMind)

    “Researchers at DeepMind have partnered with the Google Maps team to improve the accuracy of real time ETAs by up to 50% in places like Berlin, Jakarta, São Paulo, Sydney, Tokyo, and Washington D.C. by using advanced machine learning techniques”

    Radical Commentary: Google Maps helps drivers navigate one billion kilometers of road every day, which is nearly seven times the distance between the Sun and Earth. The vast data collected over the past 13 years has allowed the company to continuously test and improve its algorithms to better predict Estimated Time of Arrival (ETA), which is now consistently accurate for over 97% of trips.

    DeepMind has created a machine learning model to further improve the accuracy of predicted ETAs. DeepMind reports accuracy improvements of 26% in Toronto, 21% in New York City, and 16% in London, UK. The improvement comes down to understanding the connectivity structure of real-world road networks and, in particular, understanding traffic flows and behaviours along adjacent and intersecting roads, rather than just the traffic in front of or behind you. The machine learning model uses Graph Neural Networks, a branch of deep learning that models a set of objects (in this case, road segments) and their relationships with one another. Applied use cases of Graph Neural Networks include predicting the side effects of pharmaceutical medications, improving computer vision and, in this case, improving ETA accuracy and the ability to reliably reroute you in real time.
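
    To make the Graph Neural Network idea concrete, here is a minimal sketch of a single message-passing step over a toy road-segment graph. All names, numbers, and the blending rule are illustrative assumptions for exposition, not DeepMind's actual model, which learns its aggregation functions from data.

```python
# Minimal sketch of one message-passing step in a Graph Neural Network.
# Each node is a road segment with a feature (current travel time, in minutes);
# edges connect adjacent or intersecting segments. A node's updated estimate
# blends its own value with the mean of its neighbours' values, so congestion
# on one segment influences the estimates for connected segments.

def message_passing_step(features, edges, self_weight=0.7):
    """One round of neighbour aggregation over a road-segment graph."""
    # Build an adjacency list from undirected edges.
    neighbours = {node: [] for node in features}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)

    updated = {}
    for node, value in features.items():
        if neighbours[node]:
            neighbour_mean = sum(features[n] for n in neighbours[node]) / len(neighbours[node])
        else:
            neighbour_mean = value
        # Blend the node's own signal with the aggregated neighbour signal.
        updated[node] = self_weight * value + (1 - self_weight) * neighbour_mean
    return updated

# Three connected road segments: congestion on "B" nudges estimates for "A" and "C".
travel_times = {"A": 4.0, "B": 12.0, "C": 5.0}
edges = [("A", "B"), ("B", "C")]
print(message_passing_step(travel_times, edges))
```

    In a real GNN the fixed averaging above is replaced by learned neural functions, and several such steps are stacked so information propagates across the road network.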

  • AI in Financial Services: Visa Unveils More Powerful AI Tool That Approves or Denies Card Transactions  (Wall Street Journal)

    “The new deep-learning model is more advanced because it is trained to sift through billions of data points of cardholder activity to define correlations on its own.

    For example, it could automatically learn that a particular cardholder transaction was normal and should be approved based on historical data about that person, such as the location of the merchant in relation to the cardholder and the time of day they are shopping…

    The new service reflects the growing use of AI in banking. Banks are expected to spend $7.1 billion on AI in 2020, growing to $14.5 billion by 2024, on initiatives such as fraud analysis and investigation.”

    Radical Commentary: Visa’s new AI-based fraud detection system is designed as a backup that supplements its bank customers’ own fraud detection efforts in the event their systems fail.

    This is an example of how some adoptions of deep learning-powered services will happen in cycles. When a new method is integrated into a legacy system, it is first used as a redundant system, with both solutions operating in parallel. This dual track allows engineers and product managers to work out any kinks, fine-tune the product, and develop trust with the customer. In time, the novel redundant system displaces the legacy one and becomes the operating standard.
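
    This dual-track pattern is often called a "shadow mode" rollout. A minimal sketch of the idea follows; both scoring functions are hypothetical stand-ins, not Visa's or any bank's actual systems.

```python
# Sketch of a "shadow mode" rollout: the new model runs in parallel with the
# legacy system, but only the legacy decision is acted on. Disagreements are
# logged so engineers can tune the new model before it takes over.

def legacy_score(txn):
    # Placeholder rule-based check (hypothetical).
    return "deny" if txn["amount"] > 1000 else "approve"

def candidate_score(txn):
    # Placeholder for the new model under evaluation (hypothetical).
    return "deny" if txn["amount"] > 1000 and txn["foreign"] else "approve"

disagreements = []

def decide(txn):
    primary = candidate = None
    primary = legacy_score(txn)      # this decision is served to the customer
    candidate = candidate_score(txn) # this decision is only recorded
    if primary != candidate:
        disagreements.append((txn, primary, candidate))
    return primary

decide({"amount": 1500, "foreign": False})  # legacy denies, candidate approves
decide({"amount": 200, "foreign": True})    # both approve
print(len(disagreements))  # 1
```

    Once the disagreement log shows the candidate performing acceptably, the roles can be swapped and the legacy system retired.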

  • AI Understanding of Causation: How special relativity can help AI prediction  (MIT Technology Review)

    “Instead of getting an AI to learn to predict a specific sequence of future frames by watching millions of video clips, they allowed it to generate a whole range of frames that were roughly similar to the preceding ones and then pick those that were most likely to come next. The AI can make guesses about the future without having to learn anything about the progression of time, says Vlontzos.”

    Radical Commentary: Causal reasoning remains a challenge for machine learning models. Although models excel at recognizing patterns, they do not explain why one event should follow another. A model that predicts an effect without understanding what causes it, relying only on patterns witnessed in historical data, can fall into faulty reasoning. While AI models can very effectively determine correlation, they have historically been less effective at pinpointing causation. In short, machines have had difficulty answering ‘why’.

    A research team at Imperial College London is using Einstein’s theory of special relativity to constrain the options the AI uses to choose a future prediction. Using an algorithm inspired by a mathematical description of the boundaries of cause and effect in spacetime (i.e. light cones), the team was able to generate a boundary around instances that could be causally related to the sample data. The model is currently limited as the boundary needs to be manually determined. That said, increasing prediction accuracy based on identifying the cause for an event, rather than relying on correlation, could further the development of technologies for autonomous vehicles, robotics, and drug development.
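
    As a rough illustration of the light-cone idea: a candidate future state can be discarded if reaching it from the current state would require change faster than some bound, just as events outside a light cone cannot be causally connected. The sketch below uses made-up one-dimensional numbers and a hand-set bound; the actual research applies this constraint to learned video representations, and (as noted above) the boundary currently has to be determined manually.

```python
# Sketch of a light-cone-style causal filter: a candidate future state is kept
# only if it lies within the "cone" of states reachable from the current state
# given a maximum rate of change over the elapsed time.

def causally_reachable(current, candidate, dt, max_rate):
    """True if the candidate could follow the current state within dt at a bounded rate."""
    return abs(candidate - current) <= max_rate * dt

def filter_candidates(current, candidates, dt, max_rate):
    """Keep only candidate future states inside the reachability cone."""
    return [c for c in candidates if causally_reachable(current, c, dt, max_rate)]

# Current state 0.0; one time step; rate of change bounded by 2.0 units/step.
print(filter_candidates(0.0, [-5.0, -1.5, 0.5, 1.9, 4.0], dt=1.0, max_rate=2.0))
# Keeps only the states inside the cone: [-1.5, 0.5, 1.9]
```

    The surviving candidates can then be ranked by likelihood, as described in the quoted passage, so the model never wastes probability mass on futures that could not causally follow the present.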

  • AI and Policing: Algorithmic Policing in Canada Explained  (The Citizen Lab)

    “Citizen Lab and the International Human Rights Program at the University of Toronto’s Faculty of Law released a report that investigated the use and human rights implications of algorithmic policing practices in Canada. This document provides a summary of the research findings and questions and answers from the research team.

    We examined two broad categories of algorithmic policing technology: predictive policing technology (including both location-focused and person-focused algorithmic policing), and algorithmic surveillance technology.”

    “Adoption by Canadian law enforcement agencies is more widespread when it comes to algorithmic surveillance technologies, compared to adoption of predictive policing technologies.”

    Radical Commentary: The report distinguishes between predictive policing technologies (which use data processing to draw inferences about potential criminal activity) and algorithmic surveillance technologies (which automate data collection and processing but do not inherently include any prediction capabilities; automated license plate readers are one example). When considering predictive policing technologies, the Citizen Lab looks at two types: location-focused and person-focused prediction technologies. The Lab found that Canada has relatively low levels of adoption of predictive policing technologies compared to jurisdictions such as the United States and the United Kingdom.

    Canada has an opportunity to take a global leadership role in addressing the ethical, policy, and legal implications of using AI in policing and, in particular, predictive activities that are intended to be applied preventatively. We expect this to be a contentious and comprehensive area of study and policy creation for ethicists and legal scholars, and a focus of legal challenges in the absence of specific laws. Solving these issues will require new policy frameworks, in addition to addressing information asymmetries between policed communities, law enforcement, and policing technology vendors.

    Editor’s Note: We will continue to use this platform to share, without commentary, articles focused on data and its use to illustrate and illuminate racial injustice, because you cannot fix problems you cannot see or understand.

  • Racial Economic Inequality Amid the COVID-19 Crisis  (Brookings Institution)

    “Across the economy and society, the distributional consequences of COVID-19 are uneven: the pandemic and its broader economic and health consequences are disproportionately impacting Black Americans.

    The outsized challenges that Black Americans are facing are a reflection of the generally diminished economic position and health status that they faced prior to this crisis. Several pre–COVID-19 economic conditions — including lower levels of income and wealth, higher unemployment, and greater levels of food and housing insecurity — leave Black families with fewer buffers to absorb economic shocks and contribute to Black households’ vulnerability to the COVID-19 economic crisis.”

Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).