Radical Reads

What Turing Taught Us

By Parasvil Patel, Partner

Image Source: Universidad Miguel Hernández de Elche

Originally coined as the “imitation game,” the Turing Test was introduced by Alan Turing in 1950 as a criterion for judging whether a machine can exhibit intelligence indistinguishable from a human’s. While the test has long served as a benchmark for AI systems, ChatGPT, with its ability to pass law exams and draft Seinfeld episodes, has reinvigorated interest in Turing and in how we measure machine intelligence.

Although Turing is best remembered for defining the parameters of his iconic test, his paper “Computing Machinery and Intelligence” also introduced concepts that would guide AI research decades later. Notable examples of Turing’s prescient insights include:

  • Neural network training: If a child-like machine brain is “subjected to an appropriate course of education one would obtain the adult brain;”
  • The effort needed to make neural networks work: “We cannot expect to find a good child-machine at the first attempt;”
  • Reinforcement learning: “We normally associate punishments and rewards with the teaching process;”
  • The probabilistic nature of large language models (LLMs): “It is probably wise to include a random element in a learning machine” (see the sketch after this list); and
  • How AI could be measured by advances in human games: “Many people think that a very abstract activity, like the playing of chess, would be best.”
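
Turing’s remarks on rewards, punishments, and a random element anticipate the exploration-exploitation trade-off at the heart of modern reinforcement learning. As a minimal illustrative sketch (the scenario, names, and parameters are our own, not Turing’s), an epsilon-greedy learner improves from reward signals while keeping the random element Turing recommended:

    import random

    ACTIONS = ["A", "B", "C"]
    HIDDEN_PAYOFF = {"A": 0.2, "B": 0.5, "C": 0.8}  # unknown to the learner

    def teacher(action):
        """Reward (1.0) or punishment (0.0) drawn from the hidden payoffs."""
        return 1.0 if random.random() < HIDDEN_PAYOFF[action] else 0.0

    def train(steps=5000, epsilon=0.1):
        """Epsilon-greedy: usually exploit the best-known action,
        but explore at random with probability epsilon."""
        estimate = {a: 0.0 for a in ACTIONS}
        count = {a: 0 for a in ACTIONS}
        for _ in range(steps):
            if random.random() < epsilon:          # the random element
                action = random.choice(ACTIONS)
            else:                                  # exploit current knowledge
                action = max(ACTIONS, key=estimate.get)
            reward = teacher(action)               # punishment or reward
            count[action] += 1
            # incremental average of rewards observed for this action
            estimate[action] += (reward - estimate[action]) / count[action]
        return estimate

    print(train())  # estimates drift toward the hidden payoffs; "C" wins out

Without the random element, such a learner can lock onto whichever action happened to pay off first; the small dose of randomness is what lets it keep discovering better options.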

The paper also introduced a way to test the efficacy of AI systems. Many more specialized benchmarks have since been created, such as the ImageNet challenge for computer vision (introduced by Fei-Fei Li), the KITTI benchmark for autonomous driving (created by Radical portfolio company founder Raquel Urtasun), and HELM for large language models.

Turing also made a quantitative claim that we can test today: “I believe that in about fifty years’ time, it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning.” It took a little longer, but today’s LLM-powered chatbots would likely clear Turing’s bar, fooling an average interrogator far more often than the 30% his criterion requires.
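
To put Turing’s storage figure in modern units, here is a back-of-the-envelope sketch; the 175-billion-parameter model size and two-bytes-per-parameter precision below are illustrative assumptions, not figures from the article:

    # Turing's predicted storage: about 10^9 binary digits (bits).
    turing_bits = 10**9
    print(turing_bits / 8 / 1e6, "MB")    # 125.0 MB

    # A hypothetical 175-billion-parameter LLM at 2 bytes per parameter:
    llm_bytes = 175e9 * 2
    print(llm_bytes / 1e9, "GB")                           # 350.0 GB
    print(llm_bytes * 8 / turing_bits, "x Turing's bits")  # 2800.0 x

In other words, Turing underestimated the storage involved by orders of magnitude, yet, as noted above, his behavioural prediction was only off by a couple of decades.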

While it would be decades before these ideas transformed computing, the frameworks Turing introduced more than 70 years ago proved an invaluable foundation for developing modern AI.

AI News This Week

  • Cohere’s Aidan Gomez on commercializing generative AI, and the moment the public tuned in  (The Logic – subscription may be required)

    ChatGPT may have awakened the public to generative AI, but there remains plenty of room in the market. Cohere CEO and Co-Founder Aidan Gomez was part of the team at Google Brain that invented the Transformer, the “T” in GPT and the AI architecture currently powering a wave of new generative technologies. Cohere’s platform lets companies better understand and leverage their language data and integrate machine-made text and dialogue into products and services. Murad Hemmadi of The Logic spoke with Aidan about generative AI and how Cohere – a Radical portfolio company – is making this technology “maximally accessible” for businesses.

  • Research: Antarctic meteorite collection sites revealed through machine learning  (Science)

    Meteorites provide a unique view into the origin and evolution of the Solar System. By combining datasets in a machine learning algorithm that estimates the probability of finding meteorites at any given location, researchers identified meteorite-rich areas with 80% accuracy. The data-driven approach will facilitate the quest to collect meteorites in a coordinated and cost-effective manner.

  • Research: Global warming, predicted by AI  (Stanford University)

    New AI-assisted research predicts the Earth could pass the global warming threshold of 1.5°C in the early 2030s. Previous assessments, which used global climate models, statistical techniques, and carbon budgets to calculate how quickly emissions would need to decline to stay below the Paris Agreement target of 2°C, were more optimistic. The new analysis uses a neural network trained on the vast archive of outputs from widely used global climate model simulations.

  • Africa-centric transfer learning  (arXiv)

    African languages are spoken by over a billion people but are underrepresented in NLP research and development. This paper makes progress towards closing that gap, focusing on transfer learning for the task of named entity recognition (NER). At the NeurIPS 2022 Black in AI workshop, David Adelani spoke about this important work in a talk titled “Cross-lingual Transfer for Named Entity Recognition: A study on African Languages.”

  • A Nigerian AI artist reimagines a stylish old age  (BBC)

    Malik Afegbua is making the case that AI can challenge us to create a better real world. At first glance, his images—called “Fashion Show For Seniors”—look like they were snapped on a fashion runway, but these models are not real people. Instead, the pictures are the result of Afegbua’s imagination working in conjunction with AI software, showing older-looking models in beautiful clothes. Afegbua says his work is teaching AI to become more creative, which, in turn, helps him make new discoveries.

Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).