Radical Reads

How Delta is making us rethink return-to-office and where AI can help

By Meg Sahni, Senior Director of Talent


Photo Source: Digiday

Over the past few weeks, many companies have pushed back their return-to-work dates, including major tech firms such as Amazon, Google, Apple, Microsoft, and Uber. Along with rethinking timelines for a full return to the office, companies are also mandating proof of vaccination. For companies that are returning, hybrid or phased-in models are common. The fluidity in return-to-office policies reflects both concerns about the Delta variant and how much the corporate world has adapted to the virtual workplace.

Most companies – 60%, according to one survey – are looking to embrace a hybrid model permanently. Companies have adopted software like Zoom, Slack, and Teams to enable collaboration. But new platforms and software powered by AI were also deployed and refined during the pandemic. These tools address in-office safety through contact tracing, health check-ins, contagion-risk alerts, and social-distancing notifications that flag overcrowded spaces. Mindfully tuning algorithms on our use of space could provide insights beyond contagion-risk notifications to improve office safety and utilization. In addition to AI-powered software platforms, big tech players are also exploring the VR meeting space. This week, Facebook announced its ‘Horizon Workrooms’ for Oculus, propelled by a belief that the pandemic has permanently changed the way we work. It is one more sign that digital technologies are reshaping how we work and collaborate across virtual and physical spaces.

AI News This Week

  • On the opportunities and risks of foundation models  (Stanford CRFM)

    In this paper, more than 175 researchers across 10 departments at Stanford University discuss the opportunities and risks presented by what they call “foundation models,” which are essentially massive AI models that use breakthrough technologies such as transformers to enable significant AI advances across different disciplines. These foundation (or ‘universal’) models are trained on supercomputers using vast amounts of data, potentially resulting in platform software with much greater flexibility than previous models, which tended to perform well only in the specific application for which they were trained. (The underlying transformer technology was co-created by Aidan Gomez, co-founder and CEO of Radical Ventures portfolio company Cohere, which itself provides a foundation model for natural language (NLP/NLU), enabling other companies to build language applications using Cohere’s API).

    Called the Center for Research on Foundation Models (CRFM), the initiative was born from the Stanford Institute for Human-Centered Artificial Intelligence (HAI), co-led by renowned Stanford computer vision Professor Fei-Fei Li (also the co-founder of Radical portfolio company DawnLight, which uses foundation models for computer vision and healthcare). The CRFM’s first report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles to their applications (e.g., law, healthcare, education) and societal impact.

  • Now that machines can learn, can they unlearn?  (Wired)

    The next generation of AI technology is being built, and machine ‘unlearning’ is surfacing as a new research area. Unlearning seeks to induce “selective amnesia” in AI software. The goal is to remove all traces of a particular person or data point from a machine learning system, without affecting its performance. If made practical, the concept could give people more control over their data and its value.
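    The goal is easiest to see with a toy model whose only “parameters” are summary statistics of the training data. This minimal sketch (illustrative only; not from the article or any specific unlearning paper) treats a running mean as the model and unlearns one record exactly by subtracting its contribution, then checks the result against retraining from scratch on the remaining data:

    ```python
    # Toy "exact unlearning": the model's only parameters are the count
    # and mean of the training records. Removing a record's contribution
    # directly is far cheaper than retraining, yet leaves no trace of it.
    import numpy as np

    data = np.array([2.0, 4.0, 6.0, 8.0])
    n, mean = len(data), data.mean()       # "trained" state: n=4, mean=5.0

    # Unlearn the record 8.0 without revisiting the remaining data.
    removed = 8.0
    mean = (n * mean - removed) / (n - 1)  # exact incremental update
    n -= 1

    # Verify: identical to retraining from scratch without the record,
    # which is the correctness standard unlearning methods aim to match.
    assert mean == np.array([2.0, 4.0, 6.0]).mean()
    print(n, mean)  # 3 4.0
    ```

    For deep networks, no such closed-form update exists, which is why making this guarantee practical at scale is an open research problem.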

  • How the US tech giants could fall  (Financial Times)

    “Since the 1970s, companies that finished a decade in the global top 10 have had a less than one in five chance of finishing the next decade there.” We should continue to expect competition and churn over the next ten years. The next iteration of the internet, likely to incorporate AI and augmented or virtual reality, is already underway, with newer and significantly smaller companies gaining ground.

  • Computers suck at ‘common sense’ — AI expert explains why   (The Next Web)

    Common sense may be the missing element needed to achieve artificial general intelligence, and it is a moonshot AI problem for this generation. We often take for granted the ability of the human mind to generalize. AI researcher Mayank Kejriwal offers his perspective on why the problem is so challenging.

  • Xanadu and imec partner to develop photonic chips for fault tolerant quantum computing   (Canadian Manufacturing)

    AI applications such as autonomous vehicles, advanced robots, and predictive healthcare have an “insatiable need for processing speed.” Adding silicon photonics to computing platforms is one way to address this pain point. Radical portfolio company Xanadu, a full-stack photonic quantum computing company, and imec, a world-leading research and innovation centre in nanoelectronics and digital technologies, have announced a partnership to develop the next generation of photonic qubits based on ultra-low loss silicon nitride (SiN) waveguides – offering a more promising path to fault-tolerant quantum computing.

Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).