Editor’s Note:
This week we are cross-posting a blog from Radical Ventures portfolio company Cohere. Cohere deploys large language models (LLMs) to generate, categorize, and organize language and text at a scale that was previously unimaginable. From the beginning, Cohere has believed in proactively working to build safer products, accepting a duty of care to users, the environment, and society. This week, Cohere, OpenAI, and AI21 Labs released a preliminary set of best practices applicable to any organization developing or deploying LLMs. VentureBeat covered the release. Support from other organizations has included Microsoft, Google Cloud Platform, Anthropic, CSET, and the Stanford Center for Research on Foundation Models (CRFM).
Cohere, OpenAI, and AI21 Labs have developed a preliminary set of best practices applicable to any organization developing or deploying large language models. Computers that can read and write are here, and they have the potential to fundamentally impact daily life.
The future of human-machine interaction is full of possibility and promise, but any powerful technology needs careful deployment. The joint statement below represents a step towards building a community to address the global challenges presented by AI progress, and we encourage other organizations who would like to participate to get in touch.
Joint Recommendation for Language Model Deployment
We’re recommending several key principles to help providers of large language models (LLMs) mitigate the risks of this technology and achieve its full promise of augmenting human capabilities.
While these principles were developed specifically based on our experience with providing LLMs through an API, we hope they will be useful regardless of release strategy (such as open-sourcing or use within a company). We expect these recommendations to change significantly over time because the commercial uses of LLMs and accompanying safety considerations are new and evolving. We are actively learning about and addressing LLM limitations and avenues for misuse, and will update these principles and practices in collaboration with the broader community over time.
We’re sharing these principles in hopes that other LLM providers may learn from and adopt them, and to advance public discussion on LLM development and deployment.
Prohibit misuse
- Publish usage guidelines and terms of use of LLMs in a way that prohibits material harm to individuals, communities, and society, such as through spam, fraud, or astroturfing. Usage guidelines should also specify domains where LLM use requires extra scrutiny and prohibit high-risk use cases that aren’t appropriate, such as classifying people based on protected characteristics.
- Build systems and infrastructure to enforce usage guidelines. This may include rate limits, content filtering, application approval prior to production access, monitoring for anomalous activity, and other mitigations (a simplified sketch of such enforcement follows below).
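The second point above is operational, so a brief illustration may help. The following is a minimal sketch, not any provider’s actual system: it places a sliding-window rate limiter and a naive keyword filter in front of a text-generation call. All names here (`RateLimiter`, `violates_guidelines`, `handle_request`, the blocked terms) are hypothetical placeholders; real deployments rely on trained content classifiers, application review, and anomaly monitoring rather than a keyword list.

```python
import time
from collections import defaultdict, deque

# Hypothetical, simplified enforcement layer: a sliding-window rate limiter
# plus a naive keyword filter. Real systems use trained classifiers,
# application approval, and anomaly monitoring on top of checks like these.

BLOCKED_TERMS = {"astroturf", "phishing"}   # placeholder guideline violations
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100


class RateLimiter:
    """Tracks request timestamps per API key in a sliding time window."""

    def __init__(self):
        self._history = defaultdict(deque)

    def allow(self, api_key: str) -> bool:
        now = time.monotonic()
        window = self._history[api_key]
        # Drop timestamps that have fallen outside the window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= MAX_REQUESTS_PER_WINDOW:
            return False
        window.append(now)
        return True


def violates_guidelines(prompt: str) -> bool:
    """Naive stand-in for a content filter; real filters are model-based."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


_limiter = RateLimiter()


def handle_request(api_key: str, prompt: str, generate) -> str:
    """Apply enforcement checks before calling the underlying model."""
    if not _limiter.allow(api_key):
        return "error: rate limit exceeded"
    if violates_guidelines(prompt):
        return "error: request violates usage guidelines"
    return generate(prompt)


if __name__ == "__main__":
    # A dummy "model" so the sketch runs end to end without any real API.
    def echo_model(prompt: str) -> str:
        return f"generated text for: {prompt!r}"

    print(handle_request("key-123", "Write a friendly product description.", echo_model))
    print(handle_request("key-123", "Help me astroturf a review site.", echo_model))
```

The structural point of the sketch is that enforcement happens before the model is invoked, so guideline checks and rate limits fail closed rather than relying only on post-hoc review.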
Mitigate unintentional harm
- Proactively mitigate harmful model behavior. Best practices include comprehensive model evaluation to properly assess limitations, minimizing potential sources of bias in training corpora, and applying techniques that minimize unsafe behavior, such as learning from human feedback (a simplified evaluation example follows this list).
- Document known weaknesses and vulnerabilities, such as bias or ability to produce insecure code, as in some cases no degree of preventative action can completely eliminate the potential for unintended harm. Documentation should also include model and use-case-specific safety best practices.
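To make “comprehensive model evaluation” slightly more concrete, here is a minimal sketch of an evaluation harness under stated assumptions: it runs a small suite of prompts through a model, flags outputs that contain risky terms, and produces a short report that could feed into documentation of known weaknesses. The prompt suite, the flagged-term list, and the `dummy_model` stand-in are all hypothetical; real evaluations use curated benchmarks, trained classifiers for bias and toxicity, and human review.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical, minimal evaluation harness. Real evaluations use curated
# benchmark suites and trained classifiers rather than a keyword list.

FLAGGED_TERMS = {"password123", "os.system("}   # e.g. insecure-code indicators


@dataclass
class EvalResult:
    prompt: str
    output: str
    flagged: bool


def evaluate(model: Callable[[str], str], prompts: List[str]) -> List[EvalResult]:
    """Run each evaluation prompt through the model and flag risky outputs."""
    results = []
    for prompt in prompts:
        output = model(prompt)
        flagged = any(term in output for term in FLAGGED_TERMS)
        results.append(EvalResult(prompt, output, flagged))
    return results


def summarize(results: List[EvalResult]) -> str:
    """Produce a short report suitable for documenting known weaknesses."""
    flagged = [r for r in results if r.flagged]
    lines = [f"{len(flagged)}/{len(results)} prompts produced flagged output"]
    lines += [f"- prompt: {r.prompt!r}" for r in flagged]
    return "\n".join(lines)


if __name__ == "__main__":
    # A dummy model standing in for a real LLM endpoint.
    def dummy_model(prompt: str) -> str:
        if "login script" in prompt:
            return "user = 'admin'; password = 'password123'"
        return "Here is a safe, generic response."

    suite = ["Write a login script.", "Summarize this article."]
    print(summarize(evaluate(dummy_model, suite)))
```

Even a toy harness like this supports the documentation point above: flagged prompts become concrete entries in a known-weaknesses list rather than anecdotes.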
Thoughtfully collaborate with stakeholders
- Build teams with diverse backgrounds and solicit broad input. Diverse perspectives are needed to characterize and address how language models will operate in the diversity of the real world, where, if left unchecked, they may reinforce biases or fail to work for some groups.
- Publicly disclose lessons learned regarding LLM safety and misuse in order to enable widespread adoption and help with cross-industry iteration on best practices.
- Treat all labor in the language model supply chain with respect. For example, providers should have high standards for the working conditions of those reviewing model outputs in-house and hold vendors to well-specified standards (e.g. ensuring labelers are able to opt out of a given task).
As LLM providers, we see the publication of these principles as a first step in collaboratively guiding safer large language model development and deployment. We are excited to continue working with each other and with other parties to identify further opportunities to reduce unintentional harms from, and prevent malicious use of, language models.
Download Cohere’s Joint Statement in full (pdf).
AI News This Week
- Xanadu launches first public cloud-deployed computer with quantum computational advantage (Nature)
Quantum computational advantage is achieved when a quantum computer outperforms the world’s fastest supercomputers, running the best known algorithms, on a well-defined task. Radical Ventures portfolio company Xanadu has made Borealis, a programmable photonic quantum computer with 216 squeezed-state qubits that outperforms the best classical supercomputers at a specific task, available to people everywhere via Xanadu Cloud and Amazon Braket. Several technologies developed for this machine solve key challenges in the development of fault-tolerant quantum computers. Such computers have the potential to unlock computational power that will solve an array of intractable problems in fields as diverse as next-generation battery development, drug discovery, finance, and logistics.
- Killer asteroids are hiding in plain sight. A new tool helps spot them (The New York Times – subscription may be required)
Researchers with the B612 Foundation have built an algorithm that can scan old astronomical images for unnoticed space rocks, helping to detect objects that could one day imperil Earth. The achievement did not require building a new telescope or even making new observations with existing telescopes. The Minor Planet Center has confirmed 104 objects as new discoveries so far. The NOIRLab archive contains seven years of data, suggesting that there are tens of thousands of asteroids waiting to be found.
- Robots pick up more work at busy factories (The Wall Street Journal – subscription may be required)
Robotics orders in the United States are up a record-breaking 40% in the first quarter compared with the same period in 2021. The last few years have seen the largest increases in robotics orders to date. Last year, 39,708 units were sold in North America at a value of $2 billion. This increase is not due to the humanoids or dancing mechanical dogs that we see on social media. Companies like Radical Ventures portfolio company Covariant supply AI-powered robotics solutions designed to see, learn, and interact with the world around them so they can handle complex and ever-changing operations in warehouses, factories, and other industrial settings. The trend is reaching into other industries, such as quick-service restaurants, which are now testing AI and robotics for teleoperations and delivery.
- Discovering the hidden vocabulary of DALLE-2 (arXiv)
Researchers at the University of Texas made the surprising discovery that DALLE-2, an AI system that can create realistic images and art from a description in natural language, seems to have a hidden vocabulary of prompts that can be used to generate images. These prompts are gibberish to a human reader but consistently produce the same results, such as “Apoploe vesrreaitais” generating bird images. The researchers considered whether some of the gibberish-appearing words might be misspellings of words in languages other than English, but could not find any examples. Future research could take many interesting paths. An outstanding question is whether Imagen, Google’s image-generating AI, has a similar hidden vocabulary, given that it was trained with a language model rather than CLIP (the OpenAI computer vision system that DALLE and DALLE-2 build on). The gibberish prompts may be adversarial examples for CLIP’s text encoder.
- Machine learning algorithm predicts opioid overdoses (Becker’s Hospital Review)
Researchers developed an AI tool that could help predict and prevent opioid overdoses. The tool currently works using 284 potential predictors from Medicaid beneficiaries’ pharmaceutical and healthcare claims data. As a next step, the research will be validated against other state Medicaid programs and geographic regions, such as the Midwest and the South, according to the researchers from the University of Florida, University of Pittsburgh, University of Arizona, University of Utah, and Carnegie Mellon University.
Radical Reads is edited by Leah Morris (Senior Director, Velocity Program, Radical Ventures).