by Jordan Jacobs, Managing Partner and Co-Founder, Radical Ventures
The news of Sam Altman’s sudden firing from OpenAI (and reported efforts to bring him back) caught much of the tech world by surprise. But, on closer inspection, the roots of this fissure were always visible. At its heart, OpenAI faces a problem that is not just about how much attention to pay to safety (i.e. alignment with human values) and thus how fast or slow to go in commercializing its products. The tensions at the board level reflect a misalignment between the company’s stated mission and the commercial execution that created ChatGPT and a company worth tens of billions of dollars.
OpenAI’s mission, as defined by its charter, is to build artificial general intelligence (AGI) and to ensure it “benefits all of humanity.” It is not a mission that can be easily reconciled with commercializing AI products, which is why OpenAI was initially designed as a non-profit research organization rather than a company. In its pursuit of AGI, it became apparent that the compute costs of training Large Language Models (LLMs) at huge scale would necessitate funding that could perhaps only be supported by investors seeking a commercial return. (Of course, opening ChatGPT to the world also has the benefit of users providing feedback that helps train the models at scale.)
OpenAI’s efforts have had extraordinary impact in accelerating global understanding and adoption of AI and have likely reshaped the technology landscape forever. However, the speed and focus needed to win the ultra-competitive LLM products and platform race in a way that benefits the company come at the expense of a research-focused mission to achieve AGI that benefits all. In other words, how do you reconcile winning the AI products and platform race for the benefit of one company while at the same time trying to create AGI that is beneficial for everyone? When viewed through this lens, the boardroom drama reflects this contradiction between stated mission and practical execution.
The tension that led to Sam Altman’s unexpected departure was not a sudden rupture, but rather a culmination of inherent misalignment. The conflict underscores the challenge faced by companies striving to align their mission with their execution – a challenge not unique to OpenAI. We may see similar upheaval at companies where inconsistencies between the original mission and the practical execution create cracks within teams whose members joined for different reasons. Those cracks may, over time, widen into chasms.
When investing, we think a lot about a founding team’s mission and the alignment of that mission with how things will practically unfold in building a business. It is not impossible to have a seemingly utopian mission inside a company. But it does require a lot of thought and communication to reach a common understanding among the founding team (and subsequent investors and other stakeholders) about how the mission will be realized and, in the case of very capital intensive businesses, what you must be prepared to do in order to raise that capital.
When we began working with the founders of Cohere before its incorporation in 2019, we perceived this conflict at the heart of OpenAI and thought it could be very hard to reconcile long-term. So the mission at Cohere was defined very clearly: to make transformer-based generative language AI safely and securely available to businesses globally, and to build a cloud-agnostic platform and products in service of that mission. So, while the company is deeply committed to building safe and responsible AI, there is no tension between its stated mission and the work it pursues.
For startups, the ‘why’ of the mission cannot be in conflict with the ‘what and how’ of the execution of that mission. To succeed at building a generational business, a team must be unified in believing that ‘what and how’ they are building is in support of ‘why’ they are building. It is not impossible to re-align when these two core elements fall out of sync (something I’ve experienced as an AI company founder). But getting back in sync is unlikely to be a smooth ride for anyone involved. Starting with that alignment – and ensuring that challenges are met with constant open communication – helps give a team the unified clarity of purpose that is foundational to success.