by Molly Welch, Radical Ventures
Image: “A universe of AI apps being funnelled into a worker’s laptop” prompt, Ideogram AI, Jan. 2024.
If you spent time in 2023 AI startup land, you certainly heard companies described as “copilot for X,” or assistive applications that promised to make particular functions or tasks faster and easier. At the same time, you may have also heard some of the same applications called — derisively — “a thin wrapper on GPT-X.” What to make of this? Were 2023’s AI apps copilots or wrappers?
The answer is that copilots and wrappers are descriptions of the same phenomenon. Both wrappers and copilots are, ultimately, appendages: though inadvisable, a plane can fly without a copilot, and a wrapper is the prelude to a core product within. In this sense, the wrapper vs. copilot debate is a false dichotomy that obscures a deeper truth — both wrappers and copilots are “nice-to-haves.” They’re not the main thing.
Contrast this with agents, another popular 2023 AI phenomenon. Agents are definitionally the main thing. They’re active, empowered, and, from a software perspective, fully automated. In 2023, we saw early examples of agents, which were replete with ambition but limited in capability. The vision nonetheless captivated users and developers, with BabyAGI and AutoGPT skyrocketing to GitHub stardom.
These linguistic tugs-of-war are revealing. We wanted agents in 2023, but got copilots and wrappers. There were early and promising demonstrations of LLMs and diffusion models applied in various domains, but the app layer was overall nascent. Vertical apps were only beginning to emerge, and there were even fewer de novo AI applications, or net new, previously impossible AI-native product experiences with generative models at their core. For the most part, we saw a proliferation of bolted-on conversational interfaces, plugged into current products.
In this sense, 2023 was a false start for the AI application revolution. As foundation models have matured, and the beginnings of an infrastructure stack have formed around them, 2024 is the real starting gun. This year, we’ll see the advent of AI analysts, or AI-native software applications that mechanize cognitive tasks, with high degrees of automation but humans in the loop.
This is a big deal. Why?
For one, the white space for software applications is big. SaaS application businesses, including both horizontal and vertical SaaS, represent almost 75% of public SaaS companies. There are simply more “jobs to be done” by application software.
But much more importantly, AI-native applications can make that already-large white space bigger. Generative models are reasoning engines. They can have natural language conversations and synthesize large volumes of unstructured, multimodal data to generate conclusions. The effect is that they can automate some rote, mechanical cognitive work today. Services spend may begin shifting to software spend.
This has big implications. Per the WTO, services represented over 75% of GDP in developed economies in 2019. LLM-powered apps that can replace some professional services may represent a new generation of vertical software. There may also be a “re-platforming” if legacy horizontal SaaS providers are slow to incorporate generative models. And perhaps most exciting, entirely new kinds of horizontal software applications will likely emerge.
Finally, growth in LLM applications means more demand for the infrastructure required to power them. The obvious proof point for this maxim, so far, has been NVIDIA. Beyond compute, however, it is increasingly clear that LLMs, and generative models more broadly, will require different types of software infrastructure to be put into production. The bulk of public software companies may be application companies, but the software companies that durably command premium public market valuations are infrastructure businesses. For instance, the majority of public companies that have continued to trade at >10x ARR through the “SaaSacre” have been infrastructure software, led by players like Snowflake, GitLab, and Datadog.
Taken together, this AI application ecosystem — encompassing vertical SaaS, next-gen horizontal platforms, and new LLM infrastructure — represents a once-in-a-generation commercial transformation. 2024 will be the year AI starts eating software and services alike.
In a series of posts, I’ll unpack each of these categories and some of the most promising startup opportunities within them. Specifically:
Next-gen vertical applications: What categories beyond legal are poised for transformation and why? How will AI-native apps differentiate?
Horizontal apps: What examples so far do we have of de novo applications, and what is on the horizon? What legacy horizontal SaaS may be disrupted?
LLM infrastructure: Evolving categories across the LLM stack, with a particular eye towards outstanding challenges in observability and evaluation.
Of course, the AI-enabled technology transformation surfaces important questions about work and labor markets. Overall, I believe that AI analysts will enable people to focus on higher-value, more interesting, and more creative tasks, as with previous waves of automation. But different people, and geographies, may be affected differently, and this is a key line of exploration for startups and society more broadly.
The capabilities of generative models were on full display in 2023. The spotlight in 2024 will be on their applications.
Radical Ventures Investor Molly Welch writes a regular column on how AI technologies are shaping the startup landscape.