A recent visit to the AI Agents Conference in NYC highlighted a common theme: companies grappling with the challenges that arise when AI agents enter production. Talks and booths centered on observability, governance, and data substrates, with vendors pitching solutions to the problems that have emerged. The open question: what will still be relevant in a couple of years, and which of these offerings is truly defensible and durable?
The traditional SaaS model, which bundled expensive engineering investments and domain expertise into a tool, is breaking down. With the rise of direct-from-imagination technology, engineering labor is becoming increasingly affordable. As a result, companies are shifting their focus from the size of their engineering teams to the revenue they can generate per engineer.
The old software model was based on under-utilization, with the most profitable SaaS companies often being those whose customers underused their products. However, pricing is moving towards a ‘token markup’ model, where outcomes are more valuable, but margins are compressed due to the cost of running large language models (LLMs). This has led many companies to bet on new moats, such as encoded domain expertise, to replace the old ones.
While encoded domain expertise may work in the short term, it is uncertain whether this will be a durable solution. Prompt architecture is text-based and portable, and the expertise underlying it is often abundant. The future of this category may lie in open marketplaces of prompt architecture and crowdsourced best-practices, rather than trade secrets.
Data substrates are another area of focus, with companies building tools to connect data sources, govern access, and satisfy compliance requirements. As AI agents demand 100-1000x more data than traditional web apps, the need for tools to manage this data is clear. But here too the question remains: which approach to this challenge will prove effective and sustainable?
