The Rise of AI: What does it mean for billing?

Analysis | Mar 12, 2024

By Laura Lenz, Eugene Lee, Justin Ouyang and Taku Murahwi

We recently wrote about the billing cycle and the opportunity for innovation in the category. How will AI impact pricing, quoting, and billing? In this piece, we’ll outline our learnings and what we’ve been thinking about, including assumptions we’re fairly certain are correct, big questions that remain, and the opportunities that exist for value creation.

For anyone building with AI, the technology introduces new and unpredictable complexity into the cost of goods sold (COGS) and cost structure of a company. And it’s not yet clear what the use cases of tomorrow will look like. All this makes for an interesting pricing & packaging shift.

Let’s walk through an example: say you’re launching an AI chatbot. What additional costs might you incur? First, you need to build the chatbot – and then there are the related infrastructure and third-party service costs. Below, we outline what development costs might look like for a typical product before and after adding an AI chatbot.

A few important notes before we look at that breakdown:

  • We know that users can benefit from their own history and the history of other users, so the chatbot needs to store previous conversations.

  • As a result, inputs to a generative AI model will include context from the user query, prior chat history, and fine-tuning based on our own data.

  • This is a simplified example. As we know, each product and company can end up leveraging multiple models, databases, etc. based on what works best in each use case.

  • AI models are often priced on a pay-as-you-go basis, based on input and output tokens. A token is roughly equivalent to 4 characters.
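
To make that concrete, here is a minimal sketch of how a team might estimate per-request cost from token counts. The model names and per-1K-token prices are placeholders for illustration, not actual vendor rates.

```python
# Minimal sketch: estimating per-request LLM cost from token counts.
# Model names and per-1K-token prices are illustrative, not real rates.

PRICE_PER_1K_TOKENS = {
    "big-model":   {"input": 0.0100, "output": 0.0300},
    "small-model": {"input": 0.0005, "output": 0.0015},
}

def estimate_tokens(text: str) -> int:
    """Rough heuristic: a token is roughly 4 characters."""
    return max(1, len(text) // 4)

def estimate_request_cost(model: str, prompt: str, expected_output_chars: int) -> float:
    """Estimate the USD cost of a single request."""
    prices = PRICE_PER_1K_TOKENS[model]
    input_tokens = estimate_tokens(prompt)
    output_tokens = max(1, expected_output_chars // 4)
    return (input_tokens / 1000) * prices["input"] + (output_tokens / 1000) * prices["output"]

# A support query plus prior chat history passed along as context:
prompt = "How do I reset my password?\n" + "...prior conversation history..." * 20
print(f"${estimate_request_cost('big-model', prompt, expected_output_chars=2_000):.4f}")
```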

Introducing AI results in three additional costs: product development, infrastructure needs, and new third-party services. We’ll need to use an AI model like OpenAI’s GPT-4; store our user information (chat history, context, etc.) in a vector database; and connect that all together and deliver it to the end user.

Of these, the third-party AI services are the most unpredictable cost bucket. The services are priced on a simple consumption-based model (e.g. cents per token), but demand and customer usage will be hard to predict. And the biggest issue of all is that these costs don’t easily translate to the pricing & packaging that a customer sees and expects. How should a company pass these costs on to its customers?

This is the crux of the problem. As AI enters the product value chain, it becomes increasingly hard to know and forecast customer usage and how that translates into costs for the company. This might be the first time we’re adding a third-party service whose cost doesn’t obviously scale with the company’s revenue. With customer usage and cost to the company both volatile, we expect a mixture of gates and observability on both ends of this demand-and-cost relationship.
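
As an illustration of what a gate on the cost side could look like, here is a rough sketch of a per-customer spend cap that decides whether to serve the AI feature or fall back to something cheaper. The budget figures and fallback behavior are assumptions made for the example.

```python
# Rough sketch: a per-customer spend gate on AI usage.
# Budget amounts and the fallback behavior are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class CustomerMeter:
    monthly_budget_usd: float
    spent_usd: float = 0.0

    def can_serve(self, estimated_cost_usd: float) -> bool:
        return self.spent_usd + estimated_cost_usd <= self.monthly_budget_usd

    def record(self, cost_usd: float) -> None:
        self.spent_usd += cost_usd

meters = {"acme-co": CustomerMeter(monthly_budget_usd=50.0)}

def handle_request(customer_id: str, estimated_cost_usd: float) -> str:
    meter = meters[customer_id]
    if not meter.can_serve(estimated_cost_usd):
        return "fallback"        # e.g. canned answer, cheaper model, or human handoff
    meter.record(estimated_cost_usd)
    return "ai_response"

print(handle_request("acme-co", estimated_cost_usd=0.12))
```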

As something both expensive and unpredictable, AI has only further supported the shift from perpetual, to subscription, to consumption pricing. Perpetual and subscription clearly involve too much risk. Will AI shift us beyond that, towards value- or results-based pricing? Only time will tell. We’ve already seen the emergence of API-first, real-time, and usage-based models. But with AI, each of those factors only gets more important. Usage can be sporadic, dictated by end customers (not the service purchaser), and evaluated every second instead of just every day. We’re at an interesting time where businesses must balance the need for innovation with the cost to innovate.

Known knowns and things we’re fairly certain about:

  • This is a rapid platform shift, driven by fast adoption and a consistent wave of ROI stories.

  • AI further accelerates the rise of real-time, API-first, usage-based pricing. The demand for real-time metering is real and only growing (see Metronome and OpenAI).

  • The downfall of Zuora continues. Companies are increasingly looking to adopt usage- and consumption-based pricing, whether that’s a pay-as-you-go model or a credit burn down. Zuora missed the mark here, failing to innovate, and subscription is now the perpetual license of today.

  • Salesforce will need to re-platform Configure, Price, Quote (CPQ) and/or Billing to stay relevant. We expect them to focus on modernizing their CPQ product and sunsetting the old Apttus architecture.

  • The market for AI follows a classic price-demand curve. Commoditization in different parts of the stack drives downward pricing pressure, with no change in quality. Moreover, the market also scales up as prices fall: consumers can apply AI in many aspects of their life and work, and would do so at the right price. It’s not one AI agent and done (the way it is with, say, one cell phone).

  • AI products are increasingly embedded, and this model looks likely to persist.

  • Pricing is both consumption-based (i.e. driven by the end user’s usage) and capacity-based (i.e. based on the company’s understanding of the AI service’s cost, availability, and impact on other services). Cost can be a prohibitive factor for large language model (LLM) usage at scale.

Trends we’re watching and questions we have:

  • It feels like we’re still in the experimentation phase of how to price AI-first products. How do companies cover their additional costs while also aligning value capture with value delivered? We’re in wait-and-see mode on whether value- or results-based pricing (e.g. Fin from Justin’s former employer Intercom) becomes the new standard over seat-based pricing. As markets prioritize efficiency, value- or results-based pricing may become the preferred model. It’s easier to sell and adopt – perfect for new entrants and new products – and headcount may never scale as it did during the peak low-interest-rate environment. But it comes with greater risk of a revenue / cost mismatch for the company.

  • As we move towards multi-modal models, we expect different modalities to carry different costs and thus different prices. Some languages are more expensive than others. Images will scale up in price based on resolution. Video and audio will also have their own pricing scales. How will companies account for this in COGS? And if they choose to pass some of these costs on to the end customer, how will they do it?

  • We’re early in the evolution of AI form factors. We’ve seen the rise of co-pilots and the emergence of agents. Agents can transcend many applications and use cases, so how do you price them? When AI evolves to new form factors beyond agents, we suspect pricing & packaging will need to change again. Additionally, our first form factors (i.e. co-pilots) favored incumbents, but the future may give an advantage to new entrants, who can always look to pricing & packaging as a method of counter-positioning.

  • Do foundation models use pricing as a growth lever? In a competitive and highly capital-intensive business, do they sacrifice margins or create a temporary loss-leader to scale up demand and catch up to rivals?

  • With such variable, unpredictable cost, how does the need for profitability affect how companies integrate AI into their products? What types of observability, controls, and security will they need to have in place?

  • Do companies like Reddit and Google shift from ad companies to data companies, as outlined by Tomasz Tunguz? Does AI bring new revenue models for existing products, each with its own unique pricing & packaging designed to align value and cost for the customer?

  • If the world moves to system <> system interactions vs. system <> human <> system, how does billing & payments change in a world of AI with autonomy?

Where we see opportunity:

  • We see the fundamental need to understand cost as a major opportunity.

  • First, a business needs to understand the estimated cost of AI services before consuming them. Translating use cases into tokens, frequency, and cost is important in deciding which models to start using. For example, if we are going to use OpenAI, what does it generally cost to generate a video guide for a customer support query in Spanish?

  • Next, real-time estimates of cost. If the above can be done in real time, cost can be passed on to, or made transparent to, the end user. Companies can use this to better price their own products and understand their margins. In a price-constrained world, the company can also choose whether to offer an AI-enabled feature to a given customer at all. To build on our example: for this specific customer’s inquiry at 2 pm UTC, how much will it cost to offer a video guide for a Spanish-language customer support query using OpenAI, and should we provide it?

  • Finally, if LLM services become interchangeable, there is an opportunity to dynamically select, and price based on, the most appropriate AI service for the specific use case. Further building on our example: if we know the cost across a few different models for that customer inquiry, which model should we use? (A rough sketch of this kind of cost-aware estimation and routing follows this list.)

  • The intersection of business-to-business systems. Today, companies bill each other either automatically or via invoice. Those invoices are typically reviewed and then sent out for payment. How will this process work in the future as we remove humans from the loop and rely on AI for a manual, repetitive, but critical process?

  • What teams and functions will be responsible for the costs that come with leveraging generative AI in your products? Is this DevOps? Will they need a new set of tools to manage, monitor, and control third-party AI?

  • With Salesforce focused on CPQ, and more off-the-shelf solutions for metering, does this create an opportunity for Billing? We’d expect CPQ to typically be the wedge (slightly easier to displace than Billing), but does this change?
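
Tying the cost-related ideas above together (pre-consumption estimates, real-time cost, and interchangeable models), here is a rough sketch of cost-aware model selection. The model names, per-token prices, and quality scores are made up for illustration; a real system would plug in live rate cards and its own quality benchmarks.

```python
# Sketch: pick the cheapest model that clears a quality bar for a request.
# Model names, per-1K-token prices, and quality scores are illustrative only.

MODELS = [
    # (name, USD per 1K input tokens, USD per 1K output tokens, quality score 0-1)
    ("large-model",  0.0100, 0.0300, 0.95),
    ("medium-model", 0.0010, 0.0020, 0.85),
    ("small-model",  0.0002, 0.0006, 0.70),
]

def estimate_cost(in_tokens: int, out_tokens: int, in_price: float, out_price: float) -> float:
    return (in_tokens / 1000) * in_price + (out_tokens / 1000) * out_price

def pick_model(in_tokens: int, out_tokens: int, min_quality: float) -> tuple[str, float]:
    """Return the cheapest eligible model and its estimated request cost in USD."""
    candidates = [
        (name, estimate_cost(in_tokens, out_tokens, in_price, out_price))
        for name, in_price, out_price, quality in MODELS
        if quality >= min_quality
    ]
    return min(candidates, key=lambda c: c[1])

# Example: a Spanish-language support inquiry with a fairly long generated answer.
model, cost = pick_model(in_tokens=1_200, out_tokens=900, min_quality=0.80)
print(model, f"${cost:.4f}")
```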

The need for real-time cost and pricing is more important than ever. As this landscape evolves, with new form factors, new use cases, and new pricing, we remain excited for the future. If you’re building in this space, please reach out.