Measuring AI startups by the right yardstick
Building a B2B AI startup is hard enough, between the struggle to obtain training data and the fight with major tech companies for talent. Building a B2B AI startup that is held to well-established software-as-a-service (SaaS) metrics is even harder. While many AI businesses, like their SaaS counterparts, deliver value via software monetized through a recurring subscription, the similarities between the two types of businesses end there.
AI startups are a different animal
SaaS products built without data and AI offer generalized solutions to their customers. AI businesses more closely resemble services businesses or consultancies because they provide solutions tailored to each customer’s specific needs. Like a services provider or consultant, an AI product improves as it gets to know a customer better (that is, as it collects more data from that customer through continued usage), and as it serves a broader customer base, from which it can extract best practices and make better predictions over a bigger data set.
Services revenue has been the antithesis of venture-style growth because it yields lower margins and lacks repeatability and scalability; as your services business brings on more customers, you will need to scale headcount accordingly to support those accounts, which keeps margins low. Palantir, a big data analytics unicorn, is one company mired in services demands. Unlike services providers, AI businesses have the potential to deliver that targeted and greater ROI at scale.
AI businesses are not scalable right out of the gate: AI models take time and require data to train. Moreover, not all AI businesses will scale. Here are the metrics we use to tell the difference early on.
Hype will make enterprise customers trigger-happy to pilot AI solutions, but at the end of the day, enterprise buyers choose the best solution available for their problems and don’t care whether that solution comes in the form of SaaS software, a consultancy or an AI product. It is very difficult to build a high-performing MVP version of an AI model without data from customers. To demonstrate value right out of the box and stay competitive against other vendors, you might automate whatever processes you can with a rules engine, and provide a human operator to perform the rest of the work while simultaneously labeling the collected data to train the AI.
As the AI improves over time, more of the work shifts from the human operator to the AI, with the human stepping in only when the AI falls below a predetermined accuracy or confidence threshold. This enables you to serve an increasing number of customers with a limited staff. Lilt, which provides machine translation for the enterprise, uses professional translators in this role. The translation AI automatically translates a text excerpt from one language to another; a human translator then reviews the text for translation errors and contextual corrections. As the translation AI improves, the human translator makes fewer corrections per translation task. More generally, the ratio of human interventions to total automated tasks should decrease over time.
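The intervention ratio described above is easy to instrument. Here is a minimal sketch; all names, fields and numbers are illustrative assumptions, not taken from any particular product:

```python
from dataclasses import dataclass

@dataclass
class Batch:
    """One reporting period of human-in-the-loop work (illustrative fields)."""
    automated_tasks: int      # tasks the AI completed end to end
    human_interventions: int  # tasks where a human had to correct or take over

def intervention_ratio(batch: Batch) -> float:
    """Share of all tasks that needed human help.

    Should trend toward zero as the model improves.
    """
    total = batch.automated_tasks + batch.human_interventions
    return batch.human_interventions / total if total else 0.0

# Example: month-over-month batches for one customer
history = [Batch(800, 200), Batch(900, 100), Batch(950, 50)]
ratios = [intervention_ratio(b) for b in history]  # 0.2, 0.1, 0.05
assert ratios == sorted(ratios, reverse=True)  # decreasing -> AI is ramping
```

A rising ratio for a given customer is an early warning that the model is not adapting to that customer’s data.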
As with SaaS products, exactly how that compounding improvement in AI performance translates into value for the customer is key to the startup’s long-term stickiness. The key difference with AI products is that once the AI’s performance ramps up, it can quickly exhaust the low-hanging-fruit opportunities. If the AI cannot continue to deliver new value, the drop-off in incremental value from one renewal cycle to the next may seem stark to the customer, who may decide not to renew.
Choosing the right applications of AI is key to enabling long-term payoffs and avoiding an ROI wall. Typically, applications that improve the customer’s bottom line face finite opportunities for improvement, while applications that improve the customer’s top line face no ceiling on opportunities to grow. For example, once an AI improves the operating efficiency of a production line to the point where it is rate-limited by the time it takes for the raw materials to chemically react, the AI can no longer create value for the customer in that specific application.
There are only so many opportunities to take out costs before you are constrained by the laws of physics. An AI that helps customers find new revenue opportunities, like Constructor.io, which provides AI-powered site search as a service and helps customers such as Jet.com increase cart conversions, will not hit that wall.
You should closely track the cumulative ROI for each customer over time to make sure the curve does not plateau and lead the customer to churn. Sometimes the longer-term application is harder to sell because its value is difficult to demonstrate immediately, and you might get a foot in the door with the cost take-out value proposition. Understanding the application’s ROI curve lets you design a longer contract period, so the AI has time to ramp on new problems before it exhausts the initial application. To retain customers, make sure their ROI keeps increasing over time rather than plateauing or tapering off.
Deploying an AI product is a complicated process that leaves you at the mercy of each customer’s idiosyncratic tech stack and org chart. AI needs data to train, so an AI product may take more time than a SaaS product to deliver value. Acquiring or creating data for the AI model, integrating the product into the customer’s tech stack and workflows, and getting the product to deliver value before the model is sufficiently trained on the customer’s data may all significantly impact your own bottom line.
Many sectors have only recently begun to digitize, and valuable data might be in difficult-to-extract formats, such as handwritten notes, unstructured observation logs or PDFs. In order to capture this data, you may have to spend significant manpower on low-margin data preparation services before AI systems can be deployed. Depending on how the data is captured and organized, your deployment engineer may have to build new integrations to a data source before the model can be fully functional.
The way data is structured might also vary from one customer to the next, requiring AI engineers to spend additional hours normalizing the data or converting it to a standardized schema so the AI model can be applied. Over time, these costs may decrease as you build up a library of reusable integrations and ETL pipelines.
Products sold by SaaS companies either work or they don’t. AI performance is not binary: it works less well out of the box and improves with more data. Each application and each customer will accept a different minimum algorithmic performance (MAP). The deployment process should get the product to that customer’s specific MAP, and you might fall back on Wizard of Oz stop-gap approaches to deliver MAP-level results until the model can perform at that level on its own.
If you are selling to customers that allow you to pool anonymized data, or to use a model trained on their data with other customers, the AI product will perform better “out of the box” with each subsequent customer. Customers of a sales acceleration platform, for example, can get immediate suggestions on how to optimally target a sales lead thanks to data pooled from the platform’s customer network.
AI products incur more significant ramp-up costs than a typical SaaS product rollout, and those costs can weigh on margins as much as customer acquisition costs (CAC). You should carefully track how much time these rollouts and ramp-ups take, and how much each new customer costs to onboard. If there are true data network effects, these numbers should decrease over time.
Unlike SaaS businesses that compete on new features, AI startups have an opportunity to build long-term defensibility. The AI startups that can scale will kick off a virtuous loop where the better the product performs, the more customers come on board to contribute and generate data, which improves the product’s performance. This reinforcement loop builds a compounding defensibility that was previously unheard of for SaaS businesses.
It’s too simplistic to merely aim for the largest volume of data. A defensible data strategy takes into account whether the appropriate data is being collected at a pace suited to the problem at hand. Ask yourself these questions to determine where you can strengthen your data strategy along the following dimensions:
Accessibility: how easy was it to get?
Time: how quickly can the data be amassed and used in the model?
Cost: how much money is needed to acquire and/or label this data?
Uniqueness: is similar data widely available to others who could then build a model and achieve the same result?
Dimensionality: how many different attributes are described in a data set?
Breadth: how widely do the values of attributes vary, such that they may account for edge cases and rare exceptions?
Perishability: will the data be useful for a long time?
AI models perform better with more data, but that performance may plateau over time. You should take care to track the time and volume of data necessary to achieve an incremental unit of value for your customer, to make sure that the data moat continues to grow. In short, how much time, and how much data, would a copycat need to match your level of performance?
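That question can be tracked as a marginal data cost: how much new data each incremental unit of customer value required. A rough sketch, where the function name, units, and sample numbers are all illustrative assumptions:

```python
def marginal_data_cost(data_volumes: list[float],
                       value_delivered: list[float]) -> list[float]:
    """Data required per incremental unit of customer value, period over period.

    A rising series signals diminishing returns on data; a copycat would need
    roughly your cumulative volume to match your performance (illustrative metric).
    """
    costs = []
    for i in range(1, len(data_volumes)):
        d_data = data_volumes[i] - data_volumes[i - 1]
        d_value = value_delivered[i] - value_delivered[i - 1]
        # Infinite cost when more data produced no new value that period
        costs.append(d_data / d_value if d_value > 0 else float("inf"))
    return costs

# Example: data volume grows 100 -> 300 -> 700 while value grows 10 -> 18 -> 22
print(marginal_data_cost([100, 300, 700], [10, 18, 22]))  # [25.0, 100.0]
```

A rising series is not necessarily bad for the business, but it tells you the moat is now depth of data rather than speed of learning.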
SaaS metrics aren’t enough
The heavier upfront work necessary to launch an AI business means that most will look more like services businesses, or will appear to underperform, when evaluated under the framework of SaaS metrics. A small subset of AI startups will resemble SaaS businesses from the beginning, before AI is deployed in the product. In order to collect data for their AI models, some businesses first sell SaaS workflow tools and can even achieve meaningful revenue from that workflow tool alone. By SaaS metrics, such a company may be blowing the competition out of the water. Without the reinforcement loop generating a compounding volume of data and an increasingly powerful AI over time, however, that company’s product remains vulnerable to copycats and will eventually be commoditized.
AI metrics capture this difference. AI offers the opportunity to deliver the customized, specialized ROI of a services business with the scalability of software, plus the ability to defend against copycats. The high startup costs of this approach to company-building may mean smaller early profits and a company built around different priorities than what has worked before. Vertical AI is so new as a category that many companies are not yet tracking these metrics, so we don’t yet have enough data points to establish benchmarks. In the meantime, these numbers will serve as helpful barometers for monitoring the health and performance of this new type of company.