Not all applications are ready for AI, despite recent major advances in the field and enabling infrastructure. Anxiety over the prospect of being disrupted is prompting leaders in all industries to experiment with AI-powered solutions. This makes it difficult for aspiring entrepreneurs to distinguish C-suite curiosity from a long-term intention to buy.
If AI startups want to move their work beyond the pilot stage toward sustainable, long-term growth, they should avoid chasing opportunities where the stakeholders are not culturally ready for AI, or where more effective technology could be applied. Before you even start working with a potential customer’s data, you need to understand the ABCs of AI-readiness: Acceptance, Better Solutions, and Costs.
1. Is there societal acceptance of your AI?
AI technologies capable of unlocking significant value have been around for decades, yet few companies have adopted them. This is because AI adoption depends on both trust and risk. The more closely AI affects vital outcomes such as revenue or human health and safety, the more prior exposure to AI end users will need in order to be receptive to the tech.
Gradual exposure to successful AI applications in daily life, or as part of trivial workflows, builds trust. For example, AI algorithms encourage us to revisit abandoned online shopping carts every day, so adopting AI-based software to make our jobs in enterprise sales and marketing easier seems like a natural extension. A nuclear engineer, however, has a wider mental gulf to bridge in imagining how the tech behind her Nest thermostat could safely automate procedures in her nuclear power plant without extremely close supervision.
Acceptance of AI follows the AI risk curve, which tracks AI acceptance as it graduates from low-risk consumer applications to the enterprise over time. Early successful applications of AI in the enterprise packaged the AI as an augmentation layer over familiar workflow tools; if the AI failed to perform, the user would still have a functioning tool. As enterprises become more familiar with AI over time, it will become more central to their solutions. Eventually, enterprises will accept new applications that can only be created by AI. If AI companies can repeatedly demonstrate results on the same application in the same vertical, they move potential customers further along the risk curve.
Also, try to define an accepted entry point. For example, one AI startup I’ve tracked found that hospitals were not yet comfortable buying its AI-based product that predicts whether patients will need surgery, but one hospital was interested in using that same technology to make sure a doctor sees those patients sooner. After repackaging its algorithm, the startup was able to convert several pilots into paying customers. If the target customer is not ready for the initial application of your technology, decide whether you can extract enough value in the near term to sustain a viable business model until societal acceptance improves for your model’s other use cases.
Beyond an enterprise’s overall readiness to trust AI, startups face another cultural risk: hard-coded inefficiencies. As much as engineers would like to think otherwise, not all inefficiencies are operational or logistical. Many inefficiencies persist for political reasons, even when it’s obvious that getting rid of the inefficiency would bring cost savings. Procurement is one area where those inefficiencies are common.
A couple of startups I’m tracking that are applying AI to optimize hospital supply chains are particularly vulnerable to this dynamic: A hospital might stock the same scalpel from five different brands at varying price points because each surgeon prefers a different brand. Even though hospital CFOs know that simplifying inventory would enable bulk order savings, they accommodate each surgeon’s preferences for fear of losing them to another hospital. If the cost savings achieved from gauzes, sponges, and other less polarizing supplies are not sufficiently large, the startups will have trouble finding customers. Similar dynamics may also exist in other industries where a star engineer or salesperson may insist the organization buy a suboptimal product because that is what he or she prefers to use. In order to retain these high performers, the organization may turn a blind eye to the inefficiency.
In the process of exploring product-market fit, AI startups should make sure they understand the motivations of other decision-makers beyond the immediate end user of the product. Beyond understanding who makes purchasing decisions, it is also important to know the underlying reasons these decisions are made and why any inefficiency persists.
2. Are non-AI solutions better?
Not all problems are most effectively solved with AI. Despite increasing democratization of cutting-edge algorithms, AI systems remain expensive to build. Depending on the use case, the AI model might require a significant volume of training data before it reaches the minimum algorithmic performance (MAP) required to deploy with early customers, which will delay the startup’s go-to-market.
Implementing AI-powered software isn’t as simple as running an installation package. It requires thorough data preparation, intensive algorithm performance monitoring, and, for many use cases, significant customization before it can deliver value to its users. In many use cases, humans or a simple rules-based automation system can deliver value faster than AI.
These tradeoffs are particularly acute for the multiple startups applying AI to customer support: AI can answer as many as 90 percent of questions, but it is still more efficient to send the remaining long tail of infrequent questions and edge cases to human agents. The startups end up frustrating their customers because far fewer support tickets are automatically resolved than the startup suggested when it made the sale. The flowchart below offers a framework for finding AI-appropriate opportunities.
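The human-fallback pattern behind that long-tail problem can be sketched in a few lines. This is an illustrative sketch, not any particular vendor's system: the `route_ticket` function, the toy classifier, and the 0.9 confidence threshold are all assumptions invented for the example.

```python
# Illustrative sketch: automate a support ticket only when the model is
# confident; otherwise hand the long-tail edge case to a human agent.

def route_ticket(question, classify, confidence_threshold=0.9):
    """Return ("auto", answer) or ("human", None) for a support question.

    `classify` is assumed to return an (answer, confidence) pair,
    with confidence in [0, 1].
    """
    answer, confidence = classify(question)
    if confidence >= confidence_threshold:
        return ("auto", answer)   # frequent, well-covered question
    return ("human", None)        # infrequent edge case goes to an agent

# Toy classifier: confident only about one known FAQ.
def toy_classify(question):
    faq = {"reset password": ("Use the 'Forgot password' link.", 0.97)}
    return faq.get(question, ("", 0.2))

print(route_ticket("reset password", toy_classify))
# -> ("auto", "Use the 'Forgot password' link.")
print(route_ticket("obscure billing edge case", toy_classify))
# -> ("human", None)
```

The business risk lives in that threshold: lowering it raises the automation rate quoted in the sale, but at the price of more wrong automated answers.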
As frequently as AI may be referenced on earnings calls, enterprise adoption of any solution ultimately depends on the ROI it delivers to its customers. While many business users are willing to experiment with AI today, the solutions that deliver meaningful value quickly will be the ones to stick once the AI hype has waned.
3. Will the rollout costs kill your startup?
Models trained on data that is correlated with, rather than predictive of, the desired outcome can bring expensive and disappointing surprises to aspiring AI entrepreneurs. If an input is merely correlated with the outcome rather than driving it, controlling that input may have no effect on the outcome at all. The more complex a system is, the more vulnerable its AI is to confounding factors.
Healthcare applications of AI, especially AI-powered diagnostic decision support tools, are one category particularly vulnerable to confounding factors. Many companies in this space have found incredible experimental gains in patient survival rates by monitoring patients weekly using scans or biopsies and applying AI to track the subtle changes in the disease over time. These tests are often expensive, invasive, and not always conclusive, so doctors currently order them at quarterly intervals or even less. Is the additional expense and discomfort to the patient caused by this close monitoring worth the increased chances of survival?
A potential confounding factor in that AI system comes from the fact that patients who are undergoing these weekly evaluations are also having their health status recorded in non-invasive ways, such as blood pressure, weight, and basic blood tests, all of which may also hold subtle clues about disease progression. All of that additional data is used in the algorithm. Could the AI be trained just as effectively on these less invasive data points, for much less cost and stress inflicted on the patient?
To tease out confounding correlations from truly predictive inputs, entrepreneurs must run experiments early on to examine the performance of the AI model with and without the input in question. In extreme cases, AI systems built around a correlative relationship might be more expensive, and achieve lower margins, than AI systems built around the truly predictive inputs.
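A minimal version of that experiment can be rehearsed even on synthetic data. The sketch below, with all data and the deliberately trivial threshold model invented for illustration, compares a model built on an expensive proxy input against one built on the cheaper signal that actually drives the outcome; near-identical scores suggest the expensive input adds little.

```python
# Hedged sketch of the with/without experiment: one input truly drives the
# outcome, a second "expensive" input is only a noisy proxy of it.
import random

random.seed(0)
n = 1000
causal = [random.gauss(0, 1) for _ in range(n)]            # cheap, truly predictive
suspect = [c + random.gauss(0, 0.1) for c in causal]       # costly, merely correlated
outcome = [1 if c + random.gauss(0, 0.5) > 0 else 0 for c in causal]

def accuracy(feature, labels):
    """Score a trivial threshold model: predict 1 whenever feature > 0."""
    return sum((f > 0) == bool(y) for f, y in zip(feature, labels)) / len(labels)

score_with_suspect = accuracy(suspect, outcome)   # model using the expensive proxy
score_without = accuracy(causal, outcome)         # model using the cheaper signal

print(f"expensive proxy: {score_with_suspect:.3f}, cheaper signal: {score_without:.3f}")
```

In a real setting the threshold model would be replaced by the startup's actual model and cross-validated, but the decision rule is the same: if dropping the costly input barely moves the score, the cheaper pipeline wins on margin.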
On the data integration front, startups face another cost issue. Many sectors have only recently begun to digitize, and valuable historic and current data might be in difficult-to-extract formats such as handwritten scribbles, unstructured observation logs, or PDFs. In order to capture this data, startups may have to spend significant manpower on low-margin data preparation services before AI systems can be deployed.
The more complex a model is, the more sources of data it may draw from. Depending on how the data is captured and organized, the AI engineer may have to spend a significant amount of time building each integration before the model can be fully functional. Some industries are built around monolithic and idiosyncratic tech stacks, making integrations difficult to reuse across customers. If integration service providers are not available, the AI startup may find itself mired in building custom integrations for every new customer before it can deploy its AI system. The way data is structured might also vary from one customer to the next, requiring AI engineers to spend additional hours normalizing the data or converting it to a standardized schema so the AI model can be applied.
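Conceptually, the per-customer normalization step can be as small as a field-renaming map, though real schemas are far messier. In this sketch, the customer names, field names, and standard schema are all invented for illustration.

```python
# Illustrative sketch: map each customer's idiosyncratic field names onto one
# standardized schema before records reach the AI model.

STANDARD_FIELDS = ["patient_id", "item_name", "unit_price"]

# Hypothetical per-customer mappings; every new customer adds one of these.
CUSTOMER_MAPPINGS = {
    "hospital_a": {"pt_id": "patient_id", "sku_desc": "item_name", "price": "unit_price"},
    "hospital_b": {"PatientID": "patient_id", "Item": "item_name", "Cost": "unit_price"},
}

def normalize(record, customer):
    """Rename a raw record's keys to the standard schema; drop unknown keys."""
    mapping = CUSTOMER_MAPPINGS[customer]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

print(normalize({"pt_id": 1, "sku_desc": "scalpel", "price": 4.5}, "hospital_a"))
# -> {'patient_id': 1, 'item_name': 'scalpel', 'unit_price': 4.5}
```

The engineering cost the text describes is exactly the growth of `CUSTOMER_MAPPINGS`: every onboarding adds another hand-built mapping, plus the unit conversions and cleaning this toy version omits.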
AI-powered software holds the tantalizing potential to unlock new heights of performance across industries. As history shows, however, technological performance alone does not drive adoption. Even if the technology can return significant value to the end user, organizational culture, the flexibility of human workers, and cost to deploy all affect whether that technology has long-term viability in any given application. We can expect to see these limitations evolve as AI applications mature, society becomes more comfortable with the technology, and the tech itself improves. For AI startups, finding the right positioning given these ABCs is imperative to an effective go-to-market strategy.