Zetta Venture Partners

Announcing Zetta Fund III, a $180M fund for AI-first companies


Today we’re launching Zetta Fund III, a $180M fund for founders building AI-first companies. We’re grateful for the opportunity to double down on one of the most impactful innovations of our time.

When Zetta was founded in 2013, machine learning was in its infancy and enterprise software companies were far behind their consumer counterparts in recognizing the value of their data. Over the past seven years, we’ve seen that meaningfully change as businesses became more comfortable with the risks around deploying AI. Today, AI is deployed in a wide range of industries to assist and augment people, and automate repetitive tasks.

We are excited to support founders building the next wave of AI-first companies with this new fund: those that build products to understand complex systems and tackle problems far beyond human capabilities.

Many of the biggest challenges we face are too complex for our current computer systems to understand. Healthcare, energy, agriculture and logistics are all governed by systems - biological, physical, geological and behavioral - with complexities and interactions we only understand at a high level and almost never in real-time. As a consequence, most businesses struggle to respond to volatility and reach beyond local maxima.

AI can help us understand these systems in new ways: using data to uncover hidden relationships and interactions between actors, and new techniques to perform complex analyses in near real-time. Increasingly, these systems combine disparate techniques and leverage new modes of data to train their models and improve over time. They are laying the foundation for the next wave of AI applications while creating the need for a new set of tools and infrastructure.

Here are some of the areas we’re going to explore through this new fund.

Applications

Healthcare & Medicine - A data-driven understanding of disease will allow us to design better drugs and diagnostics, deliver better treatments and accelerate fundamental research in areas like immunology. Better models of collective behavior, and of the social and economic determinants of health at a population level, will help us understand patient risk and deliver preventative care more effectively.

Food & Agriculture - Data and machine learning have already improved climate and weather predictions, providing accurate forecasts months ahead and, in some cases, down to the acre. These models - combined with biological, agronomic and market data - will have a profound impact on our food systems, allowing producers to plan better and manage yield with greater agility and precision. Understanding the longer-term changes to the climate will be key to innovations in food production, from new seeds and chemicals to new farming techniques and production processes.

Energy & Materials - Better climate models will also have a significant impact on our energy systems and could accelerate the transition to a more distributed, renewable grid. Elsewhere, computational models are driving the discovery of new materials for batteries and biofuels, and even driving the design of safer nuclear reactors. At the distribution level, large scale machine learning models that incorporate energy prices, weather and transmission loss are making grids more efficient, ensuring a tight balance between energy production and demand.

Supply Chains - Machine learning is uniquely suited to the challenges of global supply chains, which are impacted by the complexities of every business and region they touch. More dynamic models that incorporate externalities like weather and more granular upstream data from suppliers will allow businesses to quickly adapt to changing conditions. Intelligent automation in areas like pricing, inventory management and production scheduling will make supply chains even faster and more resilient.

Cities & Transportation - Cities bring together all of these systems and are playing an increasingly important role in global governance. Better models of cities - their populations, infrastructure, output and consumption - will be critical to improving the lives of billions of urban dwellers. Data-driven models are used to plan building projects, improve transportation systems and, as we write this, manage a public health crisis. Better models, along with greater automation, can make those who serve cities more responsive to fluctuations in demand and more resilient in the face of emergency.

Infrastructure & Security - While cities may be the infrastructure of our physical lives, our digital lives run on complicated rails that will be impacted by the next wave of AI. Modern cloud infrastructure has increased the complexity of operational management by orders of magnitude. The rise of DevOps reflects a decade-long shift from manual to programmatic IT operations, which have reached a level of complexity that can no longer be efficiently run by static programs. Over the next decade, we will see a shift to AIOps as businesses run more machine learning models to manage efficiency, reliability and performance. At the same time, distributed data centers and devices are expanding the surface area of vulnerability, and bad actors are using machine learning for ever more sophisticated attacks, overwhelming small security teams. Intelligent applications that detect openings, model attacks and identify bad actors in real-time are likely to be in high demand.

Risk & Financial Markets - At their core, models help us understand risk and uncertainty and, as they get better at modeling systems, businesses will be able to manage risk in new ways. Better, more dynamic risk models will impact a wide range of industries, from healthcare to construction, enabling new business models and changing the competitive landscape. Better tools for scenario planning will help businesses identify vulnerabilities and hedge against high-impact events, giving them a critical edge in times of crisis and helping them navigate volatility better than their competitors. Tools for quantifying risk will enable new kinds of underwriting and allow operating businesses and financial organizations to better allocate resources, potentially blurring the line between the two.

Infrastructure & Tools

Data Quality - The quality of data being used to train models and perform inference is a critical piece of every intelligent system. In supervised learning, the costs of acquiring and annotating data are high while unsupervised and reinforcement learning suffer from their own data challenges. This is a problem that won’t be solved by a single company or approach, so we’re excited about companies looking at data quality from different angles. From data ingestion, validation and enrichment, to active learning and weak labeling, to synthetic data and simulation - there are challenges in every modality and part of the data lifecycle.
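To make weak labeling concrete, here is a minimal sketch of the idea: several cheap, noisy heuristics ("labeling functions") vote on each example, and the majority label becomes a training label. The labeling functions, labels and data below are hypothetical illustrations, not a reference to any particular product.

```python
# Weak labeling sketch: noisy heuristics vote, majority wins.
from collections import Counter

ABSTAIN = None  # a labeling function may decline to vote

def lf_contains_refund(text):
    # Heuristic: mentions of refunds suggest a complaint.
    return "complaint" if "refund" in text.lower() else ABSTAIN

def lf_contains_thanks(text):
    # Heuristic: thanking language suggests praise.
    return "praise" if "thanks" in text.lower() else ABSTAIN

def lf_exclamations(text):
    # Heuristic: repeated exclamation marks suggest a complaint.
    return "complaint" if "!!!" in text else ABSTAIN

LABELING_FUNCTIONS = [lf_contains_refund, lf_contains_thanks, lf_exclamations]

def weak_label(text):
    """Majority vote over the non-abstaining labeling functions."""
    votes = [lf(text) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v is not ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]

print(weak_label("I want a refund!!!"))   # two heuristics agree: complaint
print(weak_label("thanks for the help"))  # praise
```

Real systems weight the voters by their estimated accuracy rather than counting them equally, but the shape of the pipeline is the same.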

Developer Tools - The tools available to data scientists today are decades behind those of software developers and inadequate for the next wave of AI. Data is hard to manage and version, workflows are fragmented and largely offline, and moving models from development into production consumes far too much of data scientists' time. Better, more integrated tools are needed to accelerate AI development and free up data scientists to focus on the work that matters. We are just as excited about tools for non-data scientists, especially those making machine learning accessible to developers, designers, analysts, marketers and business operators of all types. Data is a critical part of knowledge work, and adding machine learning to the toolkit represents a massive opportunity.

Infrastructure - Many of the platforms that models run on today are remnants of the big data era and woefully ill-equipped to deal with the workloads and run-time requirements of more cutting-edge applications. The next wave of applied AI calls for a more tightly integrated and production-oriented suite of tools built around machine learning: from real-time data processing and intelligent storage to better frameworks for scaling models up to massive, distributed systems or down to simple architectures that can run at the edge. While the major cloud providers and hardware manufacturers are heavily investing in this area, we think that product-focused founders can build category-leading companies in overlooked areas.

Security & Monitoring - Unlike traditional software, machine learning models continue to evolve in production and can behave in unexpected ways. As machine learning takes on higher levels of decision making, the need to protect against adverse outcomes is a critical concern. Better tools are needed across every stage of the lifecycle to ensure safety and compliance: from testing and training in development, to monitoring models and data in production, to tools for explainability and auditing that identify the causes of mistakes.
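At its simplest, monitoring models in production means comparing the data a model sees today against the data it was trained on. The sketch below flags drift when a feature's mean shifts too far from its training baseline; the statistic and threshold are illustrative assumptions, not a description of any particular monitoring product.

```python
# Drift monitoring sketch: flag a feature whose production values
# have shifted too far from the training-time baseline.
import statistics

def drift_score(baseline, production):
    """Absolute shift in the mean, in units of the baseline's std deviation."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    return abs(statistics.mean(production) - base_mean) / base_std

def check_drift(baseline, production, threshold=2.0):
    """True if the production distribution has drifted beyond the threshold."""
    return drift_score(baseline, production) > threshold

# Hypothetical feature values: training baseline vs. two production windows.
baseline = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
steady = [10.0, 10.1, 9.9]
shifted = [14.8, 15.2, 15.0]

print(check_drift(baseline, steady))   # False: distribution looks stable
print(check_drift(baseline, shifted))  # True: input has drifted, alert
```

Production systems use richer statistics (population stability index, KL divergence) and track many features at once, but the core loop - baseline, compare, alert - is the same.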

Data Privacy - Persistent breaches and sweeping regulations like GDPR in Europe are making data privacy an unavoidable priority for businesses around the world; one that is often at odds with their machine learning efforts. Building intelligent systems that work well while preserving user privacy will be a critical challenge for AI-first companies and a defining opportunity for those building infrastructure and tools. Like data quality, data privacy needs to be addressed at every stage of the pipeline, so we're looking for founders who are solving the puzzle in different ways - from federated learning and encrypted ML, to data monitoring and governance.
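Federated learning is the most concrete of these approaches: each client trains on its own data, and only model weights - never raw data - are shared and averaged centrally. The sketch below shows one-dimensional federated averaging with a hypothetical linear model and made-up client data.

```python
# Federated averaging sketch: clients train locally on private data,
# the server only ever sees and averages their model weights.

def local_update(weight, data, lr=0.1, epochs=20):
    """One client's gradient-descent pass on a 1-D linear model y = w * x."""
    w = weight
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_average(global_w, client_datasets):
    """One communication round: train locally, then average the weights."""
    local_weights = [local_update(global_w, d) for d in client_datasets]
    return sum(local_weights) / len(local_weights)

# Two clients whose private data both roughly follow y = 3 * x.
clients = [
    [(1.0, 3.0), (2.0, 6.1)],
    [(1.5, 4.4), (3.0, 9.0)],
]

w = 0.0
for _ in range(5):  # a few communication rounds
    w = federated_average(w, clients)
print(round(w, 1))  # converges near 3.0 without pooling any raw data
```

Real deployments add secure aggregation and differential privacy on top, so that even the shared weights leak as little as possible about any individual client.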

While these are some of the areas we've been thinking about, many of the best ideas come completely out of left field: founders find the novel applications of AI. Please share what you're finding so that we can help you take it to the next stage.