Guardrails AI: Pioneering the Open-source Industry Standard for Trustworthy AI
February 2024
Apoorva Pandhi

In the rapidly evolving AI landscape, enterprises are eager to harness the transformative power of large language models (LLMs). However, the journey from excitement to practical use is fraught with challenges, chief among them the unpredictable and, at times, unreliable behavior of LLMs. This unpredictability has been a significant barrier to the broader adoption of generative AI across enterprise workflows.

Founded in 2023 by a team of AI experts and large-scale system operators, Guardrails AI introduces an innovative open-source platform designed to define and enforce AI assurance across generative AI applications.

Founders Shreya Rajpal, Diego Oppenheimer, Safeer Mohiuddin, and Zayd Simjee were drawn together to address this massive pain point of AI assurance for application builders. Shreya’s background as a senior ML engineer at Apple and a founding engineer at Predibase, along with Diego’s pioneering work in MLOps at Algorithmia (acquired by DataRobot), highlights the depth of practical AI and operational knowledge embedded in the company's DNA. Safeer and Zayd further bolster this foundation with their extensive experience in launching and scaling software products within AWS, underlining the team's deep empathy for the operational complexity required to solve these problems at the broadest scale.

As companies enthusiastically adopt AI for their products and workflows — ranging from conversational interfaces and workflow-transforming agents to AI-native products — these projects are increasingly stuck in the lab, waiting for a viable solution to ensure safety, reliability, and compliance. Guardrails AI tackles this demand head-on, offering an end-to-end open-source platform that acts as an inverse firewall around LLM applications, ensuring they do not deviate from well-defined policies (guardrails, if you will). This approach brings stability and accuracy to AI applications and gives developers greater control over compliance, brand safety, security, and data privacy.

Guardrails AI's unique methodology — open-source, in-line validation of application outputs through user-created validators and corrective orchestration — sets it apart from other AI assurance solutions. The recent launch of Guardrails Hub is an important milestone in their commitment to open-source AI reliability. Most organizations wrestle with the same set of problems around responsibly deploying AI applications, and each struggles to find the most efficient solution, often reinventing the wheel to manage the risks that matter to them. With the Guardrails Hub, the team has created an open forum not only to share knowledge about the most effective paths to safer AI adoption, but also to build a library of reusable guardrails that any stakeholder or organization can adopt. This proactive, open-science stance toward AI safety will ensure that the evolution toward safe and reliable AI applications is accessible to all, not just a few.
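To make the idea concrete, the sketch below illustrates the general pattern of in-line validation with corrective re-asking in plain Python. All names here (`ValidationResult`, `no_phone_numbers`, `guarded_call`) are hypothetical and for illustration only; the actual Guardrails AI library exposes a different API.

```python
import re
from dataclasses import dataclass
from typing import Callable, List, Optional

# NOTE: conceptual sketch only; names below are invented for this example
# and do not reflect the real Guardrails AI API.

@dataclass
class ValidationResult:
    passed: bool
    message: str = ""

Validator = Callable[[str], ValidationResult]

def no_phone_numbers(output: str) -> ValidationResult:
    """Example user-created validator: reject outputs that leak phone numbers."""
    if re.search(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b", output):
        return ValidationResult(False, "output contains a phone number")
    return ValidationResult(True)

def guarded_call(llm: Callable[[str], str], prompt: str,
                 validators: List[Validator], max_retries: int = 2) -> Optional[str]:
    """Call the LLM, validate its output in-line, and re-ask on failure."""
    for _ in range(max_retries + 1):
        output = llm(prompt)
        failures = [r for r in (v(output) for v in validators) if not r.passed]
        if not failures:
            return output
        # Corrective orchestration: feed the failure back into the prompt.
        prompt = (f"{prompt}\n\nYour previous answer was rejected: "
                  f"{failures[0].message}. Try again.")
    return None  # give up after exhausting retries
```

The key design point is that validation happens on every model call, and a failed check triggers a corrective loop rather than silently passing bad output downstream.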

This approach clearly makes sense to AI builders: the Guardrails open-source project has accumulated over 10,000 monthly downloads and more than 2,900 GitHub stars, demonstrating the clear need for a solution that bridges the gap between AI's potential and its safe, effective implementation. Robinhood considers Guardrails AI indispensable for embedding AI safety in its AI journey. Aman Gupta, engineering lead for AI at Masterclass, describes Guardrails AI as addressing a fundamental need: “For us, LLM-generated output is fundamentally untrustworthy, and Guardrails builds confidence in that output. Gen AI promises to provide additional value to our customers quickly and at a lower cost, but only if trust can be established as easily.” Hundreds of companies are already creating guardrails, with use cases ranging from building secure, compliant chatbots, to aligning copywriting with a company's communications policies and factually accurate data, to ensuring that AI agents behave as expected. And this is just the beginning: long-term, the founders envision more holistic solutions for responsible AI application building, spanning pre- to post-launch assessments of model safety, reliability, and value.

Zetta Venture Partners has long been focused on the problem of AI safety and reliability, and we are thrilled to lead a $7.5 million seed round for Guardrails AI, with participation from Factory, Pear VC, Bloomberg Beta, GitHub Fund, and AI luminaries including Ian Goodfellow, Logan Kilpatrick, and Lip-Bu Tan, among others.

We believe Shreya, Diego, Safeer, and Zayd will spearhead a crucial industry shift toward adopting AI widely and wisely, with the right guardrails ensuring safety, reliability, and responsibility.

If you’re interested in learning more, please check out their open-source documentation here.
