About Guardrails AI

Guardrails is an open-source Python package that adds structure, type validation, and quality guarantees to the outputs of large language models (LLMs).

Here are four key features of Guardrails:

  1. Pydantic-style Validation: Guardrails performs comprehensive validation of LLM outputs, including semantic checks such as identifying bias in generated text or detecting bugs in generated code (a Pydantic-based sketch follows this list).
  2. Corrective Actions: When validation fails, Guardrails takes corrective actions, such as reasking the LLM for a new response, so that the final output meets the required criteria.
  3. Structure and Type Guarantees: Guardrails enforces structure and type guarantees on LLM outputs. For example, it ensures that the output is valid JSON, providing consistency and compatibility for downstream applications.
  4. RAIL Specification: At the core of Guardrails is the Reliable AI Markup Language (RAIL) spec. This language-agnostic and human-readable format lets users specify the expected structure and types of LLM outputs, define quality criteria, and declare corrective actions to take if the output is invalid (see the RAIL sketch after this list).
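
To make the first feature concrete, here is a minimal sketch of Pydantic-style validation. It assumes a Guardrails release where validators ship in `guardrails.validators` (newer releases install them from the Guardrails Hub), and the `Pet` schema, field names, and prompt are hypothetical:

```python
from pydantic import BaseModel, Field
from guardrails import Guard
from guardrails.validators import ValidLength

# Hypothetical output schema: each Field carries Guardrails validators
# alongside its type annotation. The exact Field/validator wiring
# varies across Guardrails and Pydantic versions.
class Pet(BaseModel):
    name: str = Field(
        description="A name for the pet",
        validators=[ValidLength(min=1, max=32, on_fail="reask")],
    )
    breed: str = Field(description="The breed of the pet")

# The Guard wraps the schema and drives prompting, validation, and
# any corrective actions declared on the fields.
guard = Guard.from_pydantic(output_class=Pet, prompt="Suggest a pet.")
```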
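
The remaining three features come together in the RAIL-driven flow sketched below, loosely modeled on the project's getting-started examples. The spec contents, the `two-words` validator, the prompt suffix variable, and the OpenAI call style are assumptions that may differ across Guardrails and OpenAI SDK versions:

```python
import guardrails as gd
import openai  # assumes a pre-1.0 openai client

# A RAIL spec: the expected output structure and types, a quality
# criterion ("two-words"), and the corrective action on failure ("reask").
rail_spec = """
<rail version="0.1">
<output>
    <string
        name="pet_name"
        description="A name for the pet"
        format="two-words"
        on-fail-two-words="reask"
    />
</output>
<prompt>
Suggest a name for a new pet.

${gr.complete_json_suffix.v2}
</prompt>
</rail>
"""

guard = gd.Guard.from_rail_string(rail_spec)

# Calling the guard wraps the LLM call: it compiles the spec into the
# prompt, validates the response against the schema, and reasks the
# LLM if validation fails. Older releases return a (raw, validated)
# tuple; newer ones return a ValidationOutcome object.
raw_output, validated_output = guard(
    openai.Completion.create,
    engine="text-davinci-003",
    max_tokens=256,
)

print(validated_output)  # e.g. a dict parsed from guaranteed-valid JSON
```

If validation succeeds (possibly after a reask), `validated_output` is structured data matching the spec, which is what gives downstream code its consistency guarantee.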