About Guardrails AI
Guardrails is an open-source Python package that adds structure, type validation, and quality guarantees to the outputs of large language models (LLMs).
Here are four key features of Guardrails:
- Pydantic-style Validation: Guardrails performs comprehensive validation of LLM outputs, including semantic checks such as identifying bias in generated text or detecting bugs in generated code.
- Corrective Actions: When validation fails, Guardrails takes corrective actions, such as reasking the LLM for a new response, so that the final output meets the required criteria.
- Structure and Type Guarantees: Guardrails enforces structure and type guarantees on LLM outputs. For example, it ensures that the output is valid JSON, giving downstream applications a consistent, machine-readable format (see the sketch after this list).
- RAIL Specification: At the core of Guardrails is the Reliable AI Markup Language (RAIL) spec. This language-agnostic, human-readable format lets users specify the expected structure and types of LLM outputs, define quality criteria, and declare the corrective actions to take when an output is invalid (a minimal RAIL example appears below).
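
To make the validation workflow concrete, here is a minimal sketch using the package's Pydantic entry point. It assumes the `guardrails` package is installed; the `Citation` model and the sample output are illustrative, and exact call signatures vary across Guardrails releases, so treat this as a sketch rather than a canonical recipe.

```python
# Minimal sketch, assuming the guardrails-ai package;
# call signatures vary across releases.
from pydantic import BaseModel, Field
import guardrails as gd


class Citation(BaseModel):
    """Illustrative schema the LLM output must satisfy."""
    title: str = Field(description="Title of the cited paper")
    year: int = Field(description="Publication year")


# Guardrails derives a JSON schema from the Pydantic model and
# validates LLM outputs against it.
guard = gd.Guard.from_pydantic(output_class=Citation)

# guard.parse() validates a raw LLM output string offline; calling the
# guard with a live LLM instead wraps the call and can reask on failure.
outcome = guard.parse('{"title": "Attention Is All You Need", "year": 2017}')
print(outcome)  # validated, typed output (a ValidationOutcome in newer releases)
```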
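
And here is a hedged sketch of what a RAIL spec itself can look like, embedded as a Python string and loaded with `Guard.from_rail_string`. The element names follow the RAIL documentation, but the `two-words` validator, the prompt text, and the `${gr.complete_json_suffix}` macro are illustrative and version-dependent.

```python
# Illustrative RAIL spec; element and validator names follow the RAIL
# docs but vary across Guardrails versions, so treat this as a sketch.
import guardrails as gd

RAIL_SPEC = """
<rail version="0.1">
<output>
    <string name="title" description="Title of the cited paper"
            format="two-words" on-fail-two-words="reask"/>
    <integer name="year" description="Publication year"/>
</output>
<prompt>
Suggest one influential ML paper.

${gr.complete_json_suffix}
</prompt>
</rail>
"""

# from_rail_string compiles the spec into a Guard that enforces the
# declared structure, types, and on-fail corrective actions.
guard = gd.Guard.from_rail_string(RAIL_SPEC)
```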