# Pydantic AI Shields

**Guardrail capabilities for Pydantic AI agents**
Pydantic AI Shields provides ready-to-use guardrail capabilities for Pydantic AI agents. Drop them into any agent for cost control, tool permissions, content safety, and more.
## Quick Start

```python
from pydantic_ai import Agent
from pydantic_ai_shields import (
    CostTracking, PromptInjection, PiiDetector, SecretRedaction,
)

agent = Agent(
    "openai:gpt-4.1",
    capabilities=[
        CostTracking(budget_usd=5.0),
        PromptInjection(sensitivity="high"),
        PiiDetector(),
        SecretRedaction(),
    ],
)
```
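To make the `budget_usd` parameter concrete, here is a minimal sketch of the idea behind budget enforcement: accumulate cost per call and fail fast once the budget is crossed. The class, the per-token prices, and the accounting are illustrative assumptions, not the library's actual implementation.

```python
# Illustrative sketch only: a budget tracker in the spirit of
# CostTracking(budget_usd=...). Prices below are hypothetical.

class BudgetExceeded(Exception):
    """Raised when cumulative spend crosses the configured budget."""


class BudgetTracker:
    # Hypothetical per-token prices in USD (not real pricing data).
    PRICE_PER_INPUT_TOKEN = 2e-6
    PRICE_PER_OUTPUT_TOKEN = 8e-6

    def __init__(self, budget_usd: float) -> None:
        self.budget_usd = budget_usd
        self.spent_usd = 0.0

    def record(self, input_tokens: int, output_tokens: int) -> None:
        """Accumulate the cost of one model call; raise once over budget."""
        self.spent_usd += (
            input_tokens * self.PRICE_PER_INPUT_TOKEN
            + output_tokens * self.PRICE_PER_OUTPUT_TOKEN
        )
        if self.spent_usd > self.budget_usd:
            raise BudgetExceeded(
                f"spent ${self.spent_usd:.4f} of ${self.budget_usd:.2f} budget"
            )


tracker = BudgetTracker(budget_usd=0.01)
tracker.record(input_tokens=1000, output_tokens=500)  # within budget
```

The real capability hooks into the agent's run loop and reads token counts from model responses; the point here is only the accumulate-then-enforce shape.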
## Available Shields

### Infrastructure Shields

| Shield | Description |
|---|---|
| `CostTracking` | Token/USD tracking with budget enforcement |
| `ToolGuard` | Block tools or require human approval |
| `InputGuard` | Custom input validation (pluggable function) |
| `OutputGuard` | Custom output validation (pluggable function) |
| `AsyncGuardrail` | Run a guard concurrently with the LLM call |
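The table above says `InputGuard` and `OutputGuard` take a pluggable validation function. The exact signature the library expects is not shown here, so the shape below (take text, raise on violation, return the text otherwise) is an assumption used purely to illustrate what such a function might look like.

```python
# Illustrative only: a validation function of the kind InputGuard /
# OutputGuard might accept. The signature is an assumption, and the
# hostname is a made-up example.

def no_internal_urls(text: str) -> str:
    """Reject input that mentions an internal hostname."""
    if "internal.example.com" in text:
        raise ValueError("input references an internal hostname")
    return text


# A guard would call the function on each prompt before the model sees it:
checked = no_internal_urls("Summarize this public changelog.")
```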
### Content Shields

| Shield | Description |
|---|---|
| `PromptInjection` | Detect prompt injection / jailbreaks (6 categories, 3 sensitivity levels) |
| `PiiDetector` | Detect PII: email, phone, SSN, credit card, IP address |
| `SecretRedaction` | Block API keys, tokens, and credentials in output |
| `BlockedKeywords` | Block forbidden keywords/phrases |
| `NoRefusals` | Block LLM refusals ("I cannot help with that") |
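To give a feel for what detectors like `PiiDetector` and `SecretRedaction` scan for, here is a self-contained sketch using regex patterns. These patterns are simplified illustrations; the library's actual detection rules are internal to it and certainly more robust.

```python
import re

# Illustrative patterns only: real PII/secret detection needs far more
# care (validation, checksums, context) than these toy regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}


def redact(text: str) -> str:
    """Replace each match with a [CATEGORY] placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name.upper()}]", text)
    return text


print(redact("Contact alice@example.com, SSN 123-45-6789."))
# Emails and SSN-shaped numbers are replaced with placeholders.
```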
## Next Steps
- Installation — install the package
- Examples — real-world usage patterns
- API Reference — full API docs