# Pydantic AI Middleware

*Simple middleware for Pydantic AI agents, the Pythonic way.*

Pydantic AI Middleware is a lightweight library for adding before/after hooks to Pydantic AI agents. No imposed structure — you decide what to build: logging, guardrails, metrics, rate limiting, PII redaction, or anything else.

Part of the vstorm-co ecosystem for building production AI agents with Pydantic AI.
## Why use Pydantic AI Middleware?

- **Clean API**: Simple before/after hooks at 6 lifecycle stages — no complex abstractions to learn.
- **Maximum Flexibility**: No imposed guardrail structure. You decide what each hook does.
- **Production Ready**: 100% test coverage, strict typing with Pyright + MyPy, and parallel execution with early cancellation.
- **Composable**: Chain, branch, parallelize, and load middleware from config files.
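The "parallel execution with early cancellation" behavior can be sketched with plain asyncio. This is a conceptual stand-in, not the library's implementation: `guardrail` and `run_parallel` are hypothetical names, and the real middleware hooks take more parameters.

```python
import asyncio

async def guardrail(name, delay, ok):
    # Stand-in for one guardrail check; raises on a violation.
    await asyncio.sleep(delay)
    if not ok:
        raise ValueError(f"{name} blocked the input")
    return name

async def run_parallel(checks):
    # Run all checks concurrently; the first failure cancels the rest.
    tasks = [asyncio.create_task(guardrail(*c)) for c in checks]
    try:
        return await asyncio.gather(*tasks)
    except ValueError:
        for t in tasks:
            t.cancel()
        raise

async def main():
    try:
        await run_parallel([("pii", 0.01, True), ("toxicity", 0.005, False)])
    except ValueError as e:
        return str(e)

result = asyncio.run(main())
print(result)  # toxicity blocked the input
```

The key point is that a failing check short-circuits the whole group instead of waiting for slower checks to finish.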
## Hello World Example

```python
from pydantic_ai import Agent
from pydantic_ai_middleware import MiddlewareAgent, AgentMiddleware, InputBlocked

class SecurityMiddleware(AgentMiddleware[None]):
    """Block dangerous inputs before they reach the agent."""

    async def before_run(self, prompt, deps, ctx):
        if "dangerous" in prompt.lower():
            raise InputBlocked("Dangerous content detected")
        return prompt

class LoggingMiddleware(AgentMiddleware[None]):
    """Log agent activity."""

    async def before_run(self, prompt, deps, ctx):
        print(f"Starting: {prompt[:50]}...")
        return prompt

    async def after_run(self, prompt, output, deps, ctx):
        print(f"Finished: {output}")
        return output

agent = MiddlewareAgent(
    agent=Agent('openai:gpt-4o'),
    middleware=[LoggingMiddleware(), SecurityMiddleware()],
)

result = await agent.run("Hello, how are you?")
```
## Decorator Syntax

For simple middleware, use decorators:

```python
from pydantic_ai_middleware import before_run, after_run, ToolBlocked, before_tool_call

@before_run
async def log_input(prompt, deps, ctx):
    print(f"Input: {prompt}")
    return prompt

@before_tool_call
async def block_dangerous_tools(tool_name, tool_args, deps, ctx):
    if tool_name == "delete_file":
        raise ToolBlocked(tool_name, "Not allowed")
    return tool_args
```
## Core Capabilities

| Capability | Description |
|---|---|
| 6 Lifecycle Hooks | `before_run`, `after_run`, `before_model_request`, `before_tool_call`, `after_tool_call`, `on_error` |
| Parallel Execution | Run multiple middleware concurrently with 4 aggregation strategies |
| Async Guardrails | Run guardrails alongside LLM calls (BLOCKING, CONCURRENT, ASYNC_POST) |
| Middleware Chains | Compose middleware into reusable sequences with the `+` operator |
| Conditional Routing | Route to different middleware based on runtime conditions |
| Config Loading | Build pipelines from JSON/YAML configuration files |
| Context Sharing | Share data between hooks with access control |
| Decorator Syntax | Create middleware from simple decorated functions |
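To illustrate the chaining idea behind the `+` operator, here is a minimal standalone sketch. It is not the library's implementation — the `Middleware`, `Chain`, `Upper`, and `Exclaim` classes below are toy stand-ins, and the real hooks take more parameters than shown.

```python
import asyncio

class Middleware:
    # Toy stand-in for a middleware base class with a pass-through hook.
    async def before_run(self, prompt):
        return prompt

    def __add__(self, other):
        # A chain is itself middleware, so chains compose further with +.
        return Chain([self, other])

class Chain(Middleware):
    def __init__(self, items):
        self.items = items

    async def before_run(self, prompt):
        # Each middleware's before_run transforms the prompt in sequence.
        for m in self.items:
            prompt = await m.before_run(prompt)
        return prompt

class Upper(Middleware):
    async def before_run(self, prompt):
        return prompt.upper()

class Exclaim(Middleware):
    async def before_run(self, prompt):
        return prompt + "!"

pipeline = Upper() + Exclaim()
result = asyncio.run(pipeline.before_run("hello"))
print(result)  # HELLO!
```

The takeaway is that composition yields an ordinary middleware object, so a chain can be reused or combined with further middleware anywhere a single middleware is accepted.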
## Part of the Ecosystem

Pydantic AI Middleware works alongside other vstorm-co packages:

| Package | Description |
|---|---|
| `pydantic-ai` | The foundation: Agent framework by Pydantic |
| `pydantic-deep` | Full agent framework with planning, subagents, skills |
| `pydantic-ai-backend` | File storage, Docker sandbox, permission controls |
| `pydantic-ai-todo` | Task planning with PostgreSQL and event streaming |
| `subagents-pydantic-ai` | Multi-agent orchestration |
| `summarization-pydantic-ai` | Context management processors |
## Installation

Install from PyPI; an optional extra adds YAML config support:
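A typical install, assuming the PyPI distribution name is `pydantic-ai-middleware` and the YAML extra is named `yaml` (both names are assumptions, not verified against the package index):

```shell
# Base install (distribution name assumed from the import name)
pip install pydantic-ai-middleware

# With YAML config support (extra name assumed)
pip install "pydantic-ai-middleware[yaml]"
```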
## llms.txt

Pydantic AI Middleware supports the llms.txt standard. Access documentation at `/llms.txt` for LLM-optimized content.
## Next Steps
- Installation - Get started in minutes
- Core Concepts - Learn about middleware, hooks, and context
- Advanced Features - Chains, parallel execution, config loading
- Examples - Real-world examples
- API Reference - Complete API documentation