History Processor Examples¶
Comprehensive examples demonstrating PydanticAI's conversation history management features. Learn how to build stateful AI agents that maintain context across multiple interactions.
Learning Path¶
Work through these examples in order to progressively understand history handling.
1. Basic History Handling¶
File: 1_basic_history_handling.py · Concepts: Message inspection, JSON serialization, object representation
- View conversation history in JSON format
- Access individual messages as objects
- Understand the `ModelMessage` structure
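For orientation, here is a minimal sketch of that kind of inspection (the model name and prompt are placeholders, not taken from the example file):

```python
from pydantic_ai import Agent

agent = Agent("openai:gpt-5.1")
result = agent.run_sync("What is the capital of France?")

# Each entry is a ModelMessage: either a ModelRequest or a ModelResponse
for message in result.all_messages():
    print(type(message).__name__, message)

# The same history, serialized as JSON bytes
print(result.all_messages_json().decode())
```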
2. Continuous History¶
File: 2_continuous_history.py · Concepts: Multi-turn conversations, message_history parameter, context passing
- Build multi-turn conversations with an agent
- Pass history to maintain context
- Understand the difference between `new_messages()` and `all_messages()`
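A minimal sketch of that difference (prompts and model name are placeholders):

```python
from pydantic_ai import Agent

agent = Agent("openai:gpt-5.1")

first = agent.run_sync("Tell me a joke.")
second = agent.run_sync(
    "Explain the joke.",
    message_history=first.new_messages(),  # carry the first turn forward
)

# new_messages() holds only the request/response of the second run;
# all_messages() also includes everything inherited from the first run.
print(len(second.new_messages()), len(second.all_messages()))
```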
3. History Usage in Real Workflows¶
File: 3_history_usage.py · Concepts: Conversation summarization, persistence, JSON serialization
- Create realistic multi-turn conversation flows
- Leverage history for conversation summarization
- Save and load conversation history as JSON
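Saving and reloading history as JSON can be sketched roughly as follows; the file name is arbitrary, and `ModelMessagesTypeAdapter` is PydanticAI's type adapter for deserializing message lists:

```python
from pathlib import Path

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessagesTypeAdapter

agent = Agent("openai:gpt-5.1")
result = agent.run_sync("Summarize the plot of Hamlet in one sentence.")

# Persist the complete history as JSON bytes
Path("conversation.json").write_bytes(result.all_messages_json())

# Later: load it back and continue with the saved context
history = ModelMessagesTypeAdapter.validate_json(Path("conversation.json").read_bytes())
follow_up = agent.run_sync("Now do it in two sentences.", message_history=history)
```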
4. History Filtering¶
File: 4_history_filtering.py · Concepts: History processors, filtering strategies, ModelRequest vs ModelResponse
- Filter history to only user messages (`ModelRequest`)
- Filter history to only model responses (`ModelResponse`)
- Understand the constraint: history must end with a `ModelRequest`
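The requests-only case appears under History Processors below. For the responses-only direction, one way to satisfy that constraint is to re-append the newest request; a sketch, not necessarily how the example file handles it:

```python
from pydantic_ai.messages import ModelMessage, ModelRequest, ModelResponse


def keep_responses_and_latest_request(messages: list[ModelMessage]) -> list[ModelMessage]:
    """Keep only model responses, plus the newest request so the history stays valid."""
    responses = [m for m in messages if isinstance(m, ModelResponse)]
    latest_request = next(m for m in reversed(messages) if isinstance(m, ModelRequest))
    # The returned history must still end with a ModelRequest (the current user prompt)
    return [*responses, latest_request]
```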
5. Context Window Management¶
5a. Fixed Message Limit¶
File: 5a_history_length_fixed.py · Concepts: Message count limiting, simple truncation
- Keep only the last N messages
- Basic strategy for preventing history bloat
5b. Dynamic Token-Based Management¶
File: 5b_history_length_dynamic.py · Concepts: Token estimation, RunContext dependency injection, stateful processing
- Estimate tokens consumed by messages
- Dynamic context guarding based on thresholds
- Production-ready patterns for token-aware history
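A rough sketch of such a processor, assuming a ~4-characters-per-token heuristic and an arbitrary budget (`MAX_ESTIMATED_TOKENS` is illustrative, not from the example file); PydanticAI also lets processors accept a `RunContext` as their first argument for context-aware decisions:

```python
from pydantic_ai import Agent, RunContext
from pydantic_ai.messages import ModelMessage

MAX_ESTIMATED_TOKENS = 2_000  # illustrative budget


def estimate_tokens(message: ModelMessage) -> int:
    # Very rough heuristic: roughly 4 characters per token of the message repr
    return len(repr(message)) // 4


def token_limited_history(
    ctx: RunContext[None], messages: list[ModelMessage]
) -> list[ModelMessage]:
    # Drop the oldest messages until the estimated total fits within the budget
    trimmed = list(messages)
    while len(trimmed) > 1 and sum(map(estimate_tokens, trimmed)) > MAX_ESTIMATED_TOKENS:
        trimmed.pop(0)
    return trimmed


agent = Agent("openai:gpt-5.1", history_processors=[token_limited_history])
```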
5c. History Trimming with Tool Calls¶
File: 5c_history_with_tools.py · Concepts: Tool-call/response pair integrity, safe history slicing
- Preserve tool-call and tool-response pairs during history trimming
- Understand why splitting tool pairs breaks agent execution
- Compare naive truncation vs tool-aware truncation
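One possible shape of a tool-aware trim, using PydanticAI's `ToolReturnPart` message part (a sketch under the assumption that an orphaned tool result is simply dropped; the example file may resolve pairs differently):

```python
from pydantic_ai.messages import ModelMessage, ModelRequest, ToolReturnPart


def keep_recent_without_orphan_tool_results(
    messages: list[ModelMessage], max_messages: int = 6
) -> list[ModelMessage]:
    """Truncate history without separating tool calls from their results."""
    trimmed = messages[-max_messages:]
    # If truncation kept a ModelRequest carrying tool results whose originating
    # tool call was cut off, drop it too: an orphaned ToolReturnPart references
    # a tool call the model will never see.
    while trimmed and isinstance(trimmed[0], ModelRequest) and any(
        isinstance(part, ToolReturnPart) for part in trimmed[0].parts
    ):
        trimmed = trimmed[1:]
    return trimmed
```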
6. Persistent History with Database¶
File: 6_persistent_history.py · Concepts: Database persistence, SQLite ORM, conversation archival
- Save conversation history to SQLite database
- Store metadata: prompts, responses, token usage, model information
- Retrieve and query historical conversations
- Build a conversation archive system
Key Concepts¶
Message Types¶
- `ModelRequest` — User/human messages sent to the agent
- `ModelResponse` — Responses generated by the AI model
- `ModelMessage` — Base type for any message in the conversation
History Methods¶
- `result.new_messages()` — Only the messages from the current run
- `result.all_messages()` — Complete conversation history up to this point
- `result.all_messages_json()` — Serialized history as JSON bytes
History Processors¶
Functions that transform the conversation history before it is sent to the model:
```python
from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage, ModelRequest


def my_processor(messages: list[ModelMessage]) -> list[ModelMessage]:
    """Transform history before it is sent to the model."""
    return [msg for msg in messages if isinstance(msg, ModelRequest)]


agent = Agent("openai:gpt-5.1", history_processors=[my_processor])
```
Important Constraint
History must always end with a `ModelRequest` (user message). If you filter to only `ModelResponse` messages, the agent cannot process the history.
Common Patterns¶
Stateful Conversation¶
```python
from pydantic_ai import Agent

agent = Agent("openai:gpt-5.1")

# First turn
result_1 = agent.run_sync("Your prompt here")

# Second turn with context from the first turn
result_2 = agent.run_sync(
    "Follow-up question",
    message_history=result_1.new_messages(),
)
```
Context Window Management¶
```python
from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage


def keep_last_messages(
    messages: list[ModelMessage],
    num_messages: int = 3,
) -> list[ModelMessage]:
    # Keep only the most recent messages to bound the context window
    return messages[-num_messages:] if len(messages) > num_messages else messages


agent = Agent("openai:gpt-5.1", history_processors=[keep_last_messages])
```
Database Persistence¶
```python
# Save conversation to database
record = prepare_data_for_db(prompt, result)
add_message_to_db(record)

# Retrieve conversation
conversation = get_conversation_by_id(record.id)
```
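The helper names above come from the example file; as a rough, self-contained illustration, such helpers could be built on the standard library's sqlite3 module (the schema, field names, and the use of `result.output` for the final answer are illustrative assumptions):

```python
import sqlite3
from contextlib import closing
from dataclasses import dataclass

DB_PATH = "conversations.db"  # illustrative file name


@dataclass
class ConversationRecord:
    prompt: str
    response: str
    history_json: bytes
    id: int | None = None


def _connect() -> sqlite3.Connection:
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS conversations ("
        "id INTEGER PRIMARY KEY AUTOINCREMENT, "
        "prompt TEXT, response TEXT, history_json BLOB)"
    )
    return conn


def prepare_data_for_db(prompt: str, result) -> ConversationRecord:
    # result is a PydanticAI run result: .output holds the final answer,
    # .all_messages_json() the serialized history
    return ConversationRecord(prompt, str(result.output), result.all_messages_json())


def add_message_to_db(record: ConversationRecord) -> None:
    with closing(_connect()) as conn, conn:  # closes the connection, commits the insert
        cursor = conn.execute(
            "INSERT INTO conversations (prompt, response, history_json) VALUES (?, ?, ?)",
            (record.prompt, record.response, record.history_json),
        )
        record.id = cursor.lastrowid


def get_conversation_by_id(conversation_id: int) -> tuple | None:
    with closing(_connect()) as conn:
        return conn.execute(
            "SELECT prompt, response, history_json FROM conversations WHERE id = ?",
            (conversation_id,),
        ).fetchone()
```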