History Processor Examples

Comprehensive examples demonstrating PydanticAI's conversation history management features. Learn how to build stateful AI agents that maintain context across multiple interactions.

Learning Path

Work through these examples in order to progressively understand history handling.

1. Basic History Handling

File: 1_basic_history_handling.py · Concepts: Message inspection, JSON serialization, object representation

  • View conversation history in JSON format
  • Access individual messages as objects
  • Understand ModelMessage structure
uv run python 1_basic_history_handling.py
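The JSON-view vs. object-view distinction in this example can be mimicked with the standard library alone. `Message` below is a hypothetical stand-in for the real message objects in `pydantic_ai.messages`; the comments map each view to the corresponding result method:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical stand-in for a PydanticAI message object.
@dataclass
class Message:
    role: str
    content: str

history = [Message("user", "Hi"), Message("assistant", "Hello!")]

# JSON view: roughly what all_messages_json() serializes for you.
as_json = json.dumps([asdict(m) for m in history])

# Object view: what all_messages() returns -- inspect fields directly.
first = history[0]
print(first.role, "->", first.content)
```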

2. Continuous History

File: 2_continuous_history.py · Concepts: Multi-turn conversations, message_history parameter, context passing

  • Build multi-turn conversations with an agent
  • Pass history to maintain context
  • Understand the difference between new_messages() and all_messages()
uv run python 2_continuous_history.py

3. History Usage in Real Workflows

File: 3_history_usage.py · Concepts: Conversation summarization, persistence, JSON serialization

  • Create realistic multi-turn conversation flows
  • Leverage history for conversation summarization
  • Save and load conversation history as JSON
uv run python 3_history_usage.py
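The save/load round trip in this example can be sketched with the standard library alone. `save_history` and `load_history` are hypothetical helpers, not part of PydanticAI; in real code the bytes would come from `result.all_messages_json()`, and loading would use PydanticAI's validation utilities to rebuild message objects rather than plain dicts:

```python
import json
import tempfile
from pathlib import Path

def save_history(path: Path, history_json: bytes) -> None:
    """Persist serialized history bytes (e.g. from result.all_messages_json())."""
    path.write_bytes(history_json)

def load_history(path: Path) -> list[dict]:
    """Load history back as plain dicts; with PydanticAI installed you
    would validate the JSON back into message objects instead."""
    return json.loads(path.read_bytes())

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "conversation.json"
    history = json.dumps([{"role": "user", "content": "Hello"}]).encode()
    save_history(path, history)
    restored = load_history(path)
```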

4. History Filtering

File: 4_history_filtering.py · Concepts: History processors, filtering strategies, ModelRequest vs ModelResponse

  • Filter history to only user messages (ModelRequest)
  • Filter history to only model responses (ModelResponse)
  • Understand constraints: history must end with ModelRequest
uv run python 4_history_filtering.py

5. Context Window Management

5a. Fixed Message Limit

File: 5a_history_length_fixed.py · Concepts: Message count limiting, simple truncation

  • Keep only the last N messages
  • Basic strategy for preventing history bloat
uv run python 5a_history_length_fixed.py

5b. Dynamic Token-Based Management

File: 5b_history_length_dynamic.py · Concepts: Token estimation, RunContext dependency injection, stateful processing

  • Estimate tokens consumed by messages
  • Dynamic context guarding based on thresholds
  • Production-ready patterns for token-aware history
uv run python 5b_history_length_dynamic.py
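The token-aware strategy can be sketched without the library. `Msg`, `estimate_tokens`, and `keep_within_budget` are all illustrative names, and the ~4-characters-per-token figure is only a common rough heuristic; a real processor would receive `ModelMessage` objects and could use an actual tokenizer:

```python
from dataclasses import dataclass

# Hypothetical stand-in for a ModelMessage.
@dataclass
class Msg:
    content: str

def estimate_tokens(msg: Msg) -> int:
    # Rough heuristic: about 4 characters per token for English text.
    return max(1, len(msg.content) // 4)

def keep_within_budget(messages: list[Msg], max_tokens: int = 50) -> list[Msg]:
    """Walk from newest to oldest, keeping messages until the estimated
    token total would exceed the budget, then restore original order."""
    kept: list[Msg] = []
    total = 0
    for msg in reversed(messages):
        cost = estimate_tokens(msg)
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = [Msg("x" * 80), Msg("y" * 80), Msg("z" * 80)]
trimmed = keep_within_budget(history, max_tokens=50)
```

Walking from newest to oldest ensures the most recent context survives, which is usually what the model needs most.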

5c. History Trimming with Tool Calls

File: 5c_history_with_tools.py · Concepts: Tool-call/response pair integrity, safe history slicing

  • Preserve tool-call and tool-response pairs during history trimming
  • Understand why splitting tool pairs breaks agent execution
  • Compare naive truncation vs tool-aware truncation
uv run python 5c_history_with_tools.py
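The pair-integrity idea can be sketched with plain dicts standing in for the real message parts (the `kind` values here are illustrative, not PydanticAI's actual part names). A naive slice can land on a tool return whose matching call was cut off; the fix is to walk the cut point back until the pair is whole:

```python
def trim_keep_tool_pairs(messages: list[dict], keep_last: int) -> list[dict]:
    """Slice to the last keep_last messages, then move the cut point back
    so no tool return is kept without the tool call that produced it."""
    start = max(0, len(messages) - keep_last)
    # If the slice would begin on a tool return, include its call as well.
    while start > 0 and messages[start]["kind"] == "tool_return":
        start -= 1
    return messages[start:]

history = [
    {"kind": "user", "text": "What's the weather?"},
    {"kind": "tool_call", "name": "get_weather"},
    {"kind": "tool_return", "name": "get_weather"},
    {"kind": "response", "text": "It's sunny."},
    {"kind": "user", "text": "Thanks!"},
]
safe = trim_keep_tool_pairs(history, keep_last=3)
```

A naive `history[-3:]` here would start on the orphaned `tool_return`, which is exactly the situation that breaks agent execution.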

6. Persistent History with Database

File: 6_persistent_history.py · Concepts: Database persistence, SQLite ORM, conversation archival

  • Save conversation history to SQLite database
  • Store metadata: prompts, responses, token usage, model information
  • Retrieve and query historical conversations
  • Build a conversation archive system
uv run python 6_persistent_history.py
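The archival pattern reduces to a small amount of `sqlite3`. The table schema and function names below are illustrative, not the ones used in `6_persistent_history.py`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE conversations (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        prompt TEXT NOT NULL,
        response TEXT NOT NULL,
        model TEXT,
        total_tokens INTEGER
    )"""
)

def add_message_to_db(prompt: str, response: str, model: str, tokens: int) -> int:
    """Insert one turn plus its metadata; return the new row id."""
    cur = conn.execute(
        "INSERT INTO conversations (prompt, response, model, total_tokens)"
        " VALUES (?, ?, ?, ?)",
        (prompt, response, model, tokens),
    )
    conn.commit()
    return cur.lastrowid

def get_conversation_by_id(row_id: int) -> tuple:
    """Fetch a stored turn back out of the archive."""
    return conn.execute(
        "SELECT prompt, response FROM conversations WHERE id = ?", (row_id,)
    ).fetchone()

row_id = add_message_to_db("Hi", "Hello!", "openai:gpt-5.1", 12)
record = get_conversation_by_id(row_id)
```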

Key Concepts

Message Types

  • ModelRequest — User/human messages sent to the agent
  • ModelResponse — Responses generated by the AI model
  • ModelMessage — Base type for any message in the conversation

History Methods

  • result.new_messages() — Only the messages produced during the most recent run
  • result.all_messages() — Complete conversation history up to this point
  • result.all_messages_json() — Serialized history as JSON bytes

History Processors

Functions that transform history before sending to the model:

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage, ModelRequest

def my_processor(messages: list[ModelMessage]) -> list[ModelMessage]:
    """Transform history before the agent processes it."""
    return [msg for msg in messages if isinstance(msg, ModelRequest)]

agent = Agent("openai:gpt-5.1", history_processors=[my_processor])

Important Constraint

History must always end with a ModelRequest (user message). If you filter to only ModelResponse messages, the agent cannot process the history.
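One way to honor this constraint is to filter everything except the final message, which is always the request that triggers the run. The `ModelRequest`/`ModelResponse` classes below are hypothetical dataclass stand-ins for the real PydanticAI types, used only to make the sketch self-contained:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for pydantic_ai.messages.ModelRequest / ModelResponse.
@dataclass
class ModelRequest:
    content: str

@dataclass
class ModelResponse:
    content: str

def keep_requests_only(messages: list) -> list:
    """Filter earlier messages to user requests, but always keep the last
    message so the history still ends with a ModelRequest."""
    filtered = [m for m in messages[:-1] if isinstance(m, ModelRequest)]
    return filtered + messages[-1:]

history = [ModelRequest("Hi"), ModelResponse("Hello!"), ModelRequest("Bye")]
safe = keep_requests_only(history)
```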

Common Patterns

Stateful Conversation

# First turn
result_1 = agent.run_sync("Your prompt here")

# Second turn with context
result_2 = agent.run_sync(
    "Follow-up question",
    message_history=result_1.new_messages()
)

Context Window Management

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage

def keep_last_messages(
    messages: list[ModelMessage],
    num_messages: int = 3
) -> list[ModelMessage]:
    return (
        messages[-num_messages:]
        if len(messages) > num_messages
        else messages
    )

agent = Agent("openai:gpt-5.1", history_processors=[keep_last_messages])

Database Persistence

# Save conversation to database
record = prepare_data_for_db(prompt, result)
add_message_to_db(record)

# Retrieve conversation
conversation = get_conversation_by_id(record.id)

Further Reading