# Examples
All examples are interactive chat agents with persistent memory. They share the same integration pattern (sketched in code below):

- Retrieve relevant context from memory
- Inject the context into the agent/LLM prompt
- Store the exchange after the response
- Processing is triggered automatically in the background
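The loop each example implements looks roughly like the following sketch. The `memory.search`/`memory.add` calls and the `llm.complete` signature are placeholders for whichever memory client and model wrapper a given example uses, not a specific API.

```python
# Minimal sketch of the shared pattern. `memory` and `llm` stand in for the
# example's actual memory client and model wrapper; the method names below
# are illustrative, not the library's real API.

def chat_turn(memory, llm, user_id: str, user_message: str) -> str:
    # 1. Retrieve relevant context from memory
    context = memory.search(query=user_message, user_id=user_id)

    # 2. Inject the context into the agent/LLM prompt
    system_prompt = "Relevant memories:\n" + "\n".join(m["text"] for m in context)
    reply = llm.complete(system=system_prompt, user=user_message)

    # 3. Store the exchange after the response; extraction and consolidation
    #    are assumed to run automatically in the background
    memory.add(
        messages=[
            {"role": "user", "content": user_message},
            {"role": "assistant", "content": reply},
        ],
        user_id=user_id,
    )
    return reply
```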
## Prerequisites

## Available Examples
| Example | Framework | Run Command |
|---|---|---|
| Quickstart | None (core API) | `uv run python examples/quickstart.py` |
| Full Demo | None (core API) | `uv run python examples/demo.py` |
| PydanticAI | PydanticAI | `uv run python examples/pydantic_ai_agent.py` |
| LangGraph | LangGraph + LangChain | `uv run python examples/langgraph_agent.py` |
| LlamaIndex | LlamaIndex | `uv run python examples/llamaindex_agent.py` |
| CrewAI | CrewAI | `uv run python examples/crewai_agent.py` |
| AutoGen | Microsoft AutoGen | `uv run python examples/autogen_agent.py` |
All interactive agents support these commands:

- `quit` — Flush remaining messages and exit
- `flush` — Force-process buffered messages
- `debug` — Show current memory contents
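A hedged sketch of how such an input loop might dispatch these commands is shown below; `memory.flush()` and `memory.dump()` are illustrative names standing in for whatever the examples actually call, and `agent.run` stands in for a normal chat turn.

```python
# Illustrative command loop; method names are placeholders, not the real API.

def repl(memory, agent) -> None:
    while True:
        text = input("> ").strip()
        if text == "quit":
            memory.flush()        # flush remaining buffered messages, then exit
            break
        if text == "flush":
            memory.flush()        # force-process buffered messages now
            continue
        if text == "debug":
            print(memory.dump())  # show current memory contents
            continue
        print(agent.run(text))    # ordinary chat turn
```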