Introduction
Memorer is production-grade memory infrastructure for conversational AI — sub-100ms semantic recall, automatic knowledge graphs, and zero-maintenance consolidation.
Why Memorer
- Fast retrieval — Sub-100ms semantic recall, not keyword matching. Queries return ranked, relevant context ready to inject into your prompt
- True understanding — Every `remember()` call extracts entities, relationships, and categories automatically. Your data becomes a structured knowledge graph, not a pile of embeddings
- Production scale — Built to handle millions of users. Memory stays fast and accurate as your data grows
- Zero maintenance — Automatic consolidation merges duplicates, resolves contradictions, and removes stale data. Your knowledge base stays clean without manual curation
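To make "structured knowledge graph, not a pile of embeddings" concrete, here is a toy illustration of how a single memory could decompose into graph pieces. This is plain Python, not Memorer's actual schema; every field name below is hypothetical.

```python
# Hypothetical sketch: one stored memory broken into entities and
# relationships. Memorer's real extraction output may be shaped differently.
memory = "The user prefers dark mode and uses VS Code"

entities = [
    {"name": "user", "type": "person"},
    {"name": "dark mode", "type": "preference"},
    {"name": "VS Code", "type": "tool"},
]

relationships = [
    ("user", "prefers", "dark mode"),
    ("user", "uses", "VS Code"),
]
```

The point of this structure is that a later query like "what editor?" can follow the `uses` edge directly instead of relying on embedding similarity alone.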
How it works
```python
from memorer import memorer

client = memorer(api_key="mem_sk_...")
user = client.for_user("user-123")

# Store a memory — returns IngestResponse with extraction counts
user.remember("The user prefers dark mode and uses VS Code")

# Recall relevant memories — returns QueryResponse with context string
results = user.recall("What editor does the user prefer?")
print(results.context)
# "The user prefers dark mode and uses VS Code"
```

The SDK is synchronous — no await needed.
Key concepts
| Concept | Description |
|---|---|
| Memories | Individual pieces of stored knowledge (direct, derived, or inferred) |
| Entities | Named things extracted from memories (people, places, preferences) |
| Relationships | Connections between entities in the knowledge graph |
| Conversations | Sessions that combine short-term context with long-term memory |
| Consolidation | Automatic cleanup that merges duplicates and removes stale data |
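Consolidation happens server-side in Memorer, but the core idea — merge duplicates, keep the freshest version of a fact — can be sketched in a few lines. This is a conceptual model only; the field names and the naive lowercase-text matching are ours, not the service's actual algorithm:

```python
from datetime import datetime

# Toy memory store: two of these say the same thing
memories = [
    {"text": "User prefers dark mode", "seen": datetime(2024, 1, 1)},
    {"text": "user prefers dark mode", "seen": datetime(2024, 3, 1)},
    {"text": "User uses VS Code", "seen": datetime(2024, 2, 1)},
]

def consolidate(mems):
    """Collapse memories with identical normalized text, keeping the newest."""
    merged = {}
    for m in sorted(mems, key=lambda m: m["seen"]):
        merged[m["text"].lower()] = m  # later duplicates overwrite earlier ones
    return list(merged.values())

clean = consolidate(memories)
# two distinct facts remain; the newer duplicate wins
```

A real consolidator also has to resolve contradictions (e.g. "prefers dark mode" vs. a later "prefers light mode"), which the recency-wins rule above handles only for exact duplicates.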
Next steps
- Getting started — Install the SDK and store your first memory
- Concepts — Understand the architecture
- SDK Reference — Full API documentation
- Integrations — Use with OpenAI, Anthropic, or LangChain