Introduction

Memorer is production-grade memory infrastructure for conversational AI — sub-100ms semantic recall, automatic knowledge graphs, and zero-maintenance consolidation.

Why Memorer

  • Fast retrieval — Sub-100ms semantic recall, not keyword matching. Queries return ranked, relevant context ready to inject into your prompt
  • True understanding — Every remember() call extracts entities, relationships, and categories automatically. Your data becomes a structured knowledge graph, not a pile of embeddings
  • Production scale — Built to handle millions of users. Memory stays fast and accurate as your data grows
  • Zero maintenance — Automatic consolidation merges duplicates, resolves contradictions, and removes stale data. Your knowledge base stays clean without manual curation
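To make the consolidation idea above concrete, here is a toy sketch in plain Python: a naive pass that drops exact duplicates and entries older than a retention window. This is illustrative only — it is not Memorer's actual algorithm, which the text says also resolves contradictions and presumably works on semantic similarity rather than exact string matches.

```python
from datetime import datetime, timedelta, timezone

def consolidate(memories, max_age_days=90):
    """Toy consolidation: keep each unique text once, drop stale entries."""
    now = datetime.now(timezone.utc)
    seen = set()
    kept = []
    for m in memories:  # each m is {"text": str, "created_at": datetime}
        if now - m["created_at"] > timedelta(days=max_age_days):
            continue  # stale: past the retention window
        if m["text"] in seen:
            continue  # exact duplicate of an earlier memory
        seen.add(m["text"])
        kept.append(m)
    return kept

now = datetime.now(timezone.utc)
memories = [
    {"text": "Prefers dark mode", "created_at": now},
    {"text": "Prefers dark mode", "created_at": now},               # duplicate
    {"text": "Uses Vim", "created_at": now - timedelta(days=400)},  # stale
]
print([m["text"] for m in consolidate(memories)])  # ["Prefers dark mode"]
```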

How it works

from memorer import memorer

client = memorer(api_key="mem_sk_...")
user = client.for_user("user-123")

# Store a memory — returns IngestResponse with extraction counts
user.remember("The user prefers dark mode and uses VS Code")

# Recall relevant memories — returns QueryResponse with a context string
results = user.recall("What editor does the user prefer?")
print(results.context)  # "The user prefers dark mode and uses VS Code"

The SDK is synchronous — no await needed.
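Because the SDK is synchronous, calling it directly from an async framework would block the event loop. A common workaround is to run the call in a worker thread with `asyncio.to_thread`. In this sketch, `recall_blocking` is a stand-in stub for a blocking `user.recall()` call, not a Memorer API:

```python
import asyncio
import time

def recall_blocking(query: str) -> str:
    """Stand-in for a synchronous SDK call (simulates network latency)."""
    time.sleep(0.01)
    return "The user prefers dark mode and uses VS Code"

async def handle_request() -> str:
    # Run the sync call in a worker thread so the event loop stays responsive.
    return await asyncio.to_thread(recall_blocking, "What editor does the user prefer?")

context = asyncio.run(handle_request())
print(context)  # "The user prefers dark mode and uses VS Code"
```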

Key concepts

  • Memories — Individual pieces of stored knowledge (direct, derived, or inferred)
  • Entities — Named things extracted from memories (people, places, preferences)
  • Relationships — Connections between entities in the knowledge graph
  • Conversations — Sessions that combine short-term context with long-term memory
  • Consolidation — Automatic cleanup that merges duplicates and removes stale data
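To make the entity and relationship concepts concrete, here is a minimal, hypothetical data model — a sketch of how extracted entities and their connections might fit together, not Memorer's internal schema:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    name: str
    kind: str  # e.g. "person", "preference", "tool"

@dataclass
class KnowledgeGraph:
    entities: set = field(default_factory=set)
    relationships: list = field(default_factory=list)  # (subject, predicate, object)

    def add(self, subject: Entity, predicate: str, obj: Entity):
        self.entities.update({subject, obj})
        self.relationships.append((subject, predicate, obj))

# A remember() call like the one above might yield a graph such as:
g = KnowledgeGraph()
user = Entity("user-123", "person")
g.add(user, "prefers", Entity("dark mode", "preference"))
g.add(user, "uses", Entity("VS Code", "tool"))
print(len(g.entities), len(g.relationships))  # 3 2
```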

