
Concepts Overview

Memorer provides persistent memory for AI applications through a layered architecture:

Architecture

```
User Input
     │
     ▼
┌─────────────┐
│  Remember   │  Extract entities & relationships from text
└──────┬──────┘
       │
       ▼
┌─────────────┐
│  Knowledge  │  Store in vector DB + graph DB
│   Graph     │
└──────┬──────┘
       │
       ▼
┌─────────────┐
│   Recall    │  Semantic search + graph reasoning
└──────┬──────┘
       │
       ▼
Context for LLM
```

Core concepts

Memories

Individual pieces of stored knowledge. Memories can be direct (explicitly stored), derived (extracted from text), or inferred (generated through reasoning). Memories support consolidation to keep your knowledge base clean.
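Memorer's internal schema isn't documented here, but the three memory kinds and the idea of consolidation can be sketched with a minimal stand-in model (all names below are hypothetical, not Memorer's API):

```python
from dataclasses import dataclass
from enum import Enum

class MemoryKind(Enum):
    DIRECT = "direct"      # explicitly stored by the caller
    DERIVED = "derived"    # extracted from ingested text
    INFERRED = "inferred"  # generated through reasoning

@dataclass(frozen=True)
class Memory:
    text: str
    kind: MemoryKind

def consolidate(memories):
    """Drop exact-duplicate texts (case-insensitive), keeping the first seen."""
    seen, kept = set(), []
    for m in memories:
        key = m.text.strip().lower()
        if key not in seen:
            seen.add(key)
            kept.append(m)
    return kept

mems = [
    Memory("Alice lives in Paris", MemoryKind.DIRECT),
    Memory("alice lives in paris", MemoryKind.DERIVED),  # duplicate of the first
    Memory("Alice likely speaks French", MemoryKind.INFERRED),
]
print(len(consolidate(mems)))  # 2 — the derived duplicate is merged away
```

Real consolidation would use semantic similarity rather than string matching, but the effect is the same: redundant memories collapse into one.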

Entities

Named things extracted from memories — people, places, organizations, preferences, skills. Each entity has a type, category, and importance score. Entities are connected through typed relationships.
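As a rough sketch of the shape described above — an entity with a type, category, and importance score, linked by typed relationships — using hypothetical field names rather than Memorer's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    type: str          # e.g. "person", "place", "organization"
    category: str      # coarser grouping, e.g. "agent", "location"
    importance: float  # relevance score, assumed 0.0-1.0 here

@dataclass
class Relationship:
    source: str
    relation: str      # typed edge label, e.g. "works_at"
    target: str

alice = Entity("Alice", "person", "agent", 0.9)
acme = Entity("Acme Corp", "organization", "group", 0.6)
edge = Relationship("Alice", "works_at", "Acme Corp")
print(f"{edge.source} -[{edge.relation}]-> {edge.target}")
# Alice -[works_at]-> Acme Corp
```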

Knowledge Graph

The graph structure that connects entities through relationships. Supports community detection (finding clusters of related entities) and deduplication (merging duplicate entities).
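To make the two graph operations concrete, here is a toy version of each — connected components standing in for real community detection, and normalized-name matching standing in for real deduplication (production systems use modularity-based clustering and embedding similarity):

```python
from collections import defaultdict

def communities(edges):
    """Cluster entities by connectivity (naive connected components)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, clusters = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, cluster = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            cluster.add(n)
            stack.extend(adj[n])
        clusters.append(cluster)
    return clusters

def dedupe(names):
    """Merge entities whose normalized names collide, keeping the first form."""
    merged = {}
    for n in names:
        merged.setdefault(n.strip().lower(), n)
    return list(merged.values())

edges = [("Alice", "Acme"), ("Acme", "Bob"), ("Carol", "Initech")]
print(len(communities(edges)))             # 2 clusters of related entities
print(dedupe(["Alice", "alice ", "Bob"]))  # ['Alice', 'Bob']
```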

Conversations

Session-based message tracking that combines short-term context (recent messages) with long-term memory (semantic search). Conversations automatically extract memories from messages.
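The short-term/long-term split can be sketched as follows — a bounded window of recent messages plus a searchable store of everything older. Keyword overlap stands in for semantic search here; the class and its methods are illustrative, not Memorer's API:

```python
from collections import deque

class Conversation:
    """Recent-message window plus a long-term store searched by
    naive keyword overlap (a stand-in for semantic search)."""

    def __init__(self, window=3):
        self.recent = deque(maxlen=window)  # short-term context
        self.long_term = []                 # every message ever seen

    def add(self, message):
        self.recent.append(message)
        self.long_term.append(message)

    def context(self, query):
        words = set(query.lower().split())
        # rank long-term messages by word overlap with the query
        recalled = sorted(
            self.long_term,
            key=lambda m: len(words & set(m.lower().split())),
            reverse=True,
        )[:2]
        # recalled memories plus the recent window, deduplicated in order
        return list(dict.fromkeys(recalled + list(self.recent)))

conv = Conversation(window=2)
for msg in ["I love hiking", "My dog is Rex", "Book a flight", "To Lisbon"]:
    conv.add(msg)
ctx = conv.context("hiking trails")
print("I love hiking" in ctx)  # True: recalled even after leaving the window
```

The point of the combination: "I love hiking" fell out of the two-message window long ago, yet a relevant query still surfaces it from long-term memory alongside the recent context.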

Data flow

  1. Ingestion — Text goes in via remember() or knowledge.ingest()
  2. Extraction — Entities and relationships are automatically extracted
  3. Storage — Data is stored in both vector (for semantic search) and graph (for reasoning) databases
  4. Retrieval — recall() combines vector similarity search with optional graph traversal
  5. Consolidation — Periodic cleanup merges duplicates and removes stale data
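End to end, steps 1-4 can be sketched with a toy in-memory pipeline. The extraction and retrieval logic below is deliberately simplistic ("X is Y" pattern matching and substring lookup in place of LLM extraction and vector search), and the function names mirror the concepts rather than Memorer's real signatures:

```python
def remember(store, text):
    """Ingestion + extraction: pull naive 'X is Y' facts out of free text."""
    for sentence in text.split("."):
        parts = sentence.strip().split(" is ")
        if len(parts) == 2:
            subject, obj = parts
            store.setdefault(subject, set()).add(obj)  # storage step

def recall(store, query):
    """Retrieval: return stored facts about entities named in the query."""
    hits = []
    for subject, objs in store.items():
        if subject.lower() in query.lower():
            hits.extend(f"{subject} is {o}" for o in sorted(objs))
    return hits

store = {}
remember(store, "Alice is an engineer. Paris is a city. Alice is French")
print(recall(store, "Tell me about Alice"))
```

Step 5, consolidation, would run periodically over `store` to merge near-duplicate subjects and expire stale facts, as in the Memories section above.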