Concepts Overview
Memorer provides persistent memory for AI applications through a layered architecture:
Architecture
User Input
↓
┌─────────────┐
│ Remember │ Extract entities & relationships from text
└──────┬──────┘
↓
┌─────────────┐
│ Knowledge │ Store in vector DB + graph DB
│ Graph │
└──────┬──────┘
↓
┌─────────────┐
│ Recall │ Semantic search + graph reasoning
└──────┬──────┘
↓
Context for LLM

Core concepts
Memories
Individual pieces of stored knowledge. Memories can be direct (explicitly stored), derived (extracted from text), or inferred (generated through reasoning). Memories can also be consolidated (duplicates merged, stale entries pruned) to keep your knowledge base clean.
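As a rough mental model, a stored memory can be pictured as a record carrying its content and how it came to exist. The field names below are illustrative only, not Memorer's actual schema:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class MemoryRecord:
    content: str
    # How the memory came to exist:
    #   direct   -> explicitly stored by the application
    #   derived  -> extracted from ingested text
    #   inferred -> produced by reasoning over existing memories
    source: Literal["direct", "derived", "inferred"]

preference = MemoryRecord("User prefers vegetarian restaurants", source="derived")
```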
Entities
Named things extracted from memories — people, places, organizations, preferences, skills. Each entity has a type, category, and importance score. Entities are connected through typed relationships.
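The sketch below illustrates the shape of this data. The class and field names are assumptions made for explanation, not Memorer's schema:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str          # e.g. "Ada"
    type: str          # e.g. "person", "place", "organization"
    category: str      # finer-grained grouping, e.g. "colleague"
    importance: float  # relative weight when ranking recall results

@dataclass
class Relationship:
    source: Entity
    target: Entity
    relation: str      # typed edge, e.g. "works_at", "prefers"

ada = Entity("Ada", "person", "colleague", 0.9)
acme = Entity("Acme Corp", "organization", "employer", 0.7)
edge = Relationship(ada, acme, "works_at")
```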
Knowledge Graph
The graph structure that connects entities through relationships. Supports community detection (finding clusters of related entities) and deduplication (merging duplicate entities).
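As a toy illustration of what community detection means here, the following sketch groups entities that are connected to one another. A production implementation would use a proper graph-clustering algorithm; this version only finds connected components:

```python
from collections import defaultdict

# Relationships between entities (treated as undirected for clustering).
edges = [("Ada", "Acme Corp"), ("Ada", "Berlin"), ("Bob", "Chess Club")]

adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

def communities(adj):
    """Group entities into connected components (a stand-in for real clustering)."""
    seen, groups = set(), []
    for node in list(adj):
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            current = stack.pop()
            if current in seen:
                continue
            seen.add(current)
            group.add(current)
            stack.extend(adj[current])
        groups.append(group)
    return groups

print(communities(adjacency))
# Two clusters: {Ada, Acme Corp, Berlin} and {Bob, Chess Club}
```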
Conversations
Session-based message tracking that combines short-term context (recent messages) with long-term memory (semantic search). Conversations automatically extract memories from messages.
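The toy sketch below shows the idea of combining short-term and long-term context for a single turn. The keyword-overlap matching is a stand-in for the semantic search Memorer actually performs:

```python
# Long-term memories already stored for this user (stand-in data).
long_term = [
    "User prefers vegetarian restaurants",
    "User lives in Berlin",
    "User dislikes early-morning meetings",
]

def build_context(history, new_message, window=5):
    recent = history[-window:] + [new_message]           # short-term: last few messages
    words = set(new_message.lower().split())
    relevant = [m for m in long_term                     # long-term: matching memories
                if words & set(m.lower().split())]
    return recent, relevant

history = ["user: hi", "assistant: hello!"]
print(build_context(history, "Find me a vegetarian place for dinner"))
# (['user: hi', 'assistant: hello!', 'Find me a vegetarian place for dinner'],
#  ['User prefers vegetarian restaurants'])
```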
Data flow
- Ingestion — Text goes in via `remember()` or `knowledge.ingest()`
- Extraction — Entities and relationships are automatically extracted
- Storage — Data is stored in both vector (for semantic search) and graph (for reasoning) databases
- Retrieval — `recall()` combines vector similarity search with optional graph traversal
- Consolidation — Periodic cleanup merges duplicates and removes stale data
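The self-contained sketch below walks a piece of text through these steps with in-memory stand-ins. None of it reflects Memorer's internals; it only shows the shape of the flow:

```python
vector_store, graph_store = [], []   # stand-ins for the vector DB and graph DB

def remember(text):
    # Ingestion + extraction: a real system would use an LLM or NER model;
    # here, any capitalized word counts as an "entity".
    entities = [w for w in text.split() if w.istitle()]
    # Storage: write the raw text and the extracted relationships to both stores.
    vector_store.append(text)
    graph_store.extend((a, "related_to", b) for a, b in zip(entities, entities[1:]))

def recall(query, top_k=3):
    # Retrieval: rank stored texts by naive word overlap instead of embeddings.
    q = set(query.lower().split())
    return sorted(vector_store,
                  key=lambda t: -len(q & set(t.lower().split())))[:top_k]

# Consolidation would periodically merge duplicates and drop stale entries
# from these stores.

remember("Ada moved to Berlin and joined Acme")
print(recall("Where does Ada work now?"))   # ['Ada moved to Berlin and joined Acme']
```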