Concepts Overview
Memorer’s architecture is designed for fast, accurate recall at scale — not just storing text in a vector database. Every piece of data flows through extraction, structured storage, and semantic retrieval.
Architecture
User Input
↓
┌─────────────┐
│ Remember │ Extract entities & relationships from text
└──────┬──────┘
↓
┌─────────────┐
│ Knowledge │ Store in vector DB + graph DB
│ Graph │
└──────┬──────┘
↓
┌─────────────┐
│ Recall │ Semantic search + graph reasoning
└──────┬──────┘
↓
Context for LLM
Core concepts
Memories
Categorized, consolidated, and ranked — not raw text dumps. Memories are automatically classified as direct, derived, or inferred, and consolidation keeps your knowledge base clean without manual curation.
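A minimal sketch of what categorized, ranked memories with automatic consolidation could look like. The `Memory` record and `consolidate` helper here are illustrative assumptions, not Memorer's actual schema or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Memory:
    text: str
    kind: str    # "direct", "derived", or "inferred" (the classes named above)
    rank: float  # importance score used for retrieval ordering

def consolidate(memories):
    """Merge exact-duplicate texts, keeping the highest-ranked copy."""
    best = {}
    for m in memories:
        if m.text not in best or m.rank > best[m.text].rank:
            best[m.text] = m
    return sorted(best.values(), key=lambda m: m.rank, reverse=True)

mems = [
    Memory("Alice works at Acme", "direct", 0.9),
    Memory("Alice works at Acme", "derived", 0.4),  # duplicate, lower rank
    Memory("Alice likes green tea", "inferred", 0.6),
]
clean = consolidate(mems)  # duplicate dropped, results ranked by importance
```

The point of the sketch: consolidation is a store-side concern, so callers never curate duplicates by hand.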
Entities
Structured, queryable data extracted from unstructured text. Every person, place, organization, and preference becomes a typed node with an importance score and relationships — so you can ask “who does Alice work with?” and get a real answer.
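To make the "who does Alice work with?" example concrete, here is a toy version of typed entity nodes plus relationship edges. The dictionary/triple shapes and the `coworkers` helper are assumptions for illustration, not Memorer's data model.

```python
# Typed nodes with importance scores (illustrative shapes).
entities = {
    "Alice": {"type": "person", "importance": 0.9},
    "Bob": {"type": "person", "importance": 0.7},
    "Acme": {"type": "organization", "importance": 0.8},
}
# Relationships as (subject, relation, object) triples.
relations = [
    ("Alice", "works_at", "Acme"),
    ("Bob", "works_at", "Acme"),
]

def coworkers(person):
    """People sharing a works_at edge with `person`."""
    orgs = {t for s, r, t in relations if s == person and r == "works_at"}
    return sorted(s for s, r, t in relations
                  if r == "works_at" and t in orgs and s != person)

coworkers("Alice")  # → ["Bob"]
```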
Knowledge Graph
What makes multi-hop reasoning possible. Vector search alone can’t connect “Alice works at Acme” with “Acme is in Seattle” to answer “what city does Alice work in?” The graph handles that.
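The two-hop chain behind that example can be sketched directly; the triple format and `hop` helper are illustrative, not Memorer's traversal API.

```python
# Facts as (subject, relation, object) triples in a tiny graph.
facts = [
    ("Alice", "works_at", "Acme"),
    ("Acme", "located_in", "Seattle"),
]

def hop(subject, relation):
    """All objects reachable from `subject` via one edge of type `relation`."""
    return [o for s, r, o in facts if s == subject and r == relation]

# Answering the city question requires chaining both facts:
cities = [city for org in hop("Alice", "works_at")
          for city in hop(org, "located_in")]  # → ["Seattle"]
```

A pure embedding lookup can surface either fact on its own, but only the traversal joins them.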
Conversations
Short-term context and long-term memory without the token bloat. Instead of stuffing entire chat histories into the prompt or losing context entirely, conversations give you recent messages plus relevant memories in a single call.
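A rough sketch of the "recent messages plus relevant memories" idea. The word-overlap match below is a crude stand-in for semantic recall, and `build_context` is a hypothetical helper, not Memorer's API.

```python
history = [
    "user: I moved to Seattle last month",
    "assistant: Nice! How do you like it?",
    "user: What's the weather like here?",
]
memories = ["User lives in Seattle", "User prefers metric units"]

def build_context(history, memories, query, recent=2):
    """Last `recent` messages plus memories sharing a word with the query."""
    qwords = set(query.lower().split())
    relevant = [m for m in memories if qwords & set(m.lower().split())]
    return history[-recent:] + relevant

ctx = build_context(history, memories, "weather in seattle")
```

The payoff is the token budget: the prompt carries a short tail of the chat plus only the memories that matter, instead of the whole history.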
Data flow
- Ingestion — Text goes in via `remember()` or `knowledge.ingest()`
- Extraction — Entities and relationships are automatically extracted
- Storage — Data is stored in both vector (for semantic search) and graph (for reasoning) databases
- Retrieval — `recall()` combines vector similarity search with optional graph traversal
- Consolidation — Periodic cleanup merges duplicates and removes stale data
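The steps above can be sketched end to end in a toy pipeline. Everything here is an assumption for illustration: the `MemoryStore` class is hypothetical, the regex stands in for LLM-based extraction, and substring matching stands in for vector similarity search.

```python
import re

class MemoryStore:
    """Toy ingest → extract → store → retrieve pipeline (not Memorer's API)."""

    def __init__(self):
        self.texts = []    # "vector" store stand-in, searched by substring
        self.triples = []  # graph store: (subject, relation, object)

    def remember(self, text):
        # Ingestion + extraction: a hard-coded pattern plays the LLM's role.
        self.texts.append(text)
        m = re.match(r"(\w+) works at (\w+)", text)
        if m:
            self.triples.append((m.group(1), "works_at", m.group(2)))

    def recall(self, query):
        # Retrieval: naive text match standing in for semantic search.
        return [t for t in self.texts if query.lower() in t.lower()]

store = MemoryStore()
store.remember("Alice works at Acme")
store.recall("acme")  # → ["Alice works at Acme"]
```

One `remember()` call populates both stores, which is why `recall()` can later blend semantic hits with graph traversal.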