LangChain
Add persistent memory to your LangChain applications.
Setup
```bash
pip install memorer langchain langchain-openai
```

Usage Pattern
```python
from memorer import Memorer
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

# Initialize
memorer = Memorer(api_key="mem_sk_...")
user = memorer.for_user("user-123")
llm = ChatOpenAI(model="gpt-4o")

def chat_with_memory(user_message: str) -> str:
    # 1. Recall relevant memories
    memories = user.recall(user_message)

    # 2. Build messages with memory context
    system_prompt = "You are a helpful assistant."
    if memories.context:
        system_prompt += f"\n\nWhat you know about this user:\n{memories.context}"

    messages = [
        SystemMessage(content=system_prompt),
        HumanMessage(content=user_message),
    ]

    # 3. Call the LLM
    response = llm.invoke(messages)
    assistant_message = response.content

    # 4. Remember the exchange
    user.remember(f"User: {user_message}\nAssistant: {assistant_message}")

    return assistant_message
```

With Conversations
For multi-turn sessions with LangChain:
```python
conv = user.conversation()

def chat_with_conversation(user_message: str) -> str:
    # Add the user message to the conversation
    conv.add("user", user_message)

    # Recall with conversation context + long-term memory
    result = conv.recall(user_message)

    # Build messages
    messages = [
        SystemMessage(content=f"You are a helpful assistant.\n\n{result.context}"),
        HumanMessage(content=user_message),
    ]

    # Call the LLM
    response = llm.invoke(messages)
    assistant_message = response.content

    # Add the assistant response
    conv.add("assistant", assistant_message)

    return assistant_message
```

With LangChain Chains
You can also use Memorer inside a LangChain chain by recalling context in a custom step:
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda

def recall_memories(input_dict):
    """Recall and inject memories into the prompt."""
    query = input_dict["input"]
    memories = user.recall(query)
    return {
        **input_dict,
        "memory_context": memories.context,
    }

# Example prompt template; adapt it to your application.
# {memory_context} is filled by recall_memories, {input} by the caller.
prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant.\n\nWhat you know about this user:\n{memory_context}"),
    ("human", "{input}"),
])

# Build a chain with memory recall
chain = (
    RunnableLambda(recall_memories)
    | prompt_template
    | llm
)
```

The chain can then be invoked with `chain.invoke({"input": ...})`.

How it works
- `user.recall()` searches long-term memory for relevant context
- The memory context is injected into the system message or prompt template
- The LLM generates a response with the enriched context
- `user.remember()` stores the exchange for future recall
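This loop can be sketched without any external services. `FakeMemory` and `fake_llm` below are hypothetical stand-ins for Memorer and the LLM (keyword overlap in place of semantic search), just to show how recalled context changes the prompt across turns:

```python
# Minimal sketch of the recall -> inject -> remember loop.
# FakeMemory and fake_llm are illustrative stand-ins, not part of Memorer.

class FakeMemory:
    def __init__(self):
        self.entries = []

    def recall(self, query):
        # Naive keyword overlap instead of semantic search
        words = set(query.lower().split())
        return [e for e in self.entries if words & set(e.lower().split())]

    def remember(self, text):
        self.entries.append(text)

def fake_llm(messages):
    # Pretend LLM: answers differently when memory context is present
    system_prompt = messages[0]
    if "know about this user" in system_prompt:
        return "I remember you!"
    return "Nice to meet you."

def chat(memory, user_message):
    # 1. Recall, 2. inject into the system prompt, 3. call the model,
    # 4. remember the exchange
    memories = memory.recall(user_message)
    system_prompt = "You are a helpful assistant."
    if memories:
        system_prompt += "\n\nWhat you know about this user:\n" + "\n".join(memories)
    reply = fake_llm([system_prompt, user_message])
    memory.remember(f"User: {user_message}\nAssistant: {reply}")
    return reply

memory = FakeMemory()
print(chat(memory, "I enjoy hiking in Norway"))   # no memories yet
print(chat(memory, "Recommend a hiking trip"))    # recalls the first turn
```

The second turn matches the stored first exchange, so its system prompt carries the memory context; with Memorer, semantic search replaces the keyword match.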