- Type: New Feature
- Resolution: Unresolved
- Priority: Unknown
- Affects Version/s: None
- Component/s: None
https://docs.langchain.com/oss/javascript/langgraph/memory#long-term-memory
NOTE: This experience should include Voyage auto-embedding support for the MongoDB Vector Store.
Use Case
As a... LangGraph JS developer
I want... a persisted Store API to manage data across different conversation threads and sessions,
So that... agents can retain user facts, preferences, and learned instructions (long-term memory) without resetting context between interactions.
User Experience
Developers gain a unified interface (Store) to save and retrieve JSON documents under custom namespaces (not just thread IDs). End users experience agents that remember them over time.
If bug: N/A
Dependencies
- Upstream: LangGraph State management, Embedding/Vector provider interfaces.
- Downstream: Persistent storage adapters (Postgres, Redis, etc.).
Risks/Unknowns
- Latency: "Hot path" memory updates (during generation) may slow down responses.
- Context Overload: Retrieving too many memories via search may exceed LLM token limits.
- Data Drift: JSON schemas for user profiles may become corrupted or bloated over time without strict validation.
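One possible mitigation for the context-overload risk is to cap retrieved memories by a token budget before injecting them into the prompt. A minimal sketch, assuming a hypothetical `MemoryItem` shape for search results and a crude characters-per-token heuristic (neither is part of this ticket):

```typescript
// Hypothetical shape of a memory item returned by Store.search (assumption).
interface MemoryItem {
  key: string;
  value: Record<string, unknown>;
  score: number; // similarity score, higher = more relevant
}

// Crude token estimate: ~4 characters per token. A real implementation
// would use the model's tokenizer; this heuristic is only illustrative.
function estimateTokens(item: MemoryItem): number {
  return Math.ceil(JSON.stringify(item.value).length / 4);
}

// Keep the highest-scoring memories that fit within the token budget,
// dropping anything that would push the total over the limit.
function fitToBudget(items: MemoryItem[], budget: number): MemoryItem[] {
  const sorted = [...items].sort((a, b) => b.score - a.score);
  const kept: MemoryItem[] = [];
  let used = 0;
  for (const item of sorted) {
    const cost = estimateTokens(item);
    if (used + cost > budget) continue;
    kept.push(item);
    used += cost;
  }
  return kept;
}
```

The same budget idea also bounds hot-path latency, since fewer retrieved memories means less text to serialize into the prompt.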
Acceptance Criteria
Implementation Requirements
- Implement BaseStore interface: get, put, delete, search.
- Support hierarchical namespaces (e.g., [userId, "memories"]).
- Support Semantic Search (vector similarity) and Metadata Filtering.
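The requirements above might map onto an interface like the following minimal sketch. The method signatures are assumptions modeled loosely on the Python BaseStore, not the final JS API, and semantic search is stubbed out here with metadata filtering only (no embeddings):

```typescript
type Namespace = string[]; // hierarchical, e.g. [userId, "memories"]

interface Item {
  namespace: Namespace;
  key: string;
  value: Record<string, unknown>;
}

// Sketch of the proposed BaseStore surface (signatures are assumptions).
interface BaseStore {
  get(namespace: Namespace, key: string): Promise<Item | null>;
  put(namespace: Namespace, key: string, value: Record<string, unknown>): Promise<void>;
  delete(namespace: Namespace, key: string): Promise<void>;
  search(
    namespace: Namespace,
    opts?: { filter?: Record<string, unknown>; limit?: number }
  ): Promise<Item[]>;
}

// Reference in-memory implementation: namespaces are flattened into a
// slash-joined map key, and search does a namespace-prefix scan plus
// an exact-match metadata filter. No vector similarity here.
class InMemoryStore implements BaseStore {
  private data = new Map<string, Item>();

  private fullKey(ns: Namespace, key: string): string {
    return [...ns, key].join("/");
  }

  async get(ns: Namespace, key: string): Promise<Item | null> {
    return this.data.get(this.fullKey(ns, key)) ?? null;
  }

  async put(ns: Namespace, key: string, value: Record<string, unknown>): Promise<void> {
    this.data.set(this.fullKey(ns, key), { namespace: ns, key, value });
  }

  async delete(ns: Namespace, key: string): Promise<void> {
    this.data.delete(this.fullKey(ns, key));
  }

  async search(
    ns: Namespace,
    opts: { filter?: Record<string, unknown>; limit?: number } = {}
  ): Promise<Item[]> {
    const prefix = ns.join("/") + "/";
    const hits = [...this.data.values()].filter((item) => {
      if (!this.fullKey(item.namespace, item.key).startsWith(prefix)) return false;
      for (const [k, v] of Object.entries(opts.filter ?? {})) {
        if (item.value[k] !== v) return false;
      }
      return true;
    });
    return hits.slice(0, opts.limit ?? hits.length);
  }
}
```

The prefix scan gives hierarchical-namespace semantics for free: searching `["u1"]` also matches items stored under `["u1", "memories"]`. A real backend would index the namespace columns instead of scanning.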
Testing Requirements
- Unit tests for CRUD operations on InMemoryStore.
- Mocked tests for vector search/embedding generation.
Documentation Requirements
- API documentation for Store.
- Examples for "Hot Path" vs "Background" memory updates.
Follow Up Requirements
- Implement backend-specific stores (Postgres, Redis).
- Parity check with LangGraph Python BaseStore behavior.
- is depended on by: DRIVERS-3350 [AI-Frameworks] Auto embedding in Community Vector search
- Ready for Work