Context
We would like to add support for Voyage AI auto-embedding in our LangChain integrations as the default method for generating embeddings for MongoDB Vector Search. Ideally, passing an embedding instance becomes optional, with Voyage serving as the default embedding model. Customers can still choose their own model, but via an optional parameter. More specifically, we'd update the following interfaces:
- Memory & Semantic caching
- Hybrid Search
- Parent Document Retrieval
- Local RAG
- Graph RAG
- Natural Language Queries
[Proof of Concept PR|https://github.com/langchain-ai/langchain-mongodb/pull/204]
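The optional-embedding pattern described above might look like the following sketch. All names here (VectorSearchRetriever, VoyageAutoEmbedding) are hypothetical placeholders, not the actual langchain-mongodb interfaces; the real signatures are in the PoC PR.

```python
# Sketch of the proposed optional-embedding default (hypothetical names).
from typing import Optional, Protocol


class Embeddings(Protocol):
    def embed_query(self, text: str) -> list[float]: ...


class VoyageAutoEmbedding:
    """Stub standing in for server-side Voyage AI auto-embedding."""

    def embed_query(self, text: str) -> list[float]:
        # In the real integration, MongoDB Vector Search would generate
        # the embedding server-side; this stub only marks the default path.
        return [0.0]


class VectorSearchRetriever:
    def __init__(self, embedding: Optional[Embeddings] = None) -> None:
        # Customers may still pass their own model; otherwise we fall
        # back to Voyage auto-embedding as the default.
        self.embedding = embedding if embedding is not None else VoyageAutoEmbedding()


retriever = VectorSearchRetriever()  # no embedding passed: Voyage default
print(type(retriever.embedding).__name__)  # → VoyageAutoEmbedding
```

The same optional-parameter change would be applied across each interface listed above.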
Definition of done
This tracks adding the feature to:
- Memory & Semantic caching
- Hybrid Search
- Parent Document Retrieval
- Local RAG
- Graph RAG
- Natural Language Queries
Pitfalls
Potential issues may arise from the feature's general-availability status in the server.
- blocks: INTPYTHON-754 [LangChain] Test HybridSearch Retriever with Autoembedding (Blocked)
- is cloned by: INTPYTHON-808 [LangGraph] Add Autoembedding to core VectorStore (Blocked)
- is depended on by: INTPYTHON-754 [LangChain] Test HybridSearch Retriever with Autoembedding (Blocked)