Type: Task
Resolution: Duplicate
Priority: Unknown
Affects Version/s: None
Component/s: None
Context
Document how each driver team has implemented their AI/ML integrations. Note any commonalities in the issues seen across implementations.
Key points to capture:
- Type of MongoDB server used for testing (local/cloud)
- Type of LLM used for testing (a homemade mock embedder vs. a paid OPENAI_API_KEY); see the sketch after this list
- How often regressions are caught
- All the libraries each driver team has integrated with
- How often tests get run within the integrated repository (and outside of it)
- Whether each team has documented "implementation best practices", and where
- Findings on how requests surface
- Findings on how integrations get prioritized/triaged
- Findings on how teams deal with external PRs
- ...etc
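
For reference, a minimal sketch of what a "homemade mock embedder" might look like in a driver test suite. This is an illustrative assumption, not any team's actual implementation; the class name, dimensions, and hashing scheme are hypothetical.

```python
import hashlib
import struct


class MockEmbedder:
    """Deterministic stand-in for a paid embedding API.

    Produces stable pseudo-random float vectors derived from the input
    text, so tests never need a real OPENAI_API_KEY or network access.
    (Illustrative only; not taken from any driver team's code.)
    """

    def __init__(self, dimensions: int = 8):
        self.dimensions = dimensions

    def embed(self, text: str) -> list[float]:
        # Hash the text once per dimension, then map each digest to a
        # float in [0, 1). The same text always yields the same vector.
        vector = []
        for i in range(self.dimensions):
            digest = hashlib.sha256(f"{i}:{text}".encode()).digest()
            (value,) = struct.unpack_from(">I", digest)
            vector.append(value / 2**32)
        return vector


# Example: seed a test document with a deterministic embedding before
# exercising a vector-search query against a local test server.
embedder = MockEmbedder(dimensions=8)
doc = {"text": "hello world", "embedding": embedder.embed("hello world")}
```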
Definition of done
Documentation of investigation findings. Potential follow-ups.
Pitfalls
What should the implementer watch out for? What are the risks?
- depends on: PYTHON-4816 [Spike] Investigate AI/ML integrations across driver teams (Backlog)
- is related to: PYTHON-4816 [Spike] Investigate AI/ML integrations across driver teams (Backlog)