[Spike] Investigate AI/ML integrations across driver teams


    • Type: Task
    • Resolution: Unresolved
    • Priority: Unknown
    • Affects Version/s: None
    • Component/s: None
    • Python Drivers

      Context

      Document how each driver team has implemented its AI/ML integrations, and note any commonalities in the issues that arise across implementations.

      Key points to capture:

      • Type of MongoDB server used for testing (local/cloud)
      • Type of LLM/embedding model used for testing (homemade mock embedder vs. a paid OPENAI_API_KEY; a sketch of the mock approach follows this list)
      • How often regressions are caught
      • Which libraries each driver team has integrated with
      • How often tests run within the integrated repository (and outside of it)
      • Whether (and where) each team documents "implementation best practices"
      • Findings on how integration requests surface
      • Findings on how integrations get prioritized/triaged
      • Findings on how teams handle external PRs
      • etc.
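
      For reference on the mock-vs-paid distinction above, here is a minimal sketch of a homemade mock embedder in Python. The class name, dimension count, and hashing scheme are illustrative assumptions rather than any team's actual implementation; the point is that a deterministic, normalized vector per input string is enough to exercise an integration end to end without an OPENAI_API_KEY.

      import hashlib
      import math

      class MockEmbedder:
          """Deterministic stand-in for a paid embedding API (illustrative).

          The same input text always maps to the same vector, so tests can
          assert on results without an OPENAI_API_KEY or network access.
          """

          def __init__(self, dimensions: int = 8) -> None:
              self.dimensions = dimensions

          def embed(self, text: str) -> list[float]:
              # One byte of the SHA-256 digest per dimension, scaled into [-1, 1].
              digest = hashlib.sha256(text.encode("utf-8")).digest()
              raw = [(digest[i % len(digest)] / 127.5) - 1.0 for i in range(self.dimensions)]
              # L2-normalize, since real embedding APIs commonly return unit vectors.
              norm = math.sqrt(sum(v * v for v in raw)) or 1.0
              return [v / norm for v in raw]

      embedder = MockEmbedder(dimensions=4)
      assert embedder.embed("hello") == embedder.embed("hello")  # deterministic across runs

      Comparing a mock like this against each team's real embedder setup should help clarify which classes of regressions a mock can and cannot catch.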

      Definition of done

      Investigation findings are documented, along with any potential follow-ups.

      Pitfalls

      What should the implementer watch out for? What are the risks?

            Assignee:
            Unassigned
            Reporter:
            Casey Clements
            Votes:
            0
            Watchers:
            1
