Investigate using random seed when testing BaseChatOpenAI


    • Python Drivers
    • Not Needed

      Context

      Describe the background behind the problem.

      In addition to the "temperature" setting, it has come to my attention that the ChatOpenAI class (as well as the Azure one, also derived from `BaseChatOpenAI`) accepts a `seed` argument!

      It is my hope that we can use this to reduce the flakiness of our tests.
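      For reference, a minimal sketch of how the argument is passed (the model name and prompt are illustrative assumptions): `seed` is accepted directly by the `ChatOpenAI` constructor and forwarded to the OpenAI API, which treats it as a best-effort determinism hint rather than a guarantee.

      ```python
      from langchain_openai import ChatOpenAI

      # Sketch only: `seed` and `temperature` are constructor arguments on
      # ChatOpenAI (via BaseChatOpenAI) and are passed through to the API.
      llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.0, seed=0)

      response = llm.invoke("Reply with the single word: pong")
      print(response.content)
      ```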

      Definition of done

      What must be done to consider the task complete?

      Investigate whether passing `seed=0` wherever we create an LLM in our tests leads to consistent output (a sketch follows the list below). If it does, apply the change throughout the tests used in the ai-ml testing pipelines.

      Primary focus:

      • LangChain
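      A sketch of what the change could look like in practice, assuming the tests construct their models through a shared pytest fixture (the fixture name, model, and test are hypothetical; the real tests may build LLMs inline):

      ```python
      import pytest
      from langchain_openai import ChatOpenAI

      @pytest.fixture
      def llm() -> ChatOpenAI:
          # seed=0 asks the backend for best-effort reproducible sampling;
          # temperature=0.0 further reduces variance. Neither guarantees
          # byte-identical output across runs or model versions, so
          # assertions should still tolerate minor differences.
          return ChatOpenAI(model="gpt-4o-mini", temperature=0.0, seed=0)

      def test_llm_is_consistent(llm: ChatOpenAI) -> None:
          # Hypothetical check: with a fixed seed, the same prompt should
          # usually produce the same text on repeated calls.
          first = llm.invoke("Reply with the single word: pong").content
          second = llm.invoke("Reply with the single word: pong").content
          assert first == second
      ```

      Note that OpenAI documents `seed` as best-effort only, so some residual flakiness may remain even after this change.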

      Pitfalls

      What should the implementer watch out for? What are the risks?

        Assignee:
        Steve Silvester
        Reporter:
        Casey Clements
        Votes:
        0
        Watchers:
        2
