- Type: Bug
- Resolution: Fixed
- Priority: Unknown
- Affects Version/s: None
- Component/s: None
- Python Drivers
- Not Needed
We've noticed features that require an LLM doing funny things. The workaround has been to explicitly pass cache=False as a kwarg, but that shouldn't be necessary.
The problem is being introduced in test_cache.py: the global LLM cache it sets persists after those tests and leaks into the rest of the suite.
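For illustration, the workaround we've been using looks roughly like this (assuming langchain's ChatOpenAI, which accepts a cache kwarg on the model constructor; the model string is just the example from above):

```python
from langchain_openai import ChatOpenAI

# Workaround: disable caching on the model itself so the leaked
# global cache installed by test_cache.py is ignored.
llm = ChatOpenAI(model="gpt-4o", cache=False)
```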
The solution that I've come up with is to add a module-scoped fixture that simply sets the cache back to None after the tests have run (see the sketch below). I've verified that all tests pass with this change, even with the workaround removed, so we can now do llm = ChatOpenAI(model="gpt-4o") without further kwargs.
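A minimal sketch of the fixture, assuming the tests set the global cache via langchain's set_llm_cache (the fixture name and the autouse flag are my own additions; autouse just means no test has to request the fixture explicitly):

```python
import pytest
from langchain_core.globals import set_llm_cache


@pytest.fixture(scope="module", autouse=True)
def clear_llm_cache():
    """Reset the global LLM cache after all tests in this module finish."""
    yield
    # Teardown: drop whatever cache test_cache.py installed so it
    # cannot leak into subsequent test modules.
    set_llm_cache(None)
```

Because the fixture is module-scoped, the teardown runs exactly once after the last test in test_cache.py, so no individual test needs to remember to clean up.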
Only the one in integration_tests has bitten us so far, but I'll add the same safeguard to the other test_cache.py as well.