- Type: Build Failure
- Resolution: Fixed
- Priority: Unknown
- Affects Version/s: None
- Component/s: None
- Python Drivers
- Not Needed
We appear to have a regression causing failures in tests/unit_tests/test_cache.py.
This was likely caused by a downstream update that we failed to catch in a PR. As part of this ticket, we should audit the GitHub workflow filtering that controls when the tests are run.
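As a starting point for that audit, the sketch below approximates a workflow's `paths` filter with Python's `fnmatch` to check which changed files would trigger the test job. The `WORKFLOW_PATHS` patterns are assumptions for illustration rather than the repository's actual filter, and GitHub's glob semantics (notably for `**`) differ slightly from `fnmatch`, so this is a rough sanity check, not an exact reimplementation.

```python
# Hypothetical audit helper: approximate a GitHub Actions `paths` filter.
# The patterns below are assumed for illustration; GitHub's glob semantics
# differ slightly from fnmatch, so treat this as a rough sanity check.
from fnmatch import fnmatch

WORKFLOW_PATHS = ["libs/**/*.py", "tests/**"]  # assumed filter patterns

def would_trigger(changed_file: str, patterns: list[str] = WORKFLOW_PATHS) -> bool:
    """Return True if a change to `changed_file` would run the workflow."""
    return any(fnmatch(changed_file, pattern) for pattern in patterns)

# A dependency bump that only touches a lockfile would not match the
# patterns above, which is the kind of gap the audit should look for.
print(would_trigger("tests/unit_tests/test_cache.py"))  # True
print(would_trigger("poetry.lock"))                     # False
```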
The failures are of the form:
> assert output == expected_output  # type: ignore
> E   AssertionError: assert LLMResult(gen...e='LLMResult') == LLMResult(gen...e='LLMResult')
> E
> E   Full diff:
> E   - LLMResult(generations=[[ChatGeneration(text='foo', message=AIMessage(content='foo', additional_kwargs={}, response_metadata={}))]], llm_output={}, run=None, type='LLMResult')
> E   ?                                                                                                                                                        ^^^^^
> E   + LLMResult(generations=[[ChatGeneration(text='foo', message=AIMessage(content='foo', additional_kwargs={}, response_metadata={}, usage_metadata={'total_cost': 0}))]], llm_output={}, run=[RunInfo(run_id=UUID('8606de79-6065-47f8-8dfc-93a0b3cd5de9'))], type='LLMResult')
> E   ?                                                                                                                               ++++++++++++++++++++++++++++++++++                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
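Both changes in the diff are additive metadata rather than changes to the generated text: the message gains `usage_metadata` and the result gains a `RunInfo` with a run id. If the unit test should tolerate additive fields like these, one option is to compare normalized dicts with the volatile keys stripped before asserting. The sketch below illustrates that approach; the helper name `strip_volatile`, the key list, and the use of pydantic's `model_dump()` are assumptions for illustration, not the actual fix shipped for this ticket.

```python
# Minimal sketch: compare serialized results while ignoring volatile,
# additive fields such as run ids and usage metadata. `strip_volatile`
# and `VOLATILE_KEYS` are hypothetical names used only for illustration.
VOLATILE_KEYS = frozenset({"run", "usage_metadata"})

def strip_volatile(value, keys=VOLATILE_KEYS):
    """Recursively drop volatile keys from nested dicts and lists."""
    if isinstance(value, dict):
        return {k: strip_volatile(v, keys) for k, v in value.items() if k not in keys}
    if isinstance(value, list):
        return [strip_volatile(v, keys) for v in value]
    return value

# In the test, assuming the results are pydantic models (hence .model_dump()):
# assert strip_volatile(output.model_dump()) == strip_volatile(expected_output.model_dump())
```

Alternatively, the expected fixture could simply be regenerated to include the new fields; which choice is right depends on whether the test is meant to pin the upstream serialization exactly.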