There appears to be incorrect data in cedar. See attached screenshot. This will trigger alerting for the DAG team about consistent failure to handle updates for that time series, since we assume there can be only one avg_latency_picoseconds measurement per run of the test.
As an EVG engineer,
I'd like to add checks to ensure cedar does not ingest data that fails to conform to its schema, so that broken data is rejected early rather than percolating through downstream systems and requiring cleanup.
- Add checks to prevent ingestion of performance results with duplicated measurements.
- Remove performance results with erroneous measurements like this one from the database.
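A minimal sketch of what the ingestion-time check could look like. The `Measurement` struct and function name below are illustrative assumptions, not Cedar's actual model types; the point is simply to reject any result whose measurements repeat a name within a single test run.

```go
package main

import "fmt"

// Measurement mirrors the rough shape of a perf-result measurement.
// Field names here are assumptions for illustration, not Cedar's schema.
type Measurement struct {
	Name  string
	Value float64
}

// validateNoDuplicateMeasurements returns an error if the same measurement
// name appears more than once (e.g. two avg_latency_picoseconds entries
// reported for one run of a test).
func validateNoDuplicateMeasurements(ms []Measurement) error {
	seen := make(map[string]struct{}, len(ms))
	for _, m := range ms {
		if _, ok := seen[m.Name]; ok {
			return fmt.Errorf("duplicate measurement %q in result", m.Name)
		}
		seen[m.Name] = struct{}{}
	}
	return nil
}

func main() {
	bad := []Measurement{
		{Name: "avg_latency_picoseconds", Value: 1.2e6},
		{Name: "avg_latency_picoseconds", Value: 1.3e6},
	}
	if err := validateNoDuplicateMeasurements(bad); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

Running this check before the results are written would reject the bad document at ingestion instead of letting it reach the time-series consumers.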