Document explicit policy on test skipping for AI/ML Testing Pipeline


    • Type: Task
    • Resolution: Unresolved
    • Priority: Unknown
    • Affects Version/s: None
    • Component/s: None
    • Python Drivers
    • Not Needed

      Summary

      When a test fails and sends a Slack notification to the ai-ml-testing pipeline channel, we need a policy on how quickly the test should be marked as "skip" and subsequently removed.

      We also need to define what must be put in place of a skipped test; see the sketch below for one possible shape.
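      As a minimal sketch of what the policy could require (this assumes the pipeline's tests use pytest; the test name and skip reason below are hypothetical, not taken from the ticket):

{code:python}
import pytest

# Hypothetical sketch: the test name, failure description, and use of pytest
# are assumptions, not part of this ticket. A skipped test keeps its body and
# carries a reason that points at a tracking ticket, so the coverage gap stays
# visible and auditable rather than silently disappearing.

@pytest.mark.skip(reason="Flaky in the ai-ml-testing pipeline; see tracking ticket")
def test_vector_search_roundtrip():
    """Body left intact so the test can be re-enabled once the failure is fixed."""
    ...
{code}

      One possible rule is that a test may stay skipped only while its tracking ticket is open, so the skip list functions as a to-do list rather than a place where coverage quietly erodes.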

      Motivation

      Who is the affected end user?

      Who are the stakeholders?

      How does this affect the end user?

      Are they blocked? Are they annoyed? Are they confused?

      How likely is it that this problem or use case will occur?

      Main path? Edge case?

      If the problem does occur, what are the consequences and how severe are they?

      Minor annoyance at a log message? Performance concern? Outage/unavailability? Failover can't complete?

      Is this issue urgent?

      Does this ticket have a required timeline? What is it?

      Is this ticket required by a downstream team?

      Needed by e.g. Atlas, Shell, Compass?

      Is this ticket only for tests?

      Does this ticket have any functional impact, or is it just test improvements?

      Cast of Characters

      Engineering Lead:
      Document Author:
      POCers:
      Product Owner:
      Program Manager:
      Stakeholders:

      Channels & Docs

      Slack Channel

      [Scope Document|some.url]

      [Technical Design Document|some.url]

            Assignee:
            Steve Silvester
            Reporter:
            Jib Adegunloye
            Votes:
            0
            Watchers:
            2
