Review the ticket description for general accuracy and completeness
Bug - Confirm that the bug still exists
Task / Feature / Improvement - Ensure every section of the template is filled out and makes sense
Build failure - Investigate and confirm the cause of the build failure
Spec change - Check whether any more recent changes have been made to the spec that might affect the implementation requirements
What is the expected behavior?
What do the official driver or server docs currently say about this functionality?
What should they say?
If revisions or additions are needed, mark the ticket as "docs changes needed" and fill out the doc changes form
What do our API or README docs currently say about this functionality?
What should they say?
Capture any needed revisions or additions in the ticket's documentation AC
If applicable, what does the common drivers spec say? (Note: your kickoff partner should independently review the spec)
Are any clarifications or revisions needed?
If applicable, what do other drivers do?
If there is no common spec, is a common spec needed?
What should the behavior be?
Update the ticket description and implementation requirements as needed
Review and address any unknowns explicitly called out in the ticket
What will be the impact on users?
Who will be impacted?
Why might users care about this change?
Capture relevant detail in the "User Impact" section of the ticket description
What will be the impact on any downstream projects? (e.g., the shell, Mongoose)
Update follow-up requirements and create subtasks for any follow-up or coordination actions
What variables affect the feature in question? (An illustrative matrix sketch follows this list.)
Server versions
Deployment types
Auth settings
Server and client configuration options
Specific APIs / API options
Runtime or bundler settings
Special sequences of operations
Any other special conditions
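As a minimal sketch of how the identified variables can be enumerated into a concrete test matrix (TypeScript; every dimension and value below is purely illustrative, not taken from any particular ticket):

```typescript
// Hypothetical dimensions for a kickoff test matrix; values are placeholders.
const serverVersions = ['6.0', '7.0', '8.0'] as const;
const deployments = ['standalone', 'replicaSet', 'sharded'] as const;
const authModes = ['none', 'scram'] as const;

type Combination = {
  serverVersion: (typeof serverVersions)[number];
  deployment: (typeof deployments)[number];
  auth: (typeof authModes)[number];
};

// Cartesian product of the dimensions; from here, prune to the happy-path
// and error-case combinations that are actually worth automating.
const combinations: Combination[] = serverVersions.flatMap(serverVersion =>
  deployments.flatMap(deployment =>
    authModes.map(auth => ({ serverVersion, deployment, auth }))
  )
);
```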
How should all the identified variables be tested?
Identify happy-path and error-case combinations of variables
Given [variables], when [action is performed], [feature] should [behave in the expected way]
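For example (purely illustrative specifics): given a replica set deployment on a 7.0 server with SCRAM authentication, when the operation described in the ticket is run with the new option enabled, the feature should apply the option and complete successfully; given a server version that does not support the option, the same operation should fail with a clear, descriptive error.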
How will we achieve the necessary coverage for these cases?
Automated spec tests?
Are there test runner changes required?
How up to date are our current tests and runners?
New integration or prose tests? (An integration test sketch follows this list.)
Unit tests?
Will we need to modify any existing tests?
Is there technical debt that will affect the implementation of new or existing tests?
Do we have the necessary tooling infrastructure already in place for any new tests?
Update test requirements on the ticket to reflect reality
Create subtasks for any testing groundwork that can happen independently of the implementation
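As one possible shape for such coverage (a hedged sketch only, assuming the Node.js driver with a Mocha-style runner and Chai assertions; the connection string, database, and collection names are placeholders), a single happy-path combination from the matrix above might become an integration test along these lines:

```typescript
import { MongoClient } from 'mongodb';
import { expect } from 'chai';

// Illustrative only: one "given / when / then" combination from the variable
// matrix, expressed as a Mocha-style integration test. Names are placeholders.
describe('kickoff feature (replica set, SCRAM auth)', () => {
  let client: MongoClient;

  before(async () => {
    // "Given": connect to the deployment described by the combination.
    client = new MongoClient(process.env.MONGODB_URI ?? 'mongodb://localhost:27017');
    await client.connect();
  });

  after(async () => {
    await client.close();
  });

  it('behaves as the ticket AC describes on the happy path', async () => {
    const collection = client.db('kickoffTests').collection('example');

    // "When": perform the action under test.
    await collection.insertOne({ _id: 1, value: 'expected' });

    // "Then": assert the behavior the ticket's AC describes.
    const doc = await collection.findOne({ _id: 1 });
    expect(doc).to.have.property('value', 'expected');
  });
});
```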
What is the scope of the code changes?
List the code bases and the areas of each code base that will need changes
Is there technical debt in any of these areas that will affect the implementation?
Identify any existing adjacent functionality that could be impacted by these changes
Is there sufficient existing test coverage for the adjacent functionality?
Update ticket test AC and create subtask(s) to cover existing functionality if coverage is missing
If multiple libraries are affected, determine the order in which changes need to go in
Create subtasks for the implementation (at least one per affected codebase)
What is the expected impact on performance?
Do we have existing performance coverage for the affected areas?
Do we need to add new coverage? (A micro-benchmark sketch follows this list.)
Update ticket test AC and create subtask(s) as needed
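If new coverage is needed and no harness exists yet, a rough starting point might look like the following (a sketch only; the operation, names, and iteration count are placeholders and this is not an established benchmark harness):

```typescript
import { performance } from 'node:perf_hooks';
import { MongoClient } from 'mongodb';

// Illustrative micro-benchmark: times the affected operation so a regression
// introduced by the change would be visible during review.
async function timeAffectedOperation(iterations = 1000): Promise<number> {
  const client = new MongoClient(process.env.MONGODB_URI ?? 'mongodb://localhost:27017');
  await client.connect();
  const collection = client.db('perf').collection('example');

  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    await collection.findOne({ _id: i }); // stand-in for the operation the ticket touches
  }
  const elapsed = performance.now() - start;

  await client.close();
  return elapsed / iterations; // average milliseconds per operation
}

timeAffectedOperation().then(ms => console.log(`avg ${ms.toFixed(3)} ms/op`));
```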
Consider backport requirements
Should this be backported?
What would be the cost of a backport?
Is the metadata of this ticket accurate and complete?
Double-check the acceptance criteria to ensure they accurately capture the expected behavior, test, and follow-up requirements
Double-check the documentation requirements
Double-check the task breakdown to ensure it covers all actionable items in the ticket AC