Search for related papers, the papers they cite, the papers that cite them, etc.
Read or skim selected papers and summarize them well enough for our paper's Related Work section: how they differ from our approach, why we can't use their approach, and how we build on them.
Some important titles:
- Model-Based Trace Checking
- Trace Analysis for Conformance and Arbitration Testing
- A General Approach to Trace-Checking in Distributed Computing Systems
- Verifying Software Traces Against a Formal Specification with TLA+ and TLC (self-published by Ron Pressler)
- Concurrent development of model and implementation
- A model-based trace testing approach for validation of formal co-simulation models
Things we're looking for:
- Techniques to verify that an implementation matches a formal model. In "How Amazon Web Services Uses Formal Methods" the authors wrote, 'On learning about TLA+, engineers usually ask, “How do we know that the executable code correctly implements the verified design?” The answer is we do not know.' On the other hand, the "Model-Based Trace Checking" paper addresses this question, but not for TLA+, not for industrial software like MongoDB, and not deployed in CI. (A rough sketch of the trace-checking idea follows this list.)
- Any estimates of the likelihood of mismatches due to "transcription bugs" in either direction: when implementing a spec or modeling an existing implementation. Any estimates of the cost of these bugs.
- Techniques to measure how much of the formal spec's state space is covered by implementation tests.
- Techniques to increase this coverage.
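
For reference, here is a minimal, purely illustrative sketch of the trace-checking idea from the first item, in Python. The papers above do this with TLA+ and TLC; this stand-in only shows the shape of the technique: the implementation logs one line per state change, and an offline checker replays the log against a transition relation derived from the spec, also counting how many distinct spec states the trace visits (relevant to the coverage items). The toy lock spec, the `trace.jsonl` file name, and the field names are all invented for illustration.

```python
# Sketch only: trace checking against a hand-coded "spec" transition relation.
# Real trace checking in the papers above validates logs with TLA+/TLC.
import json
from typing import Iterable

# Toy "spec": a single lock that may be acquired by t1 or t2, then released.
INITIAL = {"locked": False, "holder": None}

def next_states(state: dict) -> Iterable[dict]:
    """Enumerate every state the spec allows after `state` (the transition relation)."""
    if not state["locked"]:
        # Any thread may acquire the free lock.
        for t in ("t1", "t2"):
            yield {"locked": True, "holder": t}
    else:
        # The lock may only be released.
        yield {"locked": False, "holder": None}

def check_trace(states: list[dict]) -> None:
    """Raise if the logged trace is not a behavior of the spec; report state coverage."""
    assert states[0] == INITIAL, f"bad initial state: {states[0]}"
    seen = {json.dumps(INITIAL, sort_keys=True)}
    for prev, cur in zip(states, states[1:]):
        if cur not in list(next_states(prev)):
            raise AssertionError(f"illegal transition {prev} -> {cur}")
        seen.add(json.dumps(cur, sort_keys=True))
    print(f"trace OK; {len(seen)} distinct spec states covered")

if __name__ == "__main__":
    # In practice trace.jsonl would be written by instrumentation in the implementation,
    # one JSON object per observed state change.
    with open("trace.jsonl") as f:
        check_trace([json.loads(line) for line in f])
```

Running this in CI after an instrumented test run would give a pass/fail signal plus a crude coverage count; the open questions in the list above are how the published approaches do this at scale and for TLA+ specifically.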