We should have some way of ensuring that all parts of the API configuration possibilities are tested. A simple manual approach would be to systematically list all API options, identify which ones are tested (or not), and then work to keep that list up to date.
A more advanced method could enforce that the coverage stays up to date: if a new API option is added without a corresponding Python test, that gets flagged during PR testing. This could be aided either by tagging test functions with some annotation:
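A minimal sketch of the annotation idea, assuming a hypothetical decorator name (`covers_api`) and illustrative option strings; a collection step could later scan tests for the attached attribute and diff against the full option list:

```python
def covers_api(*options):
    """Tag a test with the API option strings it exercises (hypothetical)."""
    def decorator(func):
        func._covered_api_options = options
        return func
    return decorator

@covers_api("config.view.width", "config.view.height")
def test_view_dimensions():
    # Real test body would exercise these options through the API.
    assert True
```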
or better, have the Python testing itself "track" which API configuration strings have been used and compare them against the list in dist/api_data.py.
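The tracking approach could be sketched as follows; the option strings and the stand-in for the full list are assumptions, since the actual set would come from dist/api_data.py:

```python
# Record each API option string as the test suite uses it, then report
# the options that were never exercised.
used_options = set()

def record(option):
    """Route API option lookups through here during testing (hypothetical hook)."""
    used_options.add(option)
    return option

# Stand-in for the full option list parsed from dist/api_data.py.
ALL_OPTIONS = {"config.view.width", "config.view.height", "config.axis.grid"}

# Simulated test run touching two options:
record("config.view.width")
record("config.axis.grid")

untested = sorted(ALL_OPTIONS - used_options)
print(untested)  # options lacking test coverage
```

A CI step could then fail the build whenever `untested` is non-empty.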