Make lint_markdown_links.py more robust


    • Type: Improvement
    • Resolution: Fixed
    • Priority: Minor - P4
    • Fix Version/s: 9.0.0-rc0
    • Affects Version/s: None
    • Component/s: None
    • DevProd Test Infrastructure
    • Fully Compatible
    • Correctness 2026-04-21

      Latent bug (exists now)

      In parse_links, the inner loop for i, char in enumerate(line) used i as the backtick-position index. Once the outer for-loop was refactored into a while i < ... loop (also in this commit), that inner i would silently clobber the outer loop counter, causing lines to be skipped or re-processed. Fixed by renaming the inner variable to j. Fixing this surfaced new and pre-existing instances of unpinned or broken links, which were repaired with AI assistance.
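A minimal sketch of the shadowing hazard, using a hypothetical structure rather than the linter's actual code: a manual while-loop index named i is reused by an inner for-loop, so the line counter resumes from wherever the inner enumeration left off.

```python
def scan_buggy(lines):
    """The inner `for i, ...` clobbers the outer while-loop counter `i`."""
    visited = []
    i = 0
    while i < len(lines):
        visited.append(i)
        for i, char in enumerate(lines[i]):  # BUG: reuses `i`
            pass
        i += 1  # resumes from the inner loop's last value, not the line index
    return visited


def scan_fixed(lines):
    """Renaming the inner variable to `j` preserves the line index."""
    visited = []
    i = 0
    while i < len(lines):
        visited.append(i)
        for j, char in enumerate(lines[i]):
            pass
        i += 1
    return visited
```

With input ["abc", "x"], the buggy version visits only line 0 (the inner loop leaves i at 2, so i += 1 skips past the end), while the fixed version visits lines 0 and 1.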

      Future bug (triggered by Prettier reformatting)

      Prettier (CommonMark-compliant) can emit reference definitions split across two lines:

      [label]:
        https://...

      instead of the single-line form [label]: https://.... The old code only matched the single-line form via REF_DEF_RE, so after a mass reformat:

      1. collect_reference_definitions would fail to register the URL, causing every use of [text][label] referencing that definition to be reported as a broken link (false positive).
      2. parse_links would not recognize the label-only line as a definition and might try to extract links from it, and would not consume the following indented URL line.

      The fix adds REF_DEF_LABEL_ONLY_RE to match the label-only line, reads the URL from the next line in both functions, and advances the index past the URL line in parse_links so it isn't processed again.
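The two-line handling can be sketched as follows. REF_DEF_RE and REF_DEF_LABEL_ONLY_RE are named in the ticket, but the patterns shown here are assumptions, not the linter's actual regexes:

```python
import re

# Single-line form: "[label]: https://example.com" (illustrative pattern).
REF_DEF_RE = re.compile(r"^\s*\[([^\]]+)\]:\s+(\S+)\s*$")
# Label-only form emitted by Prettier: "[label]:" with the URL on the next line.
REF_DEF_LABEL_ONLY_RE = re.compile(r"^\s*\[([^\]]+)\]:\s*$")


def collect_reference_definitions(lines):
    """Map labels to URLs, handling both one- and two-line definitions."""
    defs = {}
    i = 0
    while i < len(lines):
        m = REF_DEF_RE.match(lines[i])
        if m:
            defs[m.group(1).lower()] = m.group(2)
        else:
            m = REF_DEF_LABEL_ONLY_RE.match(lines[i])
            # The URL is the sole content of the following indented line.
            if m and i + 1 < len(lines) and lines[i + 1].strip():
                defs[m.group(1).lower()] = lines[i + 1].strip()
                i += 1  # consume the URL line so it isn't scanned again
        i += 1
    return defs
```

Advancing i past the URL line mirrors what parse_links must do so the bare URL is never scanned for links on its own.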

            Assignee:
            Steve McClure
            Reporter:
            Steve McClure
            Votes:
            0
            Watchers:
            2
