Core Server / SERVER-13712

Reduce peak disk usage of test suites


    Details

    • Type: Task
    • Status: Closed
    • Priority: Major - P3
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.6.2, 2.7.1
    • Component/s: Testing Infrastructure
    • Labels:
      None
    • Backwards Compatibility:
      Fully Compatible
    • Backport Completed:
    • Sprint:
      Server 2.7.1
    • Linked BF Score:
      0

      Description

      We need to ensure that our test suites never consume more than approximately 10 GB of /data on our MCI buildvariants.

      Currently, MCI tasks are taking up to 33 GB of /data space. This prevents us from using ephemeral storage on some EC2 configurations.
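      (For reference, peak /data usage per task can be sampled with a rough helper like the sketch below; the path, helper name, and sampling point are illustrative assumptions, not part of the MCI tooling.)

      import os

      def data_dir_usage_gb(path="/data"):
          # Sum file sizes under `path`; roughly what `du -s` reports, in GB.
          total = 0
          for root, _, files in os.walk(path):
              for name in files:
                  try:
                      total += os.path.getsize(os.path.join(root, name))
                  except OSError:
                      pass  # file removed while walking
          return total / float(1024 ** 3)

      # Example: sample between tests and record the maximum observed.
      # peak_gb = max(peak_gb, data_dir_usage_gb())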

      Biggest offenders seen so far (see MCI-1449):

      Linux 64/Linux 64 debug:

      • max 33G: qa_repo_tests
      • max 21G: slow2
      • max 17G: noPassthroughWithMongod
      • max 16G: aggregation
      • max 14G: sharding
      • max 12G: durability
      • max 12G: sharding

      Windows

      • max 33G: qa_repo_tests
      • max 32G: replicasets (Win 32 only)
      • max 31G: sharding (Win 32 only)
      • max 28G: slow2
      • max 27G: replication (Win 32 only)
      • max 26G: tool (Win 32 only)
      • max 26G: auth (Win 32)

      The obvious offenders in sharding/replicasets/slow2 are suites that do not clean up test databases periodically. A simple fix may be to drop all databases every N tests, like smoke.py does.
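      As a minimal sketch of that approach (assuming a pymongo 2.x-era connection to the test mongod; the helper names and the drop interval are illustrative, not taken from smoke.py):

      from pymongo import MongoClient

      DROP_EVERY_N_TESTS = 20  # illustrative interval; tune per suite

      def drop_all_test_databases(client):
          # Drop everything except the built-in system databases. On mmapv1,
          # dropDatabase removes the database's data files, which is what
          # actually frees space under /data.
          for name in client.database_names():
              if name not in ("admin", "local", "config"):
                  client.drop_database(name)

      def run_suite(tests, port=27017):
          client = MongoClient("localhost", port)
          for i, test in enumerate(tests, start=1):
              test.run()
              if i % DROP_EVERY_N_TESTS == 0:
                  drop_all_test_databases(client)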

      Related: MCI-1276, MCI-1449

        Attachments

        1. dropdb.diff
          0.8 kB
          Randolph Tan
        2. patch_535f23ed3ff12251b2000002_dbroot_mb.txt
          14 kB
          Matt Kangas
        3. s_pass.diff
          0.7 kB
          Randolph Tan


    People

    • Votes: 0
    • Watchers: 6
