Create a test for very large table sizes


    • Type: Workload
    • Resolution: Unresolved
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: None
    • Storage Engines - Persistence
    • SE Persistence backlog

      There is some very old documentation in WiredTiger stating that a table stores offsets as a 32-bit value, which is then multiplied by the allocation size. That implies a maximum table size of 16TB (2^32 blocks at the default 4KB allocation size).
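
      For reference, a quick sketch of the arithmetic behind that limit, using WiredTiger's minimum (512B) and default (4KB) allocation sizes:

          #include <inttypes.h>
          #include <stdio.h>

          /* Maximum file size addressable if offsets were 32-bit block counts. */
          int main(void) {
              uint64_t sizes[] = {512, 4096}; /* minimum and default allocation_size */
              for (int i = 0; i < 2; i++) {
                  uint64_t max_bytes = (UINT64_C(1) << 32) * sizes[i];
                  printf("allocation_size=%" PRIu64 "B -> max table size %" PRIu64 "TB\n",
                      sizes[i], max_bytes >> 40);
              }
              return 0;
          }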

      The offset is actually stored in a 64-bit value (and always has been, as far as I can remember).

      It would be useful to add a test that ensures file sizes requiring block manager offsets greater than 32 bits are handled correctly (i.e., the offset is never stuffed into a 32-bit field).
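
      A minimal sketch of what the brute-force version of that test could look like against the public C API (the table name is illustrative, and error checking is omitted for brevity):

          #include <inttypes.h>
          #include <string.h>
          #include <wiredtiger.h>

          /* Populate a table configured with the minimum allocation size until
           * the underlying file must hand out block offsets whose block number
           * (offset / allocation_size) no longer fits in 32 bits, i.e. a file
           * larger than 2^32 * 512B = 2TB, then check it reads back cleanly. */
          #define TARGET_BYTES ((uint64_t)512 << 32) /* 2TB */

          int main(void) {
              WT_CONNECTION *conn;
              WT_SESSION *session;
              WT_CURSOR *cursor;
              char value[4096];
              uint64_t key, written;

              memset(value, 'a', sizeof(value) - 1);
              value[sizeof(value) - 1] = '\0';

              wiredtiger_open("WT_HOME", NULL, "create", &conn);
              conn->open_session(conn, NULL, NULL, &session);
              session->create(session, "table:large_offsets",
                  "key_format=Q,value_format=S,allocation_size=512B");
              session->open_cursor(session, "table:large_offsets", NULL, NULL, &cursor);

              for (key = written = 0; written < TARGET_BYTES; ++key, written += sizeof(value)) {
                  cursor->set_key(cursor, key);
                  cursor->set_value(cursor, value);
                  cursor->insert(cursor);
              }

              /* A real test would now scan the table (and run verify) to check
               * that no offset was truncated to 32 bits along the way. */
              conn->close(conn, NULL);
              return 0;
          }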

      At the very least we can tune the allocation size down to its minimum of 512B (cutting the threshold from 16TB to 2TB), but it would probably be more practical to introduce a new debug mode that uses additional space for the offset field somehow, to avoid having to provision lots of disk and wait for a giant data set to be populated.
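
      As an illustration of what that debug mode might do (purely hypothetical; no such option exists in WiredTiger today), the block manager could bias every allocated offset by a constant so that even a small file exercises the greater-than-32-bit code paths:

          #include <stdbool.h>
          #include <stdint.h>

          /* Hypothetical debug-mode helpers, not part of WiredTiger: add a fixed
           * bias to every block offset so that offsets wider than 32 bits flow
           * through the pack/unpack and checkpoint paths without needing a 2TB
           * file. The bias would be stripped again before the actual I/O. */
          #define WT_DEBUG_OFFSET_BIAS (UINT64_C(1) << 33) /* already past 2^32 */

          static inline uint64_t
          debug_bias_offset(uint64_t real_offset, bool debug_large_offsets) {
              return debug_large_offsets ? real_offset + WT_DEBUG_OFFSET_BIAS : real_offset;
          }

          static inline uint64_t
          debug_unbias_offset(uint64_t biased_offset, bool debug_large_offsets) {
              return debug_large_offsets ? biased_offset - WT_DEBUG_OFFSET_BIAS : biased_offset;
          }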

      This is becoming more relevant as users are creating tables that reach, and exceed, 16TB of content.

            Assignee:
            Unassigned
            Reporter:
            Alexander Gorrod
            Votes:
            0
            Watchers:
            1

              Created:
              Updated: