WiredTiger / WT-7249

Adjust storage source extension APIs

    • Type: Improvement
    • Resolution: Fixed
    • Priority: Major - P3
    • Fix Version/s: WT10.0.0, 4.9.0, 4.4.5
    • Affects Version/s: None
    • Component/s: None
    • Labels: None

      After some experience coding to the new storage source extension introduced in WT-7088, I have noticed some needed improvements:

      • A way to look up a storage source by name, like WT_CONNECTION->get_storage_source(). This is useful because there is currently no way to test a storage source implemented in a shared library directly from outside that library. This would let us build testers in C and/or Python (see the sketch after this list).
      • WT_LOCATION_HANDLE should have a close method rather than WT_STORAGE_SOURCE->location_handle_free. This is more in keeping with other handles.
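
      A minimal sketch of how a C tester might use the two proposed calls. Neither get_storage_source() nor WT_LOCATION_HANDLE->close() exists yet; their signatures below are guesses modeled on existing WiredTiger handle methods, and the names "my_cloud_store" and "bucket/prefix" are made up for illustration.

{code:c}
/*
 * Hypothetical usage of the proposed APIs; signatures are illustrative only.
 */
#include <wiredtiger.h>

static int
exercise_storage_source(WT_CONNECTION *conn, WT_SESSION *session)
{
    WT_STORAGE_SOURCE *ss;
    WT_LOCATION_HANDLE *location;
    int ret;

    /* Proposed: look up a storage source loaded from a shared library by name. */
    if ((ret = conn->get_storage_source(conn, "my_cloud_store", &ss)) != 0)
        return (ret);

    /* Open a location through the storage source (exact WT-7088 signature illustrative). */
    if ((ret = ss->location_handle(ss, session, "bucket/prefix", &location)) != 0)
        return (ret);

    /* ... create, list and read back objects here ... */

    /* Proposed: close the handle directly, replacing
     * WT_STORAGE_SOURCE->location_handle_free(). */
    return (location->close(location, session));
}
{code}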

      In addition, we should more closely define how the WT_FILE_HANDLEs produced by a storage source behave. In particular, let's define clearly what WT_FILE_HANDLE->sync() and WT_FILE_HANDLE->close() do.

      One easy approach is to say that writes to cloud stores are done all at once (in terms of when they happen in the API). A sync would then be ignored, and close would push the entire content up. This would allow "out of order" writes to be made to the file handle (would the implementation ever need to do that?). With this approach there would be a network burden on close, but nowhere else. On the other hand, it currently seems likely that we will generate files locally first and essentially always copy pre-made files to the cloud, so writes will be in order.
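
      To make that first model concrete, here is a minimal sketch of what a storage source's file handle callbacks could look like under it. CLOUD_FILE_HANDLE and cloud_upload() are hypothetical names; only the fh_write/fh_sync/close method signatures come from the WT_FILE_HANDLE interface in wiredtiger.h.

{code:c}
/* Sketch of the "upload everything at close" model; helper names are hypothetical. */
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <wiredtiger.h>

extern int cloud_upload(const char *name, const void *buf, size_t len); /* Hypothetical. */

typedef struct {
    WT_FILE_HANDLE iface; /* Must be first: WiredTiger sees this handle. */
    char *buf;            /* Whole object buffered locally until close. */
    size_t size;
} CLOUD_FILE_HANDLE;

/* Writes, in or out of order, only update the local buffer. */
static int
cloud_fh_write(
  WT_FILE_HANDLE *fh, WT_SESSION *session, wt_off_t offset, size_t len, const void *data)
{
    CLOUD_FILE_HANDLE *cfh = (CLOUD_FILE_HANDLE *)fh;
    char *p;

    (void)session;
    if ((size_t)offset + len > cfh->size) {
        if ((p = realloc(cfh->buf, (size_t)offset + len)) == NULL)
            return (ENOMEM);
        cfh->buf = p;
        cfh->size = (size_t)offset + len;
    }
    memcpy(cfh->buf + offset, data, len);
    return (0);
}

/* Nothing is durable before close, so sync has nothing to do. */
static int
cloud_fh_sync(WT_FILE_HANDLE *fh, WT_SESSION *session)
{
    (void)fh;
    (void)session;
    return (0);
}

/* Close pushes the entire content to the cloud store in a single operation. */
static int
cloud_fh_close(WT_FILE_HANDLE *fh, WT_SESSION *session)
{
    CLOUD_FILE_HANDLE *cfh = (CLOUD_FILE_HANDLE *)fh;
    int ret;

    (void)session;
    ret = cloud_upload(cfh->iface.name, cfh->buf, cfh->size);
    free(cfh->buf);
    free(cfh);
    return (ret);
}
{code}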

      If we feel certain that writes will be in order (perhaps we need an open flag to indicate this - we can certainly enforce it), then we may have some freedom to push data up as the writes progress. This is viable if the provider allows splitting up writes and doesn't require us to give the entire size of the file or the entire checksum in advance. In this model, any write could push up collected data, and perhaps sync is a hint at a good push point.
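
      Under this second model, the same callbacks might look like the sketch below: a write pushes a part once enough data has collected, sync is treated as a push hint, and close pushes the tail and finalizes. This is again a sketch with hypothetical names (STREAM_FILE_HANDLE, cloud_push_part(), cloud_finalize()), assuming strictly in-order writes and a provider that accepts multipart/chunked uploads.

{code:c}
/* Sketch of the "push as writes progress" model; helper names are hypothetical. */
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <wiredtiger.h>

#define PART_SIZE (8 * 1024 * 1024) /* Arbitrary part size for illustration. */

extern int cloud_push_part(const char *name, const void *buf, size_t len); /* Hypothetical. */
extern int cloud_finalize(const char *name);                               /* Hypothetical. */

typedef struct {
    WT_FILE_HANDLE iface;
    char part[PART_SIZE]; /* Data collected since the last push. */
    size_t part_len;
    wt_off_t next_offset; /* Used to enforce in-order writes. */
} STREAM_FILE_HANDLE;

static int
stream_push(STREAM_FILE_HANDLE *sfh)
{
    int ret = 0;

    if (sfh->part_len != 0) {
        ret = cloud_push_part(sfh->iface.name, sfh->part, sfh->part_len);
        sfh->part_len = 0;
    }
    return (ret);
}

/* Writes must be sequential; a full part triggers an upload. */
static int
stream_fh_write(
  WT_FILE_HANDLE *fh, WT_SESSION *session, wt_off_t offset, size_t len, const void *data)
{
    STREAM_FILE_HANDLE *sfh = (STREAM_FILE_HANDLE *)fh;
    const char *p = data;
    size_t avail, n;
    int ret;

    (void)session;
    if (offset != sfh->next_offset) /* Out-of-order writes are rejected. */
        return (EINVAL);
    while (len > 0) {
        avail = PART_SIZE - sfh->part_len;
        n = len < avail ? len : avail;
        memcpy(sfh->part + sfh->part_len, p, n);
        sfh->part_len += n;
        sfh->next_offset += (wt_off_t)n;
        p += n;
        len -= n;
        if (sfh->part_len == PART_SIZE && (ret = stream_push(sfh)) != 0)
            return (ret);
    }
    return (0);
}

/* Sync is only a hint at a reasonable push point, not a durability guarantee. */
static int
stream_fh_sync(WT_FILE_HANDLE *fh, WT_SESSION *session)
{
    (void)session;
    return (stream_push((STREAM_FILE_HANDLE *)fh));
}

/* Close pushes whatever remains and finalizes the upload. */
static int
stream_fh_close(WT_FILE_HANDLE *fh, WT_SESSION *session)
{
    STREAM_FILE_HANDLE *sfh = (STREAM_FILE_HANDLE *)fh;
    int ret;

    (void)session;
    if ((ret = stream_push(sfh)) == 0)
        ret = cloud_finalize(sfh->iface.name);
    free(sfh);
    return (ret);
}
{code}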

      We do have to provide some implementation of sync, as it is required by the API, but it could be a no-op and we do not need to call it; that lets the implementation decide what the best strategy is for the provider and the situation.

      The deliverable for the sync and close part of this ticket is to add to the API documentation so the caller has the right expectations.

      A final documentation fixup would be to define when an object properly "exists", so that it can be seen when listing objects, and whether open calls on a file "in progress" can succeed or not. A POSIX file system makes objects exist when they are created. Cloud providers make objects exist when they are finalized, so really at the close call. I think the documentation should define the latter behavior (and the local storage implementation should emulate it).
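
      One way the local implementation could emulate that (a sketch with illustrative names and naming scheme): write each object into a temporary file and rename it to its final name only when the handle is closed, so the object becomes listable and openable at close, just as with a cloud provider.

{code:c}
/* Sketch of emulating "object exists only at close" with local files. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <wiredtiger.h>

typedef struct {
    WT_FILE_HANDLE iface;
    int fd;           /* Descriptor for the temporary file. */
    char *tmp_name;   /* e.g. "location/.objname.writing" (illustrative). */
    char *final_name; /* e.g. "location/objname" (illustrative). */
} LOCAL_FILE_HANDLE;

/*
 * Until close, only tmp_name is on disk: a listing that skips in-progress
 * names does not report the object, and opening final_name fails with
 * ENOENT, matching the cloud-provider behavior described above.
 */
static int
local_fh_close(WT_FILE_HANDLE *fh, WT_SESSION *session)
{
    LOCAL_FILE_HANDLE *lfh = (LOCAL_FILE_HANDLE *)fh;
    int ret = 0;

    (void)session;
    if (fsync(lfh->fd) != 0 || close(lfh->fd) != 0)
        ret = errno;
    /* The rename is the moment the object starts to "exist". */
    if (ret == 0 && rename(lfh->tmp_name, lfh->final_name) != 0)
        ret = errno;
    free(lfh->tmp_name);
    free(lfh->final_name);
    free(lfh);
    return (ret);
}
{code}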

            Assignee: Donald Anderson (donald.anderson@mongodb.com)
            Reporter: Donald Anderson (donald.anderson@mongodb.com)
            Votes: 0
            Watchers: 3
