Core Server / SERVER-7487

Support overwriting existing documents in bulk insert operation

    • Type: Improvement
    • Resolution: Won't Fix
    • Priority: Major - P3
    • Fix Version/s: None
    • Affects Version/s: None
    • Component/s: Write Ops
    • Labels: None

      I have a relatively simple use case, IMHO.

      The system receives events that eventually end up as objects in a Mongo collection.

      The system has an "event cache", which accumulates data for a single event over some period of time. The event cache uses a primary key scheme to identify updates to the same event, and this primary key is then used as the document ID in Mongo.

      Once the event cache deems a number of events to be "completed", those events are flushed out to Mongo and removed from the cache. This "flushing" is done with a bulk insert, roughly as sketched below.
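
      (A minimal sketch of that flush with the Java sync driver, for illustration only; the CachedEvent record and the collection handle are placeholders, not the actual code.)

      {code:java}
      import com.mongodb.client.MongoCollection;
      import org.bson.Document;

      import java.util.List;
      import java.util.Map;
      import java.util.stream.Collectors;

      // Placeholder for whatever the event cache holds: the cache's primary key
      // plus the event's field values.
      record CachedEvent(Object primaryKey, Map<String, Object> payload) {}

      class EventFlusher {
          // The cache's primary key is reused as the Mongo _id, and the completed
          // batch goes out in a single bulk insert.
          static void flush(MongoCollection<Document> events, List<CachedEvent> completed) {
              List<Document> docs = completed.stream()
                      .map(e -> new Document(e.payload()).append("_id", e.primaryKey()))
                      .collect(Collectors.toList());
              events.insertMany(docs);   // fails if any _id already exists in the collection
          }
      }
      {code}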

      Once in a blue moon, however, there is a problem and the cache cannot purge the events that have already been flushed out. As a result, when the next flush occurs, the bulk insert fails because of a document ID collision.

      Now, I've set "continueOnError" to true, hoping this will prevent exceptions in the bulk inserts (I'm using the Java driver). However, in case of a collision I would prefer the incoming documents to overwrite the existing ones, rather than keeping the ones already in the collection.
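
      With a current Java driver, the closest equivalent is an unordered insert (the same behaviour continueOnError gives the legacy API): the driver keeps inserting past the duplicate key errors, but the colliding documents are reported as errors and the documents already in the collection are left untouched. A sketch, with the collection and batch as placeholders:

      {code:java}
      import com.mongodb.MongoBulkWriteException;
      import com.mongodb.client.MongoCollection;
      import com.mongodb.client.model.InsertManyOptions;
      import org.bson.Document;

      import java.util.List;

      class ContinueOnErrorFlush {
          // ordered(false) keeps inserting after a duplicate-key error, but the
          // existing documents "win": the colliding ones are not overwritten.
          static void flush(MongoCollection<Document> events, List<Document> docs) {
              try {
                  events.insertMany(docs, new InsertManyOptions().ordered(false));
              } catch (MongoBulkWriteException e) {
                  // e.getWriteErrors() lists the _id collisions; everything else was inserted.
              }
          }
      }
      {code}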

      Would it not be reasonable to add a feature to overwrite existing documents during a bulk insert? This is not an upsert, since the document is completely overwritten rather than updated.
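
      For what it's worth, the closest workaround available today seems to be a bulk write of full-document replacements with upsert enabled, keyed on _id: each replacement overwrites the whole document on a collision (no update operators involved) and inserts it otherwise, though it is a different code path from a plain bulk insert. A sketch against the current Java driver, with placeholder names:

      {code:java}
      import com.mongodb.client.MongoCollection;
      import com.mongodb.client.model.BulkWriteOptions;
      import com.mongodb.client.model.Filters;
      import com.mongodb.client.model.ReplaceOneModel;
      import com.mongodb.client.model.ReplaceOptions;
      import com.mongodb.client.model.WriteModel;
      import org.bson.Document;

      import java.util.List;
      import java.util.stream.Collectors;

      class OverwritingFlush {
          // Each document becomes a full replacement keyed on its _id; upsert(true)
          // inserts it when it is new and overwrites it wholesale when it collides.
          static void flush(MongoCollection<Document> events, List<Document> docs) {
              List<WriteModel<Document>> ops = docs.stream()
                      .map(d -> (WriteModel<Document>) new ReplaceOneModel<>(
                              Filters.eq("_id", d.get("_id")), d,
                              new ReplaceOptions().upsert(true)))
                      .collect(Collectors.toList());
              events.bulkWrite(ops, new BulkWriteOptions().ordered(false));
          }
      }
      {code}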

            Assignee:
            Unassigned
            Reporter:
            Pawel
            Votes:
            0
            Watchers:
            3

              Created:
              Updated:
              Resolved: