Duplicate _id in the same collection without sharding/replica set configured


    • Type: Bug
    • Resolution: Duplicate
    • Priority: Blocker - P1
    • Affects Version/s: 3.0.8
    • Component/s: Storage
    • Backwards Compatibility: Fully Compatible
    • Operating System: ALL

      db.testDup.insert( { _id : { "field1":1, "field2":2 } } )
      db.testDup.insert( { _id : { "field2":2, "field1":1 } } )

      I have a compound _id that has duplicated entries, as shown below.

      /* 1 */
      {
          "_id" : {
              "field1" : 6001,
              "field2" : 6004,
              "from" : "ORIGINAL",
              "field3" : 6006,
              "field4" : 6002
          }
      }
      
      /* 2 */
      {
          "_id" : {
              "field1" : 6001,
              "from" : "ORIGINAL",
              "field2" : 6004,
              "field3" : 6006,
              "field4" : 6002
          }
      }
      
      /* 3 */
      {
          "_id" : {
              "field1" : 6001,
              "field3" : 6006,
              "from" : "ORIGINAL",
              "field2" : 6004,
              "field4" : 6002
          }
      }
      
      /* 4 */
      {
          "_id" : {
              "field1" : 6001,
              "from" : "ORIGINAL",
              "field3" : 6006,
              "field2" : 6004,
              "field4" : 6002
          }
      }
      

      The only difference between them is the order of the fields inside the _id. Is this behaviour expected?
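      For context, the documents above are consistent with BSON's order-sensitive comparison of embedded documents: two _id values with the same fields in a different order compare as unequal, so the unique _id index accepts both. A minimal sketch of this behaviour in plain JavaScript (this is an illustration, not the server's actual BSON comparison code):

      ```javascript
      // BSON compares embedded documents field by field, in stored order,
      // so insertion order of keys is significant for _id uniqueness.
      const a = { field1: 1, field2: 2 };
      const b = { field2: 2, field1: 1 };

      // Order-sensitive comparison; JSON.stringify preserves the insertion
      // order of string keys, which mimics how BSON stores the document.
      const bsonLikeEqual = (x, y) => JSON.stringify(x) === JSON.stringify(y);

      console.log(bsonLikeEqual(a, a)); // true
      console.log(bsonLikeEqual(a, b)); // false: same fields, different order
      ```

      This is why neither insert in the reproduction above raises a duplicate-key error.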

              Assignee:
              Unassigned
              Reporter:
              José Mª Pérez
              Votes:
              0
              Watchers:
              4

                Created:
                Updated:
                Resolved: