updateOne + upsert without filtering on the shard key fails when the collection is only on one shard and the filter includes _id


    • Type: Bug
    • Resolution: Duplicate
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: None
    • Cluster Scalability
    • ALL

       

      Adding @linda.qin's repro: 

      [direct: mongos] test> db.coll1.find()
      [ { _id: 1 } ]
      [direct: mongos] test> sh.status()
      shardingVersion
      { _id: 1, clusterId: ObjectId('683f872d28620650923ed0c1') }
      ---
      shards
      [
        {
          _id: 'shard01',
          host: 'shard01/localhost:30001',
          state: 1,
          topologyTime: Timestamp({ t: 1748993840, i: 12 }),
          replSetConfigVersion: Long('-1')
        },
        {
          _id: 'shard02',
          host: 'shard02/localhost:30002',
          state: 1,
          topologyTime: Timestamp({ t: 1748993840, i: 31 }),
          replSetConfigVersion: Long('-1')
        },
        {
          _id: 'shard03',
          host: 'shard03/localhost:30003',
          state: 1,
          topologyTime: Timestamp({ t: 1748993840, i: 55 }),
          replSetConfigVersion: Long('-1')
        }
      ]
      ---
      active mongoses
      [ { '8.0.5': 1 } ]
      ---
      autosplit
      { 'Currently enabled': 'yes' }
      ---
      balancer
      {
        'Currently running': 'no',
        'Currently enabled': 'yes',
        'Failed balancer rounds in last 5 attempts': 0,
        'Migration Results for the last 24 hours': { '1': 'Success' }
      }
      ---
      shardedDataDistribution
      [
        {
          ns: 'test.coll1',
          shards: [
            {
              shardName: 'shard01',
              numOrphanedDocs: 0,
              numOwnedDocuments: 1,
              ownedSizeBytes: 14,
              orphanedSizeBytes: 0
            },
            {
              shardName: 'shard02',
              numOrphanedDocs: 0,
              numOwnedDocuments: 0,
              ownedSizeBytes: 0,
              orphanedSizeBytes: 0
            }
          ]
        }
      ]
      ---
      databases
      [
        {
          database: { _id: 'config', primary: 'config', partitioned: true },
          collections: {}
        },
        {
          database: {
            _id: 'test',
            primary: 'shard01',
            version: {
              uuid: UUID('15b94f0d-c704-465b-ace5-8f0c42c44747'),
              timestamp: Timestamp({ t: 1748993856, i: 2 }),
              lastMod: 1
            }
          },
          collections: {
            'test.coll1': {
              shardKey: { a: 1, b: 1 },
              unique: false,
              balancing: true,
              chunkMetadata: [
                { shard: 'shard01', nChunks: 1 },
                { shard: 'shard02', nChunks: 1 }
              ],
              chunks: [
                { min: { a: MinKey(), b: MinKey() }, max: { a: 100, b: 100 }, 'on shard': 'shard01', 'last modified': Timestamp({ t: 2, i: 1 }) },
                { min: { a: 100, b: 100 }, max: { a: MaxKey(), b: MaxKey() }, 'on shard': 'shard02', 'last modified': Timestamp({ t: 2, i: 0 }) }
              ],
              tags: []
            }
          }
        }
      ]
      [direct: mongos] test> db.coll1.updateOne({_id:1}, {$set:{a:100, b:100}},{upsert:true})
      {
        acknowledged: true,
        insertedId: null,
        matchedCount: 1,
        modifiedCount: 1,
        upsertedCount: 0
      }
      [direct: mongos] test> db.coll1.find()
      [ { _id: 1, a: 100, b: 100 } ]
      [direct: mongos] test> db.coll1.updateOne({_id:1}, {$set:{a:10, b:10}},{upsert:true})
      {
        acknowledged: true,
        insertedId: null,
        matchedCount: 1,
        modifiedCount: 1,
        upsertedCount: 0
      } 
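
      For context, the setup commands are not included in the repro above; a sequence roughly like the following (reconstructed from the sh.status() output, with the split point and chunk placement as assumptions) would produce the starting state:

      // hypothetical setup sketch: insert the document before sharding so it ends up
      // with no shard key fields, then shard on { a: 1, b: 1 }, split at { a: 100, b: 100 },
      // and place one chunk on each of two shards
      use test
      db.coll1.insertOne({ _id: 1 })
      db.coll1.createIndex({ a: 1, b: 1 })
      sh.enableSharding("test")
      sh.shardCollection("test.coll1", { a: 1, b: 1 })
      sh.splitAt("test.coll1", { a: 100, b: 100 })
      db.adminCommand({ moveChunk: "test.coll1", find: { a: 100, b: 100 }, to: "shard02" })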

      If I move the chunk back to shard01, so only shard01 has the chunks, the update fails:
      [direct: mongos] test> db.adminCommand({moveChunk:"test.coll1", find:{a:100, b:100}, to:"shard01", _waitForDelete:true})
      {
        millis: 258,
        ok: 1,
        '$clusterTime': {
          clusterTime: Timestamp({ t: 1748995234, i: 30 }),
          signature: { hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0), keyId: Long('0') }
        },
        operationTime: Timestamp({ t: 1748995234, i: 30 })
      }
      [direct: mongos] test> db.coll1.find()
      [ { _id: 1, a: 10, b: 10 } ]
      [direct: mongos] test> db.coll1.updateOne({_id:1}, {$set:{a:10, b:20}},{upsert:true})
      MongoServerError: Shard key update is not allowed without specifying the full shard key in the query

      sh.status()
      collections: {
        'test.coll1': {
          shardKey: { a: 1, b: 1 },
          unique: false,
          balancing: true,
          chunkMetadata: [ { shard: 'shard01', nChunks: 2 } ],
          chunks: [
            { min: { a: MinKey(), b: MinKey() }, max: { a: 100, b: 100 }, 'on shard': 'shard01', 'last modified': Timestamp({ t: 2, i: 1 }) },
            { min: { a: 100, b: 100 }, max: { a: MaxKey(), b: MaxKey() }, 'on shard': 'shard01', 'last modified': Timestamp({ t: 3, i: 0 }) }
          ],
          tags: []
        }
      }

      On v7.0, the update always fails. So the behaviour above, where the update can succeed on v8.0 when two or more shards have chunks, is probably related to the change for updates on documents with a missing shard key value that @Murat Akca mentioned above.
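
      For reference (not part of the original report): based on the error message, a workaround on the single-shard topology would presumably be to include the full current shard key in the filter, which is what the server asks for when changing a document's shard key value (such updates may also need to run as a retryable write or inside a transaction):

      // hypothetical workaround sketch: filter on the full current shard key value
      // so the shard key update is allowed even with all chunks on one shard
      db.coll1.updateOne({ _id: 1, a: 10, b: 10 }, { $set: { a: 10, b: 20 } }, { upsert: true })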
       


      On MongoDB 8.0, updateOne + upsert without filtering on the shard key, when the filter includes _id, has the following behavior (a minimal sketch of the three cases follows this list):

      • if the document to update doesn't exist, the upsert succeeds
      • if the document exists:
        • if the collection is only on one shard, the update fails with: MongoServerError: Shard key update is not allowed without specifying the full shard key in the query
        • if the collection is on at least two shards, the update works
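
      A minimal sketch of the three cases, assuming a collection test.coll1 sharded on { a: 1, b: 1 } and an existing document { _id: 1 } with no shard key fields (names mirror the repro above; the outcomes are those described in the list):

      // the filter is on _id only, never on the shard key
      db.coll1.updateOne({ _id: 2 }, { $set: { a: 1, b: 1 } }, { upsert: true })  // no matching doc: upsert inserts it
      db.coll1.updateOne({ _id: 1 }, { $set: { a: 1, b: 1 } }, { upsert: true })  // doc exists, chunks on >= 2 shards: succeeds
      db.coll1.updateOne({ _id: 1 }, { $set: { a: 1, b: 1 } }, { upsert: true })  // doc exists, all chunks on one shard: MongoServerError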

      This breaks updateOne + upsert without the shard key but with _id in the filter when a user moves from a replica set to a one-shard cluster and shards a collection before adding a second shard.

       
       

            Assignee:
            Unassigned
            Reporter:
            Ratika Gandhi
            Votes:
            0
            Watchers:
            6

              Created:
              Updated:
              Resolved: