Core Server / SERVER-81259

updateOne without shard key does not handle WriteConcernErrors properly

    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: None
    • Assigned Teams: Query Execution
    • Operating System: ALL
    • Sprint: Cluster Scalability 2023-11-13, Cluster Scalability 2023-11-27, Cluster Scalability 2023-12-11, Cluster Scalability 2023-12-25, Cluster Scalability 2024-1-8, Cluster Scalability 2024-1-22, Cluster Scalability 2024-2-5, Cluster Scalability 2024-2-19, Cluster Scalability 2024-3-4, Cluster Scalability 2024-3-18, Cluster Scalability 2024-4-1, Cluster Scalability 2024-4-15, Cluster Scalability 2024-4-29, Cluster Scalability 2024-5-13, Cluster Scalability 2024-5-27, Cluster Scalability 2024-6-10, Cluster Scalability 06/24/24, Cluster Scalability 2024-07-08, Cluster Scalability 2024-07-22, Cluster Scalability 2024-08-19

      When an update with multi: true encounters a WriteConcernError (WCE) on mongos, we get the following response. Note that the WCE appears in the top-level writeConcernError field:

      {
      	"nModified" : 2,
      	"n" : 2,
      	"writeConcernError" : {
      		"code" : 64,
      		"codeName" : "WriteConcernFailed",
      		"errmsg" : "Multiple errors reported :: UnsatisfiableWriteConcern: Not enough data-bearing nodes; Error details: { writeConcern: { w: 3, wtimeout: 0, provenance: \"clientSupplied\" } } at shard-rs0 :: and :: UnsatisfiableWriteConcern: Not enough data-bearing nodes; Error details: { writeConcern: { w: 3, wtimeout: 0, provenance: \"clientSupplied\" } } at shard-rs1",
      		"errInfo" : {
      			
      		}
      	},
      	"ok" : 1,
      	"$clusterTime" : {
      		"clusterTime" : Timestamp(1695235981, 1),
      		"signature" : {
      			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
      			"keyId" : NumberLong(0)
      		}
      	},
      	"operationTime" : Timestamp(1695235981, 1)
      }
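
      For reference, the response above can be produced with a command along these lines. This is a sketch, not taken from the ticket: the collection name coll is illustrative, and it assumes a two-shard cluster (shard-rs0, shard-rs1) where each shard has fewer than three data-bearing nodes, so w: 3 is unsatisfiable:

      	// Broadcast update targeting both shards; the documents update
      	// successfully, but w: 3 cannot be acknowledged on either shard.
      	db.runCommand({
      		update: "coll",
      		updates: [ { q: {}, u: { $set: { x: 1 } }, multi: true } ],
      		writeConcern: { w: 3, wtimeout: 0 }
      	})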
      

      On the other hand, when the same update runs with multi: false on a multi-shard cluster and the query does not contain the shard key (thereby triggering the updateOne-without-shard-key path), the WCE is placed into the writeErrors array instead, which is inconsistent with the above:

      {
      	"nModified" : 0,
      	"n" : 0,
      	"writeErrors" : [
      		{
      			"index" : 0,
      			"code" : 100,
      			"errmsg" : "Write results unavailable from failing to target a host in the shard shard-rs1 :: caused by :: Command error committing internal transaction :: caused by :: Not enough data-bearing nodes; Error details: { writeConcern: { w: 3, wtimeout: 0, provenance: \"clientSupplied\" } }"
      		}
      	],
      	"ok" : 1,
      	"$clusterTime" : {
      		"clusterTime" : Timestamp(1695235903, 2),
      		"signature" : {
      			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
      			"keyId" : NumberLong(0)
      		}
      	},
      	"operationTime" : Timestamp(1695235903, 2)
      }
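
      This second response comes from the same kind of command with multi: false and a query that does not contain the shard key (a here stands in for any non-shard-key field; again a sketch under the same cluster assumptions):

      	// multi: false without the shard key in the query, so mongos runs
      	// the update as an internal transaction; the WCE surfaces under
      	// writeErrors (code 100, UnsatisfiableWriteConcern) instead.
      	db.runCommand({
      		update: "coll",
      		updates: [ { q: { a: 1 }, u: { $set: { x: 1 } }, multi: false } ],
      		writeConcern: { w: 3, wtimeout: 0 }
      	})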
      

      Also, when some other server error occurs, such as a TypeMismatch, the WCE is hidden entirely and only the TypeMismatch is reported:

      {
      	"nModified" : 0,
      	"n" : 0,
      	"writeErrors" : [
      		{
      			"index" : 0,
      			"code" : 14,
      			"errmsg" : "Write results unavailable from failing to target a host in the shard shard-rs0 :: caused by :: Cannot apply $inc to a value of non-numeric type. {_id: 1000.0} has the field 'a' of non-numeric type array"
      		}
      	],
      	"ok" : 1,
      	"$clusterTime" : {
      		"clusterTime" : Timestamp(1695236285, 4),
      		"signature" : {
      			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
      			"keyId" : NumberLong(0)
      		}
      	},
      	"operationTime" : Timestamp(1695236285, 2)
      }
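
      The third response can be reproduced by making the update itself fail as well, matching the errmsg above: assuming a document { _id: 1000, a: [ ] } exists, applying $inc to the array field a raises TypeMismatch (code 14), and the WCE is dropped entirely:

      	// The update fails with TypeMismatch because $inc cannot be
      	// applied to an array; only that error is reported, and the
      	// unsatisfied w: 3 write concern is not surfaced anywhere.
      	db.runCommand({
      		update: "coll",
      		updates: [ { q: { _id: 1000 }, u: { $inc: { a: 1 } }, multi: false } ],
      		writeConcern: { w: 3, wtimeout: 0 }
      	})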
      

      See SERVER-81246, which tracks a similar bug in FLE (which also uses internal transactions).
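
      To illustrate the impact: a client that checks the top-level writeConcernError field, which is where WCEs are normally reported, silently misses the failure in the multi: false cases above. A sketch:

      	const res = db.runCommand({
      		update: "coll",
      		updates: [ { q: { a: 1 }, u: { $set: { x: 1 } }, multi: false } ],
      		writeConcern: { w: 3, wtimeout: 0 }
      	});
      	// The documented place to look for a WCE; for the multi: false
      	// responses above it is never populated, so the unsatisfied
      	// write concern goes unnoticed unless writeErrors is parsed too.
      	if (res.writeConcernError) {
      		print("write concern failed: " + res.writeConcernError.errmsg);
      	}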

            Assignee: Unassigned
            Reporter: Vishnu Kaushik (vishnu.kaushik@mongodb.com)
            Votes: 0
            Watchers: 9