[SERVER-25601] dropDatabase doesn't validate writeConcern until one replicated database is created Created: 13/Aug/16  Updated: 15/Aug/16  Resolved: 15/Aug/16

Status: Closed
Project: Core Server
Component/s: Write Ops
Affects Version/s: 3.3.10
Fix Version/s: None

Type: Bug Priority: Minor - P4
Reporter: A. Jesse Jiryu Davis Assignee: Unassigned
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Operating System: ALL
Participants:

 Description   

On a fresh replica set with three nodes (version 3.3.10-421-gbd66e1b), the dropDatabase command doesn't check whether its writeConcern can be satisfied. A writeConcern of w: 99 should cause a writeConcernError, but it doesn't:

replset:PRIMARY> db.runCommand({dropDatabase: 1, writeConcern: {w: 99}})
{ "ok" : 1 }

Once any database has been created, dropDatabase with a writeConcern works as expected and reports a writeConcernError:

replset:PRIMARY> db.c.insert({})
WriteResult({ "nInserted" : 1 })
replset:PRIMARY> db.runCommand({dropDatabase: 1, writeConcern: {w: 99}})
{
       	"dropped" : "test",
       	"ok" : 1,
       	"writeConcernError" : {
       		"code" : 100,
       		"errmsg" : "Not enough data-bearing nodes"
       	}
}

From this point forward, even after dropping all databases except "local", dropDatabase still reports a writeConcernError with w: 99, as expected:

replset:PRIMARY> use foo
switched to db foo
replset:PRIMARY> db.runCommand({dropDatabase: 1, writeConcern: {w: 99}})
{
       	"ok" : 1,
       	"writeConcernError" : {
       		"code" : 100,
       		"errmsg" : "Not enough data-bearing nodes"
       	}
}
replset:PRIMARY> use bar
switched to db bar
replset:PRIMARY> db.runCommand({dropDatabase: 1, writeConcern: {w: 99}})
{
       	"ok" : 1,
       	"writeConcernError" : {
       		"code" : 100,
       		"errmsg" : "Not enough data-bearing nodes"
       	}
}
replset:PRIMARY> db.runCommand({dropDatabase: 1, writeConcern: {w: 99}})
{
       	"ok" : 1,
       	"writeConcernError" : {
       		"code" : 100,
       		"errmsg" : "Not enough data-bearing nodes"
       	}
}



 Comments   
Comment by A. Jesse Jiryu Davis [ 15/Aug/16 ]

Fine by me, I only noticed because of a driver test and the workaround is simple.
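Concretely, the workaround (consistent with the behavior shown in the description; the exact change made for CDRIVER-1460 is in the linked commit) is to issue any trivial write first, so the client has a replicated opTime before calling dropDatabase:

replset:PRIMARY> db.c.insert({})
WriteResult({ "nInserted" : 1 })
replset:PRIMARY> db.runCommand({dropDatabase: 1, writeConcern: {w: 99}})

After the insert, the dropDatabase response includes the expected writeConcernError, as in the second transcript above.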

Comment by Judah Schvimer [ 15/Aug/16 ]

This is occurring due to these lines of code: https://github.com/mongodb/mongo/blob/8855c03bdf307ef74825e0274344b1ce8df0852b/src/mongo/db/write_concern.cpp#L221-L224

Basically, if no write has been done yet on the current client, we short-circuit and don't wait for the write concern to be satisfied. Once a write has been done on the client, that short circuit no longer applies. To make this consistent we could remove that block so it always calls awaitReplication, but that would slow things down marginally. Alternatively, we could special-case a no-op dropDatabase, but there's no obviously clean way to do it. I lean towards "works as designed" for this one, especially given the nature of write concern errors in general.
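A minimal sketch of the short circuit described above (a simplified Python model, not the actual C++ in write_concern.cpp; the function name, the opTime representation, and the node count are assumptions for illustration):

```python
DATA_BEARING_NODES = 3  # replica-set size from the report (assumption)

def wait_for_write_concern(client_last_optime, w):
    """Model of the server's write-concern wait for a command response.

    client_last_optime is None when the client has never performed a
    replicated write; an integer stands in for a real opTime.
    """
    if client_last_optime is None:
        # The short circuit: nothing to wait for, so awaitReplication is
        # never called and an unsatisfiable w goes unreported.
        return {"ok": 1}
    if isinstance(w, int) and w > DATA_BEARING_NODES:
        # awaitReplication would fail: w exceeds the data-bearing nodes.
        return {"ok": 1,
                "writeConcernError": {"code": 100,
                                      "errmsg": "Not enough data-bearing nodes"}}
    return {"ok": 1}

# Fresh client, no prior write: dropDatabase's w:99 is never checked.
print(wait_for_write_concern(None, 99))
# After any write the client has an opTime, so w:99 now surfaces an error.
print(wait_for_write_concern(42, 99))
```

This also shows why removing the short circuit would make the behavior consistent at the cost of always waiting, per the trade-off described above.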

Comment by Githook User [ 13/Aug/16 ]

Author:

{u'username': u'ajdavis', u'name': u'A. Jesse Jiryu Davis', u'email': u'jesse@mongodb.com'}

Message: CDRIVER-1460 update dropDatabase writeConcern test

Work around SERVER-25601.
Branch: master
https://github.com/mongodb/mongo-c-driver/commit/9c58b553239cec67c866d5685b869bfe7059ee67

Generated at Thu Feb 08 04:09:39 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.