[SERVER-35265] Write concern error on dropDatabase command also reports command failure Created: 29/May/18  Updated: 27/Oct/23  Resolved: 11/Jun/18

Status: Closed
Project: Core Server
Component/s: Catalog, Replication
Affects Version/s: None
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Jeffrey Yemin Assignee: Louis Williams
Resolution: Works as Designed Votes: 0
Labels: nyc
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Backports
Depends
is depended on by JAVA-2868 Re-enable write concern tests for dro... Closed
Operating System: ALL
Backport Requested: v4.0
Steps To Reproduce:

Start a 3-node replica set and connect to the primary in the shell (tested with 4.1.0-77-gf0e5229). Then create a database and drop it with w:5:

MongoDB Enterprise repl0:PRIMARY> db.test.insert({})
WriteResult({ "nInserted" : 1 })
MongoDB Enterprise repl0:PRIMARY> db.runCommand({dropDatabase: 1, writeConcern : {w : 5} })
{
	"writeConcernError" : {
		"code" : 100,
		"codeName" : "CannotSatisfyWriteConcern",
		"errmsg" : "Not enough data-bearing nodes"
	},
	"operationTime" : Timestamp(1527552755, 1),
	"ok" : 0,
	"errmsg" : "dropDatabase test failed waiting for 1 collection drops (most recent drop optime: { ts: Timestamp(1527552755, 1), t: 2 }) to replicate. :: caused by :: Not enough data-bearing nodes",
	"code" : 100,
	"codeName" : "CannotSatisfyWriteConcern",
	"$clusterTime" : {
		"clusterTime" : Timestamp(1527552755, 1),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

Expected results (tested with 4.0.0-rc0):

MongoDB Enterprise repl0:PRIMARY> db.test.insert({})
WriteResult({ "nInserted" : 1 })
MongoDB Enterprise repl0:PRIMARY> db.runCommand({dropDatabase: 1, writeConcern : {w : 5} })
{
	"dropped" : "test",
	"ok" : 1,
	"writeConcernError" : {
		"code" : 100,
		"codeName" : "CannotSatisfyWriteConcern",
		"errmsg" : "Not enough data-bearing nodes"
	},
	"operationTime" : Timestamp(1527553089, 2),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1527553089, 2),
		"signature" : {
			"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
			"keyId" : NumberLong(0)
		}
	}
}

Sprint: Storage NYC 2018-06-18

 Description   

A write concern error normally results in a successful command response (ok: 1) accompanied by a writeConcernError document. In recent 4.1 builds, however, the dropDatabase command reports a command failure (ok: 0) as well, making it inconsistent with other commands.

As a result, a driver may throw the wrong exception type. The Java driver, for example, throws MongoCommandException instead of MongoWriteConcernException, which causes several write-concern-related regression tests to fail.
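For illustration, a minimal Java driver sketch of the difference this makes to callers. The class name and connection string are illustrative, and it assumes the three-node repl0 set from the steps above:

import com.mongodb.MongoCommandException;
import com.mongodb.MongoWriteConcernException;
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

// Hypothetical reproduction of the driver-side effect described above.
public class DropDatabaseWriteConcernRepro {
    public static void main(String[] args) {
        try (MongoClient client =
                     MongoClients.create("mongodb://localhost:27017/?replicaSet=repl0")) {
            try {
                // w: 5 cannot be satisfied by a 3-node replica set.
                client.getDatabase("test")
                      .withWriteConcern(new WriteConcern(5))
                      .drop();
            } catch (MongoWriteConcernException e) {
                // Expected path: the server returns ok: 1 plus a writeConcernError document.
                System.out.println("write concern error: " + e.getWriteConcernError());
            } catch (MongoCommandException e) {
                // Path actually taken against recent 4.1 builds: dropDatabase fails
                // outright with ok: 0 (code 100, CannotSatisfyWriteConcern) at the top level.
                System.out.println("command error " + e.getErrorCode()
                        + " (" + e.getErrorCodeName() + ")");
            }
        }
    }
}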



 Comments   
Comment by Louis Williams [ 11/Jun/18 ]

Based on the discussion, I am closing this as "Works as Designed".

Comment by Jeffrey Yemin [ 06/Jun/18 ]

My intent was to make the server team aware in case this was not an intentional change. It's acceptable to me to close this as Won't Fix.

Comment by Louis Williams [ 06/Jun/18 ]

I believe that the dropDatabase command should not return ok: 1 unless the dropDatabase oplog entry has been written to the primary. With two-phase drops, the first phase renames the collections to drop-pending namespaces and waits for them to replicate to at least a majority of nodes. The second phase performs the database drop, writes the oplog entry, and waits again for the user's write concern.

If the first phase fails to replicate, it would not be acceptable to return ok: 1 to the user, because the command did not complete successfully; it should instead fail with a WriteConcernError. While this differs from how most other commands behave, I think it makes sense for a user to receive a CommandException in this case.
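Given that design, a caller who wants to treat an unsatisfied write concern uniformly across both response shapes can key off the error code rather than the exception type. A minimal Java sketch, based on the codes in the transcripts above (the helper class and method names are hypothetical):

import com.mongodb.MongoCommandException;
import com.mongodb.MongoWriteConcernException;

// Illustrative helper: both response shapes in this ticket carry code 100
// (CannotSatisfyWriteConcern), either inside writeConcernError (ok: 1) or at
// the top level of the command failure (ok: 0).
final class WriteConcernErrors {
    static final int CANNOT_SATISFY_WRITE_CONCERN = 100;

    static boolean isUnsatisfiedWriteConcern(Exception e) {
        if (e instanceof MongoWriteConcernException) {
            return ((MongoWriteConcernException) e).getWriteConcernError().getCode()
                    == CANNOT_SATISFY_WRITE_CONCERN;
        }
        if (e instanceof MongoCommandException) {
            return ((MongoCommandException) e).getErrorCode() == CANNOT_SATISFY_WRITE_CONCERN;
        }
        return false;
    }
}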

Discussed with Jeff over Slack; he thinks SERVER-35083 may be sufficient to force a write concern error, which was the goal of his w: 5 example. jeff.yemin, please let me know how you would like us to proceed on this ticket.

Comment by Jeffrey Yemin [ 04/Jun/18 ]

The issue can be reproduced with 4.0.0-rc1 as well.

Comment by Ian Whalen (Inactive) [ 01/Jun/18 ]

When resolving this please check whether this needs to get backported to 4.0.
