[SERVER-2329] Dropped database doesn't disappear due to replication Created: 04/Jan/11  Updated: 06/Dec/22  Resolved: 23/Nov/16

Status: Closed
Project: Core Server
Component/s: Replication
Affects Version/s: 1.6.4
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Justin Dearing Assignee: Backlog - Replication Team
Resolution: Done Votes: 13
Labels: repl1, sync
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Windows 2008 Datacenter edition


Issue Links:
Depends
Duplicate
is duplicated by SERVER-10099 Get profiling status via the REST int... Closed
is duplicated by SERVER-7879 (empty) database is created when non-... Closed
Related
is related to SERVER-2080 Connecting to an authenticated server... Closed
is related to SERVER-10783 MongoDB displays databases even after... Closed
Assigned Teams:
Replication
Operating System: Windows
Participants:

 Description   

I'm calling db.dropDatabase() on the master of a master/slave pair on Windows, and the database doesn't stay deleted. Console log:

> db.getSisterDB('staging_KMI').dropDatabase()

{ "dropped" : "staging_KMI", "ok" : 1 }

> show dbs
STAGE_landroverKMI
STAGE_landroverRSVP
admin
landroverKMI
landroverRSVP
local
stageing_KMI
staging_KMI

> use staging_KMI
> db.dropDatabase()

{ "dropped" : "staging_KMI", "ok" : 1 }

> show dbs
STAGE_landroverKMI
STAGE_landroverRSVP
admin
landroverKMI
landroverRSVP
local
stageing_KMI
staging_KMI
>

Master/slave setup:

Config on the master:

"C:\Program Files\10gen\mongodb-win32-x86_64-1.6.4\bin\mongod.exe" --service --logpath c:\data\logs\mongo-master.log --logappend --master --bind_ip localhost

Log on the master:
Tue Jan 04 14:48:57 [conn201] dropDatabase staging_KMI

Slave configuration:

"C:\Program Files\10gen\mongodb-win32-x86_64-1.6.4\bin\mongod.exe" --service --logpath c:\data\logs\mongo-slave.log --logappend --dbpath c:\data\db_slave --slave --source localhost --port 27018 --bind_ip localhost

Log on the slave:

Tue Jan 04 14:48:55 [replslave] repl: applied 4 operations
Tue Jan 04 14:48:55 [replslave] repl: end sync_pullOpLog syncedTo: Jan 04 14:48:55 4d233357:1
Tue Jan 04 14:48:55 [replslave] repl: from host:localhost
Tue Jan 04 14:48:55 [replslave] An earlier initial clone of 'staging_KMI' did not complete, now resyncing.
Tue Jan 04 14:48:55 [replslave] resync: dropping database staging_KMI
Tue Jan 04 14:48:55 [replslave] resync: cloning database staging_KMI to get an initial copy
Tue Jan 04 14:48:56 [replslave] resync: done with initial clone for db: staging_KMI
Tue Jan 04 14:48:57 [replslave] repl: applied 1 operations
Tue Jan 04 14:48:57 [replslave] repl: end sync_pullOpLog syncedTo: Jan 04 14:48:57 4d233359:1
Tue Jan 04 14:48:57 [replslave] repl: from host:localhost
Tue Jan 04 14:48:57 [replslave] An earlier initial clone of 'staging_KMI' did not complete, now resyncing.
Tue Jan 04 14:48:57 [replslave] resync: dropping database staging_KMI
Tue Jan 04 14:48:57 [replslave] resync: cloning database staging_KMI to get an initial copy
Tue Jan 04 14:48:57 [replslave] resync: done with initial clone for db: staging_KMI



 Comments   
Comment by Spencer Brody (Inactive) [ 23/Nov/16 ]

We don't believe the original issue in this ticket still exists. If anyone is still seeing databases recreated due to replication, please file a new issue. Note that SERVER-17397 still exists in sharded clusters, but that is a separate issue.

Comment by Matt Muscari [ 31/May/13 ]

Because of database-level locking rather than collection-level locking, we end up creating and deleting databases roughly every 10 minutes. Our current workaround is to let old databases sit for some time before deleting them, in which case they usually drop successfully. Taking the server offline is not an option in our environment.
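
A minimal mongo shell sketch of that "let it sit, then drop" approach, assuming the application records drop candidates in an admin.pending_drops collection with a markedAt timestamp; both the collection and the one-hour grace period are illustrative assumptions, not details from this comment:

// Hypothetical bookkeeping: drop a database only after it has sat idle
// for a grace period, giving replication time to settle.
var GRACE_MS = 60 * 60 * 1000; // one hour
db.getSiblingDB("admin").pending_drops.find().forEach(function (doc) {
    if (Date.now() - doc.markedAt.getTime() > GRACE_MS) {
        var res = db.getSiblingDB(doc.name).dropDatabase();
        if (res.ok === 1) {
            db.getSiblingDB("admin").pending_drops.remove({ _id: doc._id });
        }
    }
});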

Comment by Kenny Gorman [ 09/Apr/13 ]

This really should be prioritized for 2.4.x; it's a horrible experience to have to call stepDown() just to get dropDatabase() to essentially 'work'.
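
For reference, a minimal mongo shell sketch of the stepDown-then-drop workaround described above, assuming a replica set and that the drop is reissued against whichever node becomes primary afterwards; "stuck_db" and the timeout value are placeholders, not details from this ticket:

rs.stepDown(60)                               // ask the current primary to step down
// reconnect to whichever node becomes primary, then retry the drop there:
db.getSiblingDB("stuck_db").dropDatabase()    // "stuck_db" is a placeholder name
db.adminCommand({ listDatabases: 1 })         // confirm the name is finally gone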

Comment by Gerric Chaplin [ 05/Apr/13 ]

Bumping for the greater good. This is still an issue in 2.4.1.
I ran into a case where I had multiple DBs with the same name.
One had data and the other had none. The drivers did not report any errors while this was going on.

live_logs	1.49951171875GB
live_logs

Two others had the same name and were both empty.

_logs	(empty)
_logs	(empty)

I attempted to remove these DBs on the master, but had no luck.

The empty DBs did not seem to be copied to the other nodes in the Replica Set.

What I did to fix it (a rough shell sketch follows):
I failed over to a secondary node.
I restarted the primary node that had the issue.
The DBs disappeared after the restart.
The correct number of DBs is listed now.
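
A minimal sketch of that recovery sequence in the mongo shell, assuming a replica set; the restart itself happens at the operating-system level and is only described in comments (none of the specifics below are taken from Gerric's environment):

// On the primary that shows the phantom databases: hand off to a secondary.
rs.stepDown(120)
// Then restart the old primary's mongod process via the OS service manager
// (e.g. an init script or Windows service). Once it rejoins as a secondary,
// verify that the duplicate/empty database names are gone:
db.adminCommand({ listDatabases: 1 })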

Comment by David Trefou [ 21/Jan/13 ]

We are also having this issue with a sharded database across three replica sets, MongoDB version 2.2.2.

mongos> show dbs
Ilya 15.9462890625GB
admin 0.234375GB
config 0.0625GB
mongos> use Ilya
switched to db Ilya
mongos> db.dropDatabase()

{ "dropped" : "Ilya", "ok" : 1 }

mongos> show dbs
Ilya 15.9462890625GB
admin 0.234375GB
config 0.0625GB
mongos>

Comment by Bob Kuhar [ 08/Oct/12 ]

We're using plain-jane replication and have this issue. For some databases I can...
use dbName;
db.dropDatabase();
use test;
show dbs;
...and the database never goes away. It's frustrating.

Comment by Jason R. Coombs [ 17/Sep/12 ]

We're experiencing this issue as well. MongoDB 2.0.6.

Comment by auto [ 25/Apr/12 ]

Author:

{u'login': u'ajdavis', u'name': u'A. Jesse Jiryu Davis', u'email': u'jesse@10gen.com'}

Message: Skip test that fails due to SERVER-2329
Branch: master
https://github.com/mongodb/mongo-python-driver/commit/a5761ff2664ea1680cf73688d538efcb963a9c92

Comment by Mathieu Poumeyrol [ 08/Aug/11 ]

I have a very similar issue on a sharded, replica-set system:

> db.getCollectionNames()
[...]
"tmp.mr.type_albums_tmp_reduce_items_45993_47129_inc",
"tmp.mr.type_albums_tmp_reduce_items_45994_47130_inc",
[...]
> db.getCollection("tmp.mr.type_albums_tmp_reduce_items_45994_47130_inc").drop()
false
> db.getCollection("tmp.mr.type_albums_tmp_reduce_items_45994_47130_inc").stats()
{
"ns" : "indexer_cache.tmp.mr.type_albums_tmp_reduce_items_45994_47130_inc",
"sharded" : false,
"primary" : "pink-alpha",
"errmsg" : "ns not found",
"ok" : 0
}

Comment by Bernie Hackett [ 02/Jun/11 ]

From the slave log while the tests are running; it seems related:

Wed Jun 1 21:07:37 [replslave] repl: applied 1002 operations
Wed Jun 1 21:07:37 [replslave] repl: end sync_pullOpLog syncedTo: Jun 1 21:07:27 4de70c7f:1
Wed Jun 1 21:07:37 [replslave] repl: from host:localhost:27017
Wed Jun 1 21:07:37 [replslave] An earlier initial clone of 'pymongo-pooling-tests' did not complete, now resyncing.
Wed Jun 1 21:07:37 [replslave] resync: dropping database pymongo-pooling-tests
Wed Jun 1 21:07:37 [replslave] resync: cloning database pymongo-pooling-tests to get an initial copy
Wed Jun 1 21:07:37 [replslave] replauthenticate: no user in local.system.users to use for authentication
resync: done with initial clone for db: pymongo-pooling-tests
Wed Jun 1 21:07:37 [replslave] repl: applied 1 operations
Wed Jun 1 21:07:37 [replslave] repl: end sync_pullOpLog syncedTo: Jun 1 21:07:29 4de70c81:1
Wed Jun 1 21:07:37 [replslave] repl: from host:localhost:27017
Wed Jun 1 21:07:37 [replslave] An earlier initial clone of 'pymongo-pooling-tests' did not complete, now resyncing.
Wed Jun 1 21:07:37 [replslave] resync: dropping database pymongo-pooling-tests
Wed Jun 1 21:07:37 [replslave] resync: cloning database pymongo-pooling-tests to get an initial copy
Wed Jun 1 21:07:37 [replslave] replauthenticate: no user in local.system.users to use for authentication
resync: done with initial clone for db: pymongo-pooling-tests
Wed Jun 1 21:07:37 [replslave] repl: applied 1 operations
Wed Jun 1 21:07:37 [replslave] repl: end sync_pullOpLog syncedTo: Jun 1 21:07:30 4de70c82:1
Wed Jun 1 21:07:37 [replslave] repl: from host:localhost:27017
Wed Jun 1 21:07:37 [replslave] An earlier initial clone of 'pymongo-pooling-tests' did not complete, now resyncing.
Wed Jun 1 21:07:37 [replslave] resync: dropping database pymongo-pooling-tests
Wed Jun 1 21:07:37 [replslave] resync: cloning database pymongo-pooling-tests to get an initial copy
Wed Jun 1 21:07:37 [replslave] replauthenticate: no user in local.system.users to use for authentication
resync: done with initial clone for db: pymongo-pooling-tests

Comment by Bernie Hackett [ 02/Jun/11 ]

My tests were run with 1.8.1 and newer.

Comment by Bernie Hackett [ 02/Jun/11 ]

I should also point out that these tests pass when running against a single mongod instance (obviously the master_slave_connection test doesn't run that way).

Comment by Bernie Hackett [ 02/Jun/11 ]

I'm seeing the same problem on OSX and Linux x86_64.

Steps to reproduce:

Start up a master on port 27017
Start up a slave on port 27018

Run the pymongo unittest suite using nose:
python setup.py test

One or more of the following tests will fail when checking whether a database name still exists after attempting to drop the database:

======================================================================
FAIL: test_copy_db (test.test_connection.TestConnection)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/behackett/work/mongo-python-driver/test/test_connection.py", line 158, in test_copy_db
self.assertFalse("pymongo_test1" in c.database_names())
AssertionError

======================================================================
FAIL: test_drop_database (test.test_connection.TestConnection)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/behackett/work/mongo-python-driver/test/test_connection.py", line 142, in test_drop_database
self.assert_("pymongo_test" not in dbs)
AssertionError

======================================================================
FAIL: test_drop_database (test.test_master_slave_connection.TestMasterSlaveConnection)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/behackett/work/mongo-python-driver/test/test_master_slave_connection.py", line 189, in test_drop_database
self.assert_("pymongo_test" not in dbs)
AssertionError

If you then log into the mongo shell, you can see that the test DBs are listed as empty and can't be deleted:

> show dbs
admin 0.203125GB
foo (empty)
local 0.453125GB
pymongo-pooling-tests (empty)
pymongo_test1 (empty)
pymongo_test2 (empty)
pymongo_test_bernie (empty)
test (empty)
test_pymongo (empty)
> use pymongo_test1
switched to db pymongo_test1
> db.dropDatabase()

{ "dropped" : "pymongo_test1", "ok" : 1 }

> use local
switched to db local
> show dbs
admin 0.203125GB
foo (empty)
local 0.453125GB
pymongo-pooling-tests (empty)
pymongo_test1 (empty)
pymongo_test2 (empty)
pymongo_test_bernie (empty)
test (empty)
test_pymongo (empty)

Comment by Justin Dearing [ 04/Jan/11 ]

I meant master/slave. I have one master and one slave on the same hardware. The slave goes down once a day for backups. Nothing queries the slave.

Configured based on these directions:
http://www.mongodb.org/display/DOCS/Master+Slave

Comment by Eliot Horowitz (Inactive) [ 04/Jan/11 ]

Do you mean replica pairs, or something else?
If so, can you switch to replica sets? Replica pairs are going away.
