[SERVER-2079] Authenticated Connections do not get terminated when dropDatabase() command is issued. Created: 09/Nov/10  Updated: 08/Mar/13  Resolved: 28/Feb/13

Status: Closed
Project: Core Server
Component/s: None
Affects Version/s: 1.6.3
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: Justin Smestad Assignee: Unassigned
Resolution: Duplicate Votes: 5
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

ubuntu 10.04


Issue Links:
Depends
is depended on by SERVER-2080 Connecting to an authenticated server... Closed
Duplicate
duplicates SERVER-6620 Auth credentials should be invalidate... Closed
Operating System: ALL
Participants:

 Description   

With the server running with authentication enabled, if a user has an active connection to a particular database and you go into the MongoDB shell and execute `db.dropDatabase()` or `db.getSisterDB('db_name').dropDatabase()`, the data is dropped, but the already-authenticated connections are left open, allowing them to recreate the database.
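
A minimal reproduction sketch in the mongo shell (user and password names are hypothetical; the database and collection names are taken from the log below):

// Shell session 1: an application connection authenticates and uses the database.
use clobby-staging
db.auth("appUser", "secret")          // hypothetical credentials
db.rooms.insert({ name: "lobby" })

// Shell session 2: an administrator drops the database.
use clobby-staging
db.auth("admin", "secret")
db.dropDatabase()

// Shell session 1 again: the connection is still open and still authenticated,
// so its next operation quietly recreates clobby-staging (compare the
// "Accessing: clobby-staging for the first time" lines in the log below).
db.rooms.insert({ name: "lobby" })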

A temporary workaround is to restart the networking service to kill all active connections, or to restart mongod, for the drop to take effect. This is a pretty big bug for MongoDB hosting providers like ourselves.

Here is a gist of the relevant log output (reproduced below): https://gist.github.com/9a6e2b1b81917d18b11c

Tue Nov 9 04:31:41 [conn65729] run command clobby-staging.$cmd { dropDatabase: 1.0 }
Tue Nov 9 04:31:41 [conn65729] dropDatabase clobby-staging
Tue Nov 9 04:31:41 [conn65729] dropDatabase clobby-staging
Tue Nov 9 04:31:41 [conn65729] run command clobby-staging.$cmd { dropDatabase: 1.0 }
Tue Nov 9 04:31:41 [conn65729] dropDatabase clobby-staging
Tue Nov 9 04:31:41 [conn65729] dropDatabase clobby-staging
Tue Nov 9 04:31:41 [conn4] getmore local.oplog.$main cid:5068368097862225500 getMore: { ts: { $gte: new Date(5534923916268535809) } } bytes:105 nreturned:1 3282ms
Tue Nov 9 04:31:41 [conn65729] query clobby-staging.$cmd ntoreturn:1 command: { dropDatabase: 1.0 } reslen:81 12ms

Tue Nov 9 04:31:42 [conn65447] Accessing: clobby-staging for the first time
Tue Nov 9 04:31:42 [conn65447] query clobby-staging.rooms ntoreturn:1 reslen:36 nreturned:0 0ms
Tue Nov 9 04:31:42 allocating new datafile /data/mongodb/clobby-staging/clobby-staging.ns, filling with zeroes...
Tue Nov 9 04:31:42 done allocating datafile /data/mongodb/clobby-staging/clobby-staging.ns, size: 16MB, took 0.002 secs
Tue Nov 9 04:31:42 allocating new datafile /data/mongodb/clobby-staging/clobby-staging.0, filling with zeroes...
Tue Nov 9 04:31:42 done allocating datafile /data/mongodb/clobby-staging/clobby-staging.0, size: 64MB, took 0.001 secs
Tue Nov 9 04:31:42 allocating new datafile /data/mongodb/clobby-staging/clobby-staging.1, filling with zeroes...
Tue Nov 9 04:31:42 [conn65447] New namespace: clobby-staging.rooms
New namespace: clobby-staging.system.namespaces
adding _id index for collection clobby-staging.rooms
Tue Nov 9 04:31:42 [conn65447] New namespace: clobby-staging.system.indexes
building new index on { _id: 1 } for clobby-staging.rooms
Tue Nov 9 04:31:42 [conn65447] external sort root: /data/mongodb/_tmp/esort.1289277102.2034221856/
Tue Nov 9 04:31:42 [conn65447] external sort used : 0 files in 0 secs
Tue Nov 9 04:31:42 done allocating datafile /data/mongodb/clobby-staging/clobby-staging.1, size: 128MB, took 0.001 secs
Tue Nov 9 04:31:42 [conn65447] New namespace: clobby-staging.rooms.$id
done building bottom layer, going to commit
Tue Nov 9 04:31:42 [conn65447] fastBuildIndex dupsToDrop:0
Tue Nov 9 04:31:42 [conn65447] done for 0 records 0secs
Tue Nov 9 04:31:42 [conn4] getmore local.oplog.$main cid:5068368097862225500 getMore: { ts: { $gte: new Date(5534923916268535809) } } bytes:184 nreturned:1 798ms
Tue Nov 9 04:31:42 [conn65447] insert clobby-staging.rooms 60ms



 Comments   
Comment by David Cardon [ 28/Feb/13 ]

Sorry, I should clarify my example, because I don't believe the duplicate you identify is a duplicate. Here's the full process:

Cluster routing service domain: mongos
Cluster shard domains: alice, bob, chuck

Main write process = M
Oplog follower process = F
Mongo console = C

M authenticates against: mongos/admin
F authenticates against: mongos/admin
C authenticates against: mongos/admin
M foo.bar.insert({...})
C foo_z.bar.ensureIndex({l:1}); (mongo assigns foo_z.bar to bob)
F foo_z.bar.insert({...}) => which directs writes to bob/foo_z.bar
C foo.dropDatabase();
C foo_z.dropDatabase();
C foo_z.bar.ensureIndex({l:1}); (mongo assigns foo_z.bar to chuck this time around)
M foo.bar.insert({...})
F foo_z.bar.insert({...}) => which STILL directs writes to bob/foo_z.bar (even though config collections state that foo_z.bar resides on chuck)

So, I'm not removing any users or databases whose credentials should be invalidated. My admin credentials are still perfectly valid in this process.
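
For reference, the mismatch can be seen by comparing where the cluster metadata places foo_z.bar with the shard the follower's connection actually writes to; a sketch run through mongos (config.databases shows the primary shard that holds an unsharded foo_z.bar, config.chunks shows chunk ownership if it has been sharded):

// Run through mongos. Compare the shard reported here with the shard the
// stale follower connection is still writing to (bob in the example above).
var config = db.getSiblingDB("config");
config.databases.find({ _id: "foo_z" });                                // primary shard for the database
config.chunks.find({ ns: "foo_z.bar" }, { shard: 1, min: 1, max: 1 });  // chunk owners, if sharded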

Comment by Scott Hernandez (Inactive) [ 28/Feb/13 ]

David, what you are talking about is very different, and related to issues in mongos or the client app. The former will be fixed in 2.4, since we now uniquely identify collections that have the same name but are different instances in mongos/sharded clusters (using a unique ObjectId for each instance).
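
Until that 2.4 change lands, a general way to make each mongos discard its cached routing metadata after a drop/recreate is the flushRouterConfig admin command; whether it also covers the follower's long-lived connection is not confirmed in this ticket:

// Run against every mongos after dropping and recreating the database so that
// subsequent operations re-read shard assignments from the config servers.
db.adminCommand({ flushRouterConfig: 1 })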

Comment by David Cardon [ 28/Feb/13 ]

Here is (I think) one of the side effects of this issue that we are seeing:

  • We have a custom oplog follower that observes changes and writes to the cluster
  • This oplog follower makes authenticated connections to mongos
  • When a database is dropped, the oplog follower's connection "remembers" the original routing to collections within the dropped database
  • If a database with the same name is later created again, the oplog follower writes to the collection's original location, NOT the location newly assigned to it.
  • As a result, connections through mongos (except for the follower's) are unaware of the misplaced records.

Example:

Cluster shard domains: alice, bob, chuck
Process inserts into: foo.bar
console: db.getSiblingDB('foo_z').bar.ensureIndex({l:1}); (mongo assigns foo_z.bar to bob)
Oplog follower writes to: foo_z.bar which translates to bob/foo_z.bar
console: db.getSiblingDB('foo').dropDatabase();
console: db.getSiblingDB('foo_z').dropDatabase();
console: db.getSiblingDB('foo_z').bar.ensureIndex({l:1}); (mongo assigns foo_z.bar to chuck)
Process inserts into: foo.bar
Oplog follower writes to: foo_z.bar which STILL translates to bob/foo_z.bar (even though other connections expect chuck/foo_z.bar)
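
For background, the follower described above boils down to a tailable cursor on the oplog whose entries are re-applied through a long-lived, authenticated connection to mongos; a minimal sketch of that pattern in the mongo shell (the actual follower is a custom application, and local.oplog.$main matches the master/slave oplog shown in the log above; replica sets use local.oplog.rs instead):

// Minimal oplog-tailing loop (sketch only; the real follower is a custom app).
var local = db.getSiblingDB("local");
var last = local.oplog.$main.find().sort({ $natural: -1 }).limit(1).next();
var cur = local.oplog.$main.find({ ts: { $gt: last.ts } })
                           .addOption(DBQuery.Option.tailable)
                           .addOption(DBQuery.Option.awaitData);
while (cur.hasNext()) {
    var entry = cur.next();
    // Re-apply 'entry' through the follower's long-lived, authenticated mongos
    // connection. After a dropDatabase()/recreate, that connection can still
    // route writes to the shard that previously owned the namespace.
    printjson(entry);
}
// A real follower would keep polling when hasNext() returns false instead of
// exiting once it has caught up.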

Comment by Anthony Crumley [ 01/May/11 ]

Justin,

I am working on this one, but it is my first attempt at a MongoDB contribution, so we will see how it works out.

Comment by Justin Smestad [ 01/May/11 ]

Is there any movement on this?
