mongodb version:
Thu Feb 2 02:48:31 /usr/bin/mongos db version v2.0.2, pdfile version 4.5 starting (--help for usage)
Thu Feb 2 02:48:31 git version: 514b122d308928517f5841888ceaa4246a7f18e3
Thu Feb 2 02:48:31 build info: Linux bs-linux64.10gen.cc 2.6.21.7-2.ec2.v1.2.fc8xen #1 SMP Fri Nov 20 17:48:28 EST 2009 x86_64 BOOST_LIB_VERSION=1_41
...
Thu Feb 2 05:28:28 [mongosMain] connection accepted from 10.150.189.198:15350 #28
Thu Feb 2 05:28:28 [conn28] CMD: shardcollection: { shardcollection: "dummy.coll_1", unique: true, key: { _id: 1 } }
Thu Feb 2 05:28:28 [conn28] enable sharding on: dummy.coll_1 with shard key: { _id: 1 }
Thu Feb 2 05:28:28 [conn28] created new distributed lock for dummy.coll_1 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:28 [conn28] ChunkManager: time to load chunks for dummy.coll_1: 0ms sequenceNumber: 33 version: 0|0
Thu Feb 2 05:28:28 [conn28] going to create 1 chunk(s) for: dummy.coll_1
Thu Feb 2 05:28:29 [conn28] warning: version 0 found when reloading chunk manager, collection 'dummy.coll_1' initially detected as sharded
Thu Feb 2 05:28:29 [conn28] created new distributed lock for dummy.coll_1 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:29 [conn28] ChunkManager: time to load chunks for dummy.coll_1: 0ms sequenceNumber: 34 version: 1|0
Thu Feb 2 05:28:30 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' acquired, ts : 4f2a654e7779ca786dc8fa39
Thu Feb 2 05:28:30 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:28:30 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a654ee14501777999878d
Thu Feb 2 05:28:30 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:28:31 [mongosMain] connection accepted from 10.150.189.198:15351 #29
Thu Feb 2 05:28:31 [conn29] CMD: shardcollection: { shardcollection: "dummy.coll_2", unique: true, key: { did: 1, abc: 1 } }
Thu Feb 2 05:28:31 [conn29] enable sharding on: dummy.coll_2 with shard key: { did: 1, abc: 1 }
Thu Feb 2 05:28:31 [conn29] created new distributed lock for dummy.coll_2 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:31 [conn29] ChunkManager: time to load chunks for dummy.coll_2: 0ms sequenceNumber: 35 version: 0|0
Thu Feb 2 05:28:31 [conn29] going to create 1 chunk(s) for: dummy.coll_2
Thu Feb 2 05:28:31 [conn29] warning: version 0 found when reloading chunk manager, collection 'dummy.coll_2' initially detected as sharded
Thu Feb 2 05:28:31 [conn29] created new distributed lock for dummy.coll_2 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:31 [conn29] ChunkManager: time to load chunks for dummy.coll_2: 0ms sequenceNumber: 36 version: 1|0
Thu Feb 2 05:28:34 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' acquired, ts : 4f2a65528a73058fbd954236
Thu Feb 2 05:28:34 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' unlocked.
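For reference, the "CMD: shardcollection" entries above are what mongos logs when the shardCollection admin command is issued against it. A minimal sketch of the shell commands that would produce them (the enablesharding step is implied by the log but not shown in it):

    // From a mongo shell connected to this mongos (e.g. DAL1:27017)
    db.adminCommand({ enablesharding: "dummy" })
    db.adminCommand({ shardcollection: "dummy.coll_1", key: { _id: 1 }, unique: true })
    db.adminCommand({ shardcollection: "dummy.coll_2", key: { did: 1, abc: 1 }, unique: true })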
Thu Feb 2 05:28:34 [conn18] DROP: dummy.coll_1
Thu Feb 2 05:28:34 [conn18] about to log metadata event: { _id: "DAL1-2012-02-02T10:28:34-30", server: "DAL1", clientAddr: "N/A", time: new Date(1328178514676), what: "dropCollection.start", ns: "dummy.coll_1", details: {} }
Thu Feb 2 05:28:35 [conn18] distributed lock 'dummy.coll_1/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a6552e14501777999878e
Thu Feb 2 05:28:35 [conn18] about to log metadata event: { _id: "DAL1-2012-02-02T10:28:35-31", server: "DAL1", clientAddr: "N/A", time: new Date(1328178515121), what: "dropCollection", ns: "dummy.coll_1", details: {} }
Thu Feb 2 05:28:35 [conn18] distributed lock 'dummy.coll_1/DAL1:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:28:35 [conn18] DROP: dummy.coll_2
Thu Feb 2 05:28:35 [conn18] about to log metadata event: { _id: "DAL1-2012-02-02T10:28:35-32", server: "DAL1", clientAddr: "N/A", time: new Date(1328178515345), what: "dropCollection.start", ns: "dummy.coll_2", details: {} }
Thu Feb 2 05:28:35 [conn18] distributed lock 'dummy.coll_2/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a6553e14501777999878f
Thu Feb 2 05:28:35 [conn18] about to log metadata event: { _id: "DAL1-2012-02-02T10:28:35-33", server: "DAL1", clientAddr: "N/A", time: new Date(1328178515645), what: "dropCollection", ns: "dummy.coll_2", details: {} }
Thu Feb 2 05:28:35 [conn18] distributed lock 'dummy.coll_2/DAL1:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:28:36 [conn19] CMD: shardcollection: { shardcollection: "dummy.coll_3", unique: true, key: { _id: 1 } }
Thu Feb 2 05:28:36 [conn19] enable sharding on: dummy.coll_3 with shard key: { _id: 1 }
Thu Feb 2 05:28:36 [conn19] created new distributed lock for dummy.coll_3 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:36 [conn19] ChunkManager: time to load chunks for dummy.coll_3: 0ms sequenceNumber: 37 version: 0|0
Thu Feb 2 05:28:36 [conn19] going to create 1 chunk(s) for: dummy.coll_3
Thu Feb 2 05:28:37 [conn19] warning: version 0 found when reloading chunk manager, collection 'dummy.coll_3' initially detected as sharded
Thu Feb 2 05:28:37 [conn19] created new distributed lock for dummy.coll_3 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:37 [conn19] ChunkManager: time to load chunks for dummy.coll_3: 0ms sequenceNumber: 38 version: 1|0
Thu Feb 2 05:28:38 [conn20] CMD: shardcollection: { shardcollection: "dummy.coll_4", unique: true, key: { did: 1, abc: 1 } }
Thu Feb 2 05:28:38 [conn20] enable sharding on: dummy.coll_4 with shard key: { did: 1, abc: 1 }
Thu Feb 2 05:28:38 [conn20] created new distributed lock for dummy.coll_4 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:38 [conn20] ChunkManager: time to load chunks for dummy.coll_4: 0ms sequenceNumber: 39 version: 0|0
Thu Feb 2 05:28:38 [conn20] going to create 1 chunk(s) for: dummy.coll_4
Thu Feb 2 05:28:38 [conn20] warning: version 0 found when reloading chunk manager, collection 'dummy.coll_4' initially detected as sharded
Thu Feb 2 05:28:38 [conn20] created new distributed lock for dummy.coll_4 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:38 [conn20] ChunkManager: time to load chunks for dummy.coll_4: 0ms sequenceNumber: 40 version: 1|0
Thu Feb 2 05:28:39 [conn13] DROP: dummy.coll_4
Thu Feb 2 05:28:39 [conn13] about to log metadata event: { _id: "DAL1-2012-02-02T10:28:39-34", server: "DAL1", clientAddr: "N/A", time: new Date(1328178519815), what: "dropCollection.start", ns: "dummy.coll_4", details: {} }
Thu Feb 2 05:28:40 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' acquired, ts : 4f2a65587779ca786dc8fa3a
Thu Feb 2 05:28:40 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:28:40 [conn13] distributed lock 'dummy.coll_4/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a6557e145017779998790
Thu Feb 2 05:28:40 [conn13] about to log metadata event: { _id: "DAL1-2012-02-02T10:28:40-35", server: "DAL1", clientAddr: "N/A", time: new Date(1328178520270), what: "dropCollection", ns: "dummy.coll_4", details: {} }
Thu Feb 2 05:28:40 [conn13] distributed lock 'dummy.coll_4/DAL1:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:28:41 [mongosMain] connection accepted from 10.150.189.198:15354 #30
Thu Feb 2 05:28:41 [conn30] SyncClusterConnection connecting to [DAL1:27019]
Thu Feb 2 05:28:41 [conn30] SyncClusterConnection connecting to [DAL2:27019]
Thu Feb 2 05:28:41 [conn30] SyncClusterConnection connecting to [DAL3:27019]
Thu Feb 2 05:28:42 [conn28] CMD: shardcollection: { shardcollection: "dummy.coll_5", unique: false, key: { shardkey1: 1, shardkey2: 1 } }
Thu Feb 2 05:28:42 [conn28] enable sharding on: dummy.coll_5 with shard key: { shardkey1: 1, shardkey2: 1 }
Thu Feb 2 05:28:42 [conn28] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:42 [conn28] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 41 version: 0|0
Thu Feb 2 05:28:42 [conn28] going to create 1 chunk(s) for: dummy.coll_5
Thu Feb 2 05:28:43 [conn28] warning: version 0 found when reloading chunk manager, collection 'dummy.coll_5' initially detected as sharded
Thu Feb 2 05:28:43 [conn28] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:43 [conn28] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 42 version: 1|0
Thu Feb 2 05:28:44 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' acquired, ts : 4f2a655c8a73058fbd954237
Thu Feb 2 05:28:44 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' unlocked.
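The DROP / dropCollection.start / dropCollection sequences above are the mongos side of ordinary collection drops; the test client presumably ran something like the following (a sketch, not taken from the test code itself):

    use dummy
    db.coll_1.drop()   // logged as "DROP: dummy.coll_1" plus two changelog events under a per-collection distributed lock
    db.coll_2.drop()
    db.coll_4.drop()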
Thu Feb 2 05:28:49 [conn16] ns: dummy.coll_5 could not initialize cursor across all shards because : stale config detected for ns: dummy.coll_5 ClusteredCursor::_checkCursor @ DAL1_DAL2_DAL3/DAL3:27018,DAL2:27018,DAL1:27018 attempt: 0
Thu Feb 2 05:28:49 [conn16] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:49 [conn16] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 68 version: 1|0
Thu Feb 2 05:28:49 [conn16] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:49 [conn16] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 69 version: 1|0
Thu Feb 2 05:28:49 [conn16] ns: dummy.coll_5 could not initialize cursor across all shards because : stale config detected for ns: dummy.coll_5 ClusteredCursor::_checkCursor @ DAL1_DAL2_DAL3/DAL3:27018,DAL2:27018,DAL1:27018 attempt: 1
Thu Feb 2 05:28:50 [conn16] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:50 [conn16] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 70 version: 1|0
Thu Feb 2 05:28:50 [conn16] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:50 [conn16] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 71 version: 1|0
Thu Feb 2 05:28:50 [conn16] ns: dummy.coll_5 could not initialize cursor across all shards because : stale config detected for ns: dummy.coll_5 ClusteredCursor::_checkCursor @ DAL1_DAL2_DAL3/DAL3:27018,DAL2:27018,DAL1:27018 attempt: 2
Thu Feb 2 05:28:50 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' acquired, ts : 4f2a65627779ca786dc8fa3b
Thu Feb 2 05:28:50 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:28:52 [conn16] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:52 [conn16] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 72 version: 1|0
Thu Feb 2 05:28:52 [conn16] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:52 [conn16] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 73 version: 1|0
Thu Feb 2 05:28:52 [conn16] ns: dummy.coll_5 could not initialize cursor across all shards because : stale config detected for ns: dummy.coll_5 ClusteredCursor::_checkCursor @ DAL1_DAL2_DAL3/DAL3:27018,DAL2:27018,DAL1:27018 attempt: 3
Thu Feb 2 05:28:55 [conn16] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:55 [conn16] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 74 version: 1|0
Thu Feb 2 05:28:55 [conn16] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:55 [conn16] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 75 version: 1|0
Thu Feb 2 05:28:55 [conn16] ns: dummy.coll_5 could not initialize cursor across all shards because : stale config detected for ns: dummy.coll_5 ClusteredCursor::_checkCursor @ DAL1_DAL2_DAL3/DAL3:27018,DAL2:27018,DAL1:27018 attempt: 4
Thu Feb 2 05:28:54 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' acquired, ts : 4f2a65668a73058fbd954238
Thu Feb 2 05:28:54 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' unlocked.
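The repeated "could not initialize cursor across all shards" entries come from ClusteredCursor, i.e. a scatter-gather read that has to open a cursor on every shard; a query on dummy.coll_5 that does not constrain the shard key takes this path. A sketch of such a read (the field name is hypothetical):

    // Fans out to all three shards because shardkey1/shardkey2 are not in the predicate
    db.coll_5.find({ someField: 42 }).toArray()

After each stale-config response mongos reloads the routing table, which is why every attempt is bracketed by a fresh distributed-lock object and a chunk reload that keeps coming back at version 1|0.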
Thu Feb 2 05:28:59 [conn16] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:59 [conn16] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 76 version: 1|0
Thu Feb 2 05:28:59 [conn16] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:28:59 [conn16] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 77 version: 1|0
Thu Feb 2 05:28:59 [conn16] ns: dummy.coll_5 could not initialize cursor across all shards because : stale config detected for ns: dummy.coll_5 ClusteredCursor::_checkCursor @ DAL1_DAL2_DAL3/DAL3:27018,DAL2:27018,DAL1:27018 attempt: 5
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 78 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 79 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 80 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 81 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 82 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 83 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 84 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] setShardVersion failed host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] Assertion: 10429:setShardVersion failed host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
0x535572 0x7f4556 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x5c1e06 0x5bffe7 0x7674a9 0x76f9da 0x776a83 0x7b487d 0x7ec307 0x5255cf 0x527684 0x804350 0x2aaaaacce617 0x2aaaab748c2d
/usr/bin/mongos(_ZN5mongo11msgassertedEiPKc+0x112) [0x535572]
/usr/bin/mongos [0x7f4556]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos(_ZN5boost6detail8function17function_invoker4IPFbRN5mongo12DBClientBaseERKSsbiEbS5_S7_biE6invokeERNS1_15function_bufferES5_S7_bi+0x16) [0x5c1e06]
/usr/bin/mongos(_ZN5mongo15ShardConnection11_finishInitEv+0x137) [0x5bffe7]
/usr/bin/mongos(_ZN5mongo8Strategy6insertERKNS_5ShardEPKcRKNS_7BSONObjEib+0x89) [0x7674a9]
/usr/bin/mongos(_ZN5mongo13ShardStrategy7_insertERNS_7RequestERNS_9DbMessageEN5boost10shared_ptrIKNS_12ChunkManagerEEE+0x7aa) [0x76f9da]
/usr/bin/mongos(_ZN5mongo13ShardStrategy7writeOpEiRNS_7RequestE+0x153) [0x776a83]
/usr/bin/mongos(_ZN5mongo7Request7processEi+0xdd) [0x7b487d]
/usr/bin/mongos(_ZN5mongo17WriteBackListener3runEv+0x9c7) [0x7ec307]
/usr/bin/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xbf) [0x5255cf]
/usr/bin/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x74) [0x527684]
/usr/bin/mongos(thread_proxy+0x80) [0x804350]
/lib64/libpthread.so.0 [0x2aaaaacce617]
/lib64/libc.so.6(clone+0x6d) [0x2aaaab748c2d]
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ~ScopedDBConnection: _conn != null
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ERROR: error processing writeback: 10429 setShardVersion failed host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 85 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 86 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 87 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 88 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 89 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 90 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 91 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] setShardVersion failed host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] Assertion: 10429:setShardVersion failed host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
0x535572 0x7f4556 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x5c1e06 0x5bffe7 0x7674a9 0x76f9da 0x776a83 0x7b487d 0x7ec307 0x5255cf 0x527684 0x804350 0x2aaaaacce617 0x2aaaab748c2d
/usr/bin/mongos(_ZN5mongo11msgassertedEiPKc+0x112) [0x535572]
/usr/bin/mongos [0x7f4556]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos(_ZN5boost6detail8function17function_invoker4IPFbRN5mongo12DBClientBaseERKSsbiEbS5_S7_biE6invokeERNS1_15function_bufferES5_S7_bi+0x16) [0x5c1e06]
/usr/bin/mongos(_ZN5mongo15ShardConnection11_finishInitEv+0x137) [0x5bffe7]
/usr/bin/mongos(_ZN5mongo8Strategy6insertERKNS_5ShardEPKcRKNS_7BSONObjEib+0x89) [0x7674a9]
/usr/bin/mongos(_ZN5mongo13ShardStrategy7_insertERNS_7RequestERNS_9DbMessageEN5boost10shared_ptrIKNS_12ChunkManagerEEE+0x7aa) [0x76f9da]
/usr/bin/mongos(_ZN5mongo13ShardStrategy7writeOpEiRNS_7RequestE+0x153) [0x776a83]
/usr/bin/mongos(_ZN5mongo7Request7processEi+0xdd) [0x7b487d]
/usr/bin/mongos(_ZN5mongo17WriteBackListener3runEv+0x9c7) [0x7ec307]
/usr/bin/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xbf) [0x5255cf]
/usr/bin/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x74) [0x527684]
/usr/bin/mongos(thread_proxy+0x80) [0x804350]
/lib64/libpthread.so.0 [0x2aaaaacce617]
/lib64/libc.so.6(clone+0x6d) [0x2aaaab748c2d]
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ~ScopedDBConnection: _conn != null
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ERROR: error processing writeback: 10429 setShardVersion failed host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 92 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 93 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 94 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 95 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 96 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 97 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] going to retry checkShardVersion host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] created new distributed lock for dummy.coll_5 on DAL1:27019,DAL2:27019,DAL3:27019 ( lock timeout : 900000, ping interval : 30000, process : 0 )
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ChunkManager: time to load chunks for dummy.coll_5: 0ms sequenceNumber: 98 version: 1|0
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] setShardVersion failed host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] Assertion: 10429:setShardVersion failed host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
0x535572 0x7f4556 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x7f3d2d 0x5c1e06 0x5bffe7 0x7674a9 0x76f9da 0x776a83 0x7b487d 0x7ec307 0x5255cf 0x527684 0x804350 0x2aaaaacce617 0x2aaaab748c2d
/usr/bin/mongos(_ZN5mongo11msgassertedEiPKc+0x112) [0x535572]
/usr/bin/mongos [0x7f4556]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos [0x7f3d2d]
/usr/bin/mongos(_ZN5boost6detail8function17function_invoker4IPFbRN5mongo12DBClientBaseERKSsbiEbS5_S7_biE6invokeERNS1_15function_bufferES5_S7_bi+0x16) [0x5c1e06]
/usr/bin/mongos(_ZN5mongo15ShardConnection11_finishInitEv+0x137) [0x5bffe7]
/usr/bin/mongos(_ZN5mongo8Strategy6insertERKNS_5ShardEPKcRKNS_7BSONObjEib+0x89) [0x7674a9]
/usr/bin/mongos(_ZN5mongo13ShardStrategy7_insertERNS_7RequestERNS_9DbMessageEN5boost10shared_ptrIKNS_12ChunkManagerEEE+0x7aa) [0x76f9da]
/usr/bin/mongos(_ZN5mongo13ShardStrategy7writeOpEiRNS_7RequestE+0x153) [0x776a83]
/usr/bin/mongos(_ZN5mongo7Request7processEi+0xdd) [0x7b487d]
/usr/bin/mongos(_ZN5mongo17WriteBackListener3runEv+0x9c7) [0x7ec307]
/usr/bin/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xbf) [0x5255cf]
/usr/bin/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x74) [0x527684]
/usr/bin/mongos(thread_proxy+0x80) [0x804350]
/lib64/libpthread.so.0 [0x2aaaaacce617]
/lib64/libc.so.6(clone+0x6d) [0x2aaaab748c2d]
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ~ScopedDBConnection: _conn != null
Thu Feb 2 05:29:00 [WriteBackListener-DAL3:27018] ERROR: error processing writeback: 10429 setShardVersion failed host: DAL3:27018 { oldVersion: Timestamp 0|0, ns: "dummy.coll_5", version: Timestamp 1000|2, globalVersion: Timestamp 1000|0, errmsg: "client version differs from config's for collection 'dummy.coll_5'", ok: 0.0 }
Thu Feb 2 05:29:01 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' acquired, ts : 4f2a656c7779ca786dc8fa3c
Thu Feb 2 05:29:01 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:05 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' acquired, ts : 4f2a65708a73058fbd954239
Thu Feb 2 05:29:05 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:06 [conn23] ns: dummy.coll_5 could not initialize cursor across all shards because : stale config detected for ns: dummy.coll_5 ParallelCursor::_init @ DAL1_DAL2_DAL3/DAL3:27018,DAL2:27018,DAL1:27018 attempt: 0
Thu Feb 2 05:29:10 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a6576e145017779998791
Thu Feb 2 05:29:11 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:11 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' acquired, ts : 4f2a65777779ca786dc8fa3d
Thu Feb 2 05:29:11 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:14 [mongosMain] connection accepted from 127.0.0.1:46906 #24
Thu Feb 2 05:29:14 [conn24] end connection 127.0.0.1:46906
Thu Feb 2 05:29:14 [mongosMain] connection accepted from 10.150.189.198:15369 #31
Thu Feb 2 05:29:15 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' acquired, ts : 4f2a657b8a73058fbd95423a
Thu Feb 2 05:29:15 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:22 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' acquired, ts : 4f2a65817779ca786dc8fa3e
Thu Feb 2 05:29:21 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a6581e145017779998792
Thu Feb 2 05:29:21 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:22 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' unlocked.
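The writeback listener is wedged here: the shard keeps answering setShardVersion with "client version differs from config's" (version 1000|2 vs. globalVersion 1000|0), mongos reloads the chunks and gets 1|0 again, and assertion 10429 repeats. When a mongos is stuck on stale routing metadata like this, one common mitigation (it does not address the underlying race) is to make it discard its cached routing table; a sketch, run against the affected mongos rather than a shard:

    db.adminCommand({ flushRouterConfig: 1 })   // drop the cached chunk/version metadata; it is reloaded on the next request

Whether this clears the writeback listener's state on 2.0.2 is not guaranteed; restarting the mongos process is the heavier fallback.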
Thu Feb 2 05:29:25 [conn17] customOut: {} outServer: DAL1_DAL2_DAL3:DAL1_DAL2_DAL3/DAL3:27018,DAL2:27018,DAL1:27018
Thu Feb 2 05:29:25 [conn17] DROP: dummy.aggrtmp_4f2a6585_lk2gx4hN4RG8zRIxQwPCPA
Thu Feb 2 05:29:25 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' acquired, ts : 4f2a65858a73058fbd95423b
Thu Feb 2 05:29:25 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:26 [conn16] customOut: {} outServer: DAL1_DAL2_DAL3:DAL1_DAL2_DAL3/DAL3:27018,DAL2:27018,DAL1:27018
Thu Feb 2 05:29:26 [conn16] DROP: dummy.aggrtmp_4f2a6586_1pI9yIhN4RGu1xIxQwPCPA
Thu Feb 2 05:29:31 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a658be145017779998793
Thu Feb 2 05:29:31 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:32 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' acquired, ts : 4f2a658c7779ca786dc8fa3f
Thu Feb 2 05:29:32 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:35 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' acquired, ts : 4f2a658f8a73058fbd95423c
Thu Feb 2 05:29:35 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:41 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a6595e145017779998794
Thu Feb 2 05:29:41 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:42 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' acquired, ts : 4f2a65967779ca786dc8fa40
Thu Feb 2 05:29:42 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:45 [conn20] DROP: dummy.coll_3
Thu Feb 2 05:29:45 [conn20] about to log metadata event: { _id: "DAL1-2012-02-02T10:29:45-36", server: "DAL1", clientAddr: "N/A", time: new Date(1328178585979), what: "dropCollection.start", ns: "dummy.coll_3", details: {} }
Thu Feb 2 05:29:46 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' acquired, ts : 4f2a65998a73058fbd95423d
Thu Feb 2 05:29:46 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:46 [conn20] distributed lock 'dummy.coll_3/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a659ae145017779998795
Thu Feb 2 05:29:46 [conn20] about to log metadata event: { _id: "DAL1-2012-02-02T10:29:46-37", server: "DAL1", clientAddr: "N/A", time: new Date(1328178586321), what: "dropCollection", ns: "dummy.coll_3", details: {} }
Thu Feb 2 05:29:46 [conn20] distributed lock 'dummy.coll_3/DAL1:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:46 [conn20] DROP: dummy.coll_5
Thu Feb 2 05:29:46 [conn20] about to log metadata event: { _id: "DAL1-2012-02-02T10:29:46-38", server: "DAL1", clientAddr: "N/A", time: new Date(1328178586585), what: "dropCollection.start", ns: "dummy.coll_5", details: {} }
Thu Feb 2 05:29:46 [conn20] distributed lock 'dummy.coll_5/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a659ae145017779998796
Thu Feb 2 05:29:46 [conn20] about to log metadata event: { _id: "DAL1-2012-02-02T10:29:46-39", server: "DAL1", clientAddr: "N/A", time: new Date(1328178586841), what: "dropCollection", ns: "dummy.coll_5", details: {} }
Thu Feb 2 05:29:47 [conn20] distributed lock 'dummy.coll_5/DAL1:27017:1328176658:1804289383' unlocked.
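The customOut/outServer lines and the dummy.aggrtmp_* drops look like a mapReduce job writing a temporary output collection through this mongos and then dropping it. The map/reduce bodies and the temp collection name below are hypothetical; this is only a sketch of that client pattern:

    var res = db.coll_5.mapReduce(
        function () { emit(this.shardkey1, 1); },          // hypothetical map
        function (key, vals) { return Array.sum(vals); },  // hypothetical reduce
        { out: { replace: "aggrtmp_example" } }
    );
    db.aggrtmp_example.drop()   // temp output dropped once consumed, matching the DROP entries above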
Thu Feb 2 05:29:52 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' acquired, ts : 4f2a65a07779ca786dc8fa41
Thu Feb 2 05:29:52 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:52 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a659fe145017779998797
Thu Feb 2 05:29:52 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:56 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' acquired, ts : 4f2a65a48a73058fbd95423e
Thu Feb 2 05:29:56 [Balancer] distributed lock 'balancer/DAL2:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:29:59 [mongosMain] connection accepted from 127.0.0.1:27196 #32
Thu Feb 2 05:29:59 [conn32] end connection 127.0.0.1:27196
Thu Feb 2 05:30:02 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' acquired, ts : 4f2a65aae145017779998798
Thu Feb 2 05:30:02 [Balancer] distributed lock 'balancer/DAL1:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:30:03 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' acquired, ts : 4f2a65aa7779ca786dc8fa42
Thu Feb 2 05:30:03 [Balancer] distributed lock 'balancer/DAL3:27017:1328176658:1804289383' unlocked.
Thu Feb 2 05:30:03 [mongosMain] connection accepted from 127.0.0.1:60872 #23
Thu Feb 2 05:30:03 [conn23] end connection 127.0.0.1:60872