2014-04-30T16:00:06.703+0200 [mongosMain] MongoS version 2.6.1-rc0 starting: pid=7988 port=27017 64-bit host=VM01-SHARD-TEST (--help for usage)
2014-04-30T16:00:06.704+0200 [mongosMain] db version v2.6.1-rc0
2014-04-30T16:00:06.704+0200 [mongosMain] git version: a7f594977627996aa8731e936ef9c3801d512fc0
2014-04-30T16:00:06.704+0200 [mongosMain] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
2014-04-30T16:00:06.704+0200 [mongosMain] allocator: system
2014-04-30T16:00:06.704+0200 [mongosMain] options: { net: { port: 27017 }, sharding: { configDB: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" }, systemLog: { destination: "file", logAppend: true, path: "C:\bbu\SERVER\MongoDB\logs\mongos1.log" } }
2014-04-30T16:00:06.711+0200 [mongosMain] SyncClusterConnection connecting to [VM01-SHARD-TEST:27119]
2014-04-30T16:00:06.711+0200 [mongosMain] SyncClusterConnection connecting to [VM01-SHARD-TEST:27219]
2014-04-30T16:00:06.711+0200 [mongosMain] SyncClusterConnection connecting to [VM01-SHARD-TEST:27319]
2014-04-30T16:00:06.762+0200 [mongosMain] scoped connection to VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 not being returned to the pool
2014-04-30T16:00:06.769+0200 [mongosMain] SyncClusterConnection connecting to [VM01-SHARD-TEST:27119]
2014-04-30T16:00:06.769+0200 [LockPinger] creating distributed lock ping thread for VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 and process VM01-SHARD-TEST:27017:1398866406:41 (sleeping for 30000ms)
2014-04-30T16:00:06.769+0200 [LockPinger] SyncClusterConnection connecting to [VM01-SHARD-TEST:27119]
2014-04-30T16:00:06.770+0200 [mongosMain] SyncClusterConnection connecting to [VM01-SHARD-TEST:27219]
2014-04-30T16:00:06.770+0200 [LockPinger] SyncClusterConnection connecting to [VM01-SHARD-TEST:27219]
2014-04-30T16:00:06.770+0200 [mongosMain] SyncClusterConnection connecting to [VM01-SHARD-TEST:27319]
2014-04-30T16:00:06.770+0200 [LockPinger] SyncClusterConnection connecting to [VM01-SHARD-TEST:27319]
2014-04-30T16:00:07.105+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:00:06 2014 by distributed lock pinger 'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms
2014-04-30T16:00:07.309+0200 [mongosMain] distributed lock 'configUpgrade/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 536101e7eeb9507e5adc918f
2014-04-30T16:00:07.311+0200 [mongosMain] starting upgrade of config server from v0 to v5
2014-04-30T16:00:07.311+0200 [mongosMain] starting next upgrade step from v0 to v5
2014-04-30T16:00:07.311+0200 [mongosMain] about to log new metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:00:07-536101e7eeb9507e5adc9190", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866407311), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 5 } }
2014-04-30T16:00:07.446+0200 [mongosMain] creating WriteBackListener for: VM01-SHARD-TEST:27119 serverID: 000000000000000000000000
2014-04-30T16:00:07.448+0200 [mongosMain] creating WriteBackListener for: VM01-SHARD-TEST:27219 serverID: 000000000000000000000000
2014-04-30T16:00:07.450+0200 [mongosMain] creating WriteBackListener for: VM01-SHARD-TEST:27319 serverID: 000000000000000000000000
2014-04-30T16:00:10.474+0200 [mongosMain] writing initial config version at v5
2014-04-30T16:00:10.713+0200 [mongosMain] about to log new metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:00:10-536101eaeeb9507e5adc9192", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866410713), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 5 } }
2014-04-30T16:00:11.142+0200 [mongosMain] upgrade of config server to v5 successful
2014-04-30T16:00:11.291+0200 [mongosMain] distributed lock 'configUpgrade/VM01-SHARD-TEST:27017:1398866406:41' unlocked.
2014-04-30T16:00:14.241+0200 [mongosMain] scoped connection to VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 not being returned to the pool
2014-04-30T16:00:14.241+0200 [Balancer] about to contact config servers and shards
2014-04-30T16:00:14.241+0200 [Balancer] SyncClusterConnection connecting to [VM01-SHARD-TEST:27119]
2014-04-30T16:00:14.241+0200 [mongosMain] waiting for connections on port 27017
2014-04-30T16:00:14.242+0200 [Balancer] SyncClusterConnection connecting to [VM01-SHARD-TEST:27219]
2014-04-30T16:00:14.242+0200 [Balancer] SyncClusterConnection connecting to [VM01-SHARD-TEST:27319]
2014-04-30T16:00:14.243+0200 [Balancer] config servers and shards contacted successfully
2014-04-30T16:00:14.243+0200 [Balancer] balancer id: VM01-SHARD-TEST:27017 started at Apr 30 16:00:14
2014-04-30T16:00:14.244+0200 [Balancer] SyncClusterConnection connecting to [VM01-SHARD-TEST:27119]
2014-04-30T16:00:14.245+0200 [Balancer] SyncClusterConnection connecting to [VM01-SHARD-TEST:27219]
2014-04-30T16:00:14.245+0200 [Balancer] SyncClusterConnection connecting to [VM01-SHARD-TEST:27319]
2014-04-30T16:00:14.692+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 536101eeeeb9507e5adc9194
2014-04-30T16:00:14.828+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked.
2014-04-30T16:00:19.755+0200 [mongosMain] connection accepted from 127.0.0.1:6009 #1 (1 connection now open)
2014-04-30T16:00:19.760+0200 [conn1] couldn't find database [admin] in config db
2014-04-30T16:00:20.230+0200 [conn1] put [admin] on: config:VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319
2014-04-30T16:00:20.237+0200 [conn1] going to add shard: { _id: "shard_001", host: "VM01-SHARD-TEST:20117" }
2014-04-30T16:00:20.596+0200 [conn1] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:00:20-536101f4eeb9507e5adc9195", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866420596), what: "addShard", ns: "", details: { name: "shard_001", host: "VM01-SHARD-TEST:20117" } }
2014-04-30T16:00:20.968+0200 [conn1] going to add shard: { _id: "shard_002", host: "VM01-SHARD-TEST:20217" }
2014-04-30T16:00:21.345+0200 [conn1] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:00:21-536101f5eeb9507e5adc9197", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866421345), what: "addShard", ns: "", details: { name: "shard_002", host: "VM01-SHARD-TEST:20217" } }
2014-04-30T16:00:21.665+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 536101f4eeb9507e5adc9196
2014-04-30T16:00:21.884+0200 [conn1] going to add shard: { _id: "shard_003", host: "VM01-SHARD-TEST:20317" }
2014-04-30T16:00:21.885+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked.
2014-04-30T16:00:22.241+0200 [conn1] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:00:22-536101f6eeb9507e5adc9198", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866422241), what: "addShard", ns: "", details: { name: "shard_003", host: "VM01-SHARD-TEST:20317" } }
2014-04-30T16:00:22.484+0200 [conn1] going to add shard: { _id: "shard_004", host: "VM01-SHARD-TEST:20417" }
2014-04-30T16:00:22.797+0200 [conn1] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:00:22-536101f6eeb9507e5adc9199", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866422797), what: "addShard", ns: "", details: { name: "shard_004", host: "VM01-SHARD-TEST:20417" } }
2014-04-30T16:00:23.033+0200 [conn1] end connection 127.0.0.1:6009 (0 connections now open)
2014-04-30T16:00:28.161+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 536101fbeeb9507e5adc919a
2014-04-30T16:00:28.297+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked.
2014-04-30T16:00:34.553+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 53610202eeb9507e5adc919b
2014-04-30T16:00:34.690+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked.
2014-04-30T16:00:37.377+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:00:37 2014 by distributed lock pinger 'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms
2014-04-30T16:00:40.947+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 53610208eeb9507e5adc919c
2014-04-30T16:00:41.083+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked.
2014-04-30T16:00:47.343+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 5361020feeb9507e5adc919d
2014-04-30T16:00:47.479+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked.
2014-04-30T16:00:53.633+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 53610215eeb9507e5adc919e
2014-04-30T16:00:53.701+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked.
2014-04-30T16:00:56.934+0200 [mongosMain] connection accepted from 192.168.1.130:6018 #2 (1 connection now open)
2014-04-30T16:00:57.011+0200 [conn2] creating WriteBackListener for: VM01-SHARD-TEST:20117 serverID: 536101eeeeb9507e5adc9193
2014-04-30T16:00:57.013+0200 [conn2] creating WriteBackListener for: VM01-SHARD-TEST:20217 serverID: 536101eeeeb9507e5adc9193
2014-04-30T16:00:57.014+0200 [conn2] creating WriteBackListener for: VM01-SHARD-TEST:20317 serverID: 536101eeeeb9507e5adc9193
2014-04-30T16:00:57.016+0200 [conn2] creating WriteBackListener for: VM01-SHARD-TEST:20417 serverID: 536101eeeeb9507e5adc9193
2014-04-30T16:00:57.017+0200 [conn2] SyncClusterConnection connecting to [VM01-SHARD-TEST:27119]
2014-04-30T16:00:57.017+0200 [conn2] SyncClusterConnection connecting to [VM01-SHARD-TEST:27219]
2014-04-30T16:00:57.018+0200 [conn2] SyncClusterConnection connecting to [VM01-SHARD-TEST:27319]
2014-04-30T16:00:57.023+0200 [conn2] couldn't find database [ci_400000000000001] in config db
2014-04-30T16:00:57.222+0200 [conn2] put [ci_400000000000001] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:00:57.222+0200 [conn2] DROP DATABASE: ci_400000000000001
2014-04-30T16:00:57.222+0200 [conn2] erased database ci_400000000000001 from local registry
2014-04-30T16:00:57.223+0200 [conn2] DBConfig::dropDatabase: ci_400000000000001
2014-04-30T16:00:57.223+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:00:57-53610219eeb9507e5adc919f", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866457223), what: "dropDatabase.start", ns: "ci_400000000000001", details: {} }
2014-04-30T16:00:57.681+0200 [conn2] DBConfig::dropDatabase: ci_400000000000001 dropped sharded collections: 0
2014-04-30T16:00:57.729+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:00:57-53610219eeb9507e5adc91a0", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866457729), what: "dropDatabase", ns: "ci_400000000000001", details: {} }
2014-04-30T16:00:57.988+0200 [conn2] couldn't find database [ci_400000000000002] in config db
2014-04-30T16:00:58.209+0200 [conn2] put [ci_400000000000002] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:00:58.210+0200 [conn2] DROP DATABASE: ci_400000000000002
2014-04-30T16:00:58.210+0200 [conn2] erased database ci_400000000000002 from local registry
2014-04-30T16:00:58.210+0200 [conn2] DBConfig::dropDatabase: ci_400000000000002
2014-04-30T16:00:58.211+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:00:58-5361021aeeb9507e5adc91a1", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866458210), what: "dropDatabase.start", ns: "ci_400000000000002", details: {} }
2014-04-30T16:00:58.627+0200 [conn2] DBConfig::dropDatabase: ci_400000000000002 dropped sharded collections: 0
2014-04-30T16:00:58.630+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:00:58-5361021aeeb9507e5adc91a2", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866458630), what: "dropDatabase", ns: "ci_400000000000002", details: {} }
2014-04-30T16:00:59.376+0200 [conn2] couldn't find database [ci_400000000000003] in config db
2014-04-30T16:00:59.654+0200 [conn2] put [ci_400000000000003] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:00:59.654+0200 [conn2] DROP DATABASE: ci_400000000000003
2014-04-30T16:00:59.654+0200 [conn2] erased database ci_400000000000003 from local registry
2014-04-30T16:00:59.655+0200 [conn2] DBConfig::dropDatabase: ci_400000000000003
2014-04-30T16:00:59.655+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:00:59-5361021beeb9507e5adc91a3", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866459655), what: "dropDatabase.start", ns: "ci_400000000000003", details: {} }
2014-04-30T16:01:00.192+0200 [conn2] DBConfig::dropDatabase: ci_400000000000003 dropped sharded collections: 0
2014-04-30T16:01:00.195+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:00-5361021ceeb9507e5adc91a4", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866460195), what: "dropDatabase", ns: "ci_400000000000003", details: {} }
2014-04-30T16:01:00.424+0200 [conn2] couldn't find database [ci_400000000000004] in config db
2014-04-30T16:01:00.652+0200 [conn2] put [ci_400000000000004] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:00.652+0200 [conn2] DROP DATABASE: ci_400000000000004
2014-04-30T16:01:00.652+0200 [conn2] erased database ci_400000000000004 from local registry
2014-04-30T16:01:00.653+0200 [conn2] DBConfig::dropDatabase: ci_400000000000004
2014-04-30T16:01:00.653+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:00-5361021ceeb9507e5adc91a5", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866460653), what: "dropDatabase.start", ns: "ci_400000000000004", details: {} }
2014-04-30T16:01:01.142+0200 [conn2] DBConfig::dropDatabase: ci_400000000000004 dropped sharded collections: 0
2014-04-30T16:01:01.145+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:01-5361021deeb9507e5adc91a6", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866461145), what: "dropDatabase", ns: "ci_400000000000004", details: {} }
2014-04-30T16:01:01.407+0200 [conn2] couldn't find database [ci_400000000000005] in config db
2014-04-30T16:01:01.623+0200 [conn2] put [ci_400000000000005] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:01.623+0200 [conn2] DROP DATABASE: ci_400000000000005
2014-04-30T16:01:01.623+0200 [conn2] erased database ci_400000000000005 from local registry
2014-04-30T16:01:01.624+0200 [conn2] DBConfig::dropDatabase: ci_400000000000005
2014-04-30T16:01:01.624+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:01-5361021deeb9507e5adc91a7", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866461624), what: "dropDatabase.start", ns: "ci_400000000000005", details: {} }
2014-04-30T16:01:02.056+0200 [conn2] DBConfig::dropDatabase: ci_400000000000005 dropped sharded collections: 0
2014-04-30T16:01:02.059+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:02-5361021eeeb9507e5adc91a8", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866462059), what: "dropDatabase", ns: "ci_400000000000005", details: {} }
2014-04-30T16:01:02.263+0200 [conn2] couldn't find database [ci_400000000000006] in config db
2014-04-30T16:01:02.475+0200 [conn2] put [ci_400000000000006] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:02.475+0200 [conn2] DROP DATABASE: ci_400000000000006
2014-04-30T16:01:02.475+0200 [conn2] erased database ci_400000000000006 from local registry
2014-04-30T16:01:02.476+0200 [conn2] DBConfig::dropDatabase: ci_400000000000006
2014-04-30T16:01:02.476+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:02-5361021eeeb9507e5adc91a9", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866462476), what: "dropDatabase.start", ns: "ci_400000000000006", details: {} }
2014-04-30T16:01:03.080+0200 [conn2] DBConfig::dropDatabase: ci_400000000000006 dropped sharded collections: 0
2014-04-30T16:01:03.083+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:03-5361021feeb9507e5adc91aa", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866463083), what: "dropDatabase", ns: "ci_400000000000006", details: {} }
2014-04-30T16:01:03.378+0200 [conn2] couldn't find database [ci_400000000000007] in config db
2014-04-30T16:01:03.597+0200 [conn2] put [ci_400000000000007] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:03.597+0200 [conn2] DROP DATABASE: ci_400000000000007
2014-04-30T16:01:03.597+0200 [conn2] erased database ci_400000000000007 from local registry
2014-04-30T16:01:03.598+0200 [conn2] DBConfig::dropDatabase: ci_400000000000007
2014-04-30T16:01:03.598+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:03-5361021feeb9507e5adc91ab", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866463598), what: "dropDatabase.start", ns: "ci_400000000000007", details: {} }
2014-04-30T16:01:04.057+0200 [conn2] DBConfig::dropDatabase: ci_400000000000007 dropped sharded collections: 0
2014-04-30T16:01:04.060+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:04-53610220eeb9507e5adc91ac", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866464060), what: "dropDatabase", ns: "ci_400000000000007", details: {} }
2014-04-30T16:01:04.310+0200 [conn2] couldn't find database [ci_400000000000008] in config db
2014-04-30T16:01:04.512+0200 [conn2] put [ci_400000000000008] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:04.512+0200 [conn2] DROP DATABASE: ci_400000000000008
2014-04-30T16:01:04.512+0200 [conn2] erased database ci_400000000000008 from local registry
2014-04-30T16:01:04.514+0200 [conn2] DBConfig::dropDatabase: ci_400000000000008
2014-04-30T16:01:04.514+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:04-53610220eeb9507e5adc91ad", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866464514), what: "dropDatabase.start", ns: "ci_400000000000008", details: {} }
2014-04-30T16:01:05.186+0200 [conn2] DBConfig::dropDatabase: ci_400000000000008 dropped sharded collections: 0
2014-04-30T16:01:05.189+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:05-53610221eeb9507e5adc91ae", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866465189), what: "dropDatabase", ns: "ci_400000000000008", details: {} }
2014-04-30T16:01:05.428+0200 [conn2] couldn't find database [ci_400000000000009] in config db
2014-04-30T16:01:05.644+0200 [conn2] put [ci_400000000000009] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:05.644+0200 [conn2] DROP DATABASE: ci_400000000000009
2014-04-30T16:01:05.644+0200 [conn2] erased database ci_400000000000009 from local registry
2014-04-30T16:01:05.645+0200 [conn2] DBConfig::dropDatabase: ci_400000000000009
2014-04-30T16:01:05.645+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:05-53610221eeb9507e5adc91af", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866465645), what: "dropDatabase.start", ns: "ci_400000000000009", details: {} }
2014-04-30T16:01:06.174+0200 [conn2] DBConfig::dropDatabase: ci_400000000000009 dropped sharded collections: 0
2014-04-30T16:01:06.177+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:06-53610222eeb9507e5adc91b0", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866466177), what: "dropDatabase", ns: "ci_400000000000009", details: {} }
2014-04-30T16:01:06.425+0200 [conn2] couldn't find database [ci_400000000000010] in config db
2014-04-30T16:01:06.845+0200 [conn2] put [ci_400000000000010] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:06.845+0200 [conn2] DROP DATABASE: ci_400000000000010
2014-04-30T16:01:06.845+0200 [conn2] erased database ci_400000000000010 from local registry
2014-04-30T16:01:06.846+0200 [conn2] DBConfig::dropDatabase: ci_400000000000010
2014-04-30T16:01:06.846+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:06-53610222eeb9507e5adc91b1", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866466846), what: "dropDatabase.start", ns: "ci_400000000000010", details: {} }
2014-04-30T16:01:07.268+0200 [conn2] DBConfig::dropDatabase: ci_400000000000010 dropped sharded collections: 0
2014-04-30T16:01:07.272+0200 [conn2] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:07-53610223eeb9507e5adc91b2", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866467272), what: "dropDatabase", ns: "ci_400000000000010", details: {} }
2014-04-30T16:01:07.527+0200 [mongosMain] connection accepted from 192.168.1.130:6032 #3 (2 connections now open)
2014-04-30T16:01:07.527+0200 [mongosMain] connection accepted from 192.168.1.130:6033 #4 (3 connections now open)
2014-04-30T16:01:07.528+0200 [conn4] couldn't find database [ci_400000000000003] in config db
2014-04-30T16:01:07.528+0200 [mongosMain] connection accepted from 192.168.1.130:6034 #5 (4 connections now open)
2014-04-30T16:01:07.529+0200 [mongosMain] connection accepted from 192.168.1.130:6035 #6 (5 connections now open)
2014-04-30T16:01:07.529+0200 [mongosMain] connection accepted from 192.168.1.130:6036 #7 (6 connections now open)
2014-04-30T16:01:07.529+0200 [mongosMain] connection accepted from 192.168.1.130:6037 #8 (7 connections now open)
2014-04-30T16:01:07.531+0200 [mongosMain] connection accepted from 192.168.1.130:6038 #9 (8 connections now open)
2014-04-30T16:01:07.531+0200 [mongosMain] connection accepted from 192.168.1.130:6039 #10 (9 connections now open)
2014-04-30T16:01:07.531+0200 [mongosMain] connection accepted from 192.168.1.130:6040 #11 (10 connections now open)
2014-04-30T16:01:07.533+0200 [mongosMain] connection accepted from 192.168.1.130:6041 #12 (11 connections now open)
2014-04-30T16:01:07.654+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:01:07 2014 by distributed lock pinger 'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms
2014-04-30T16:01:08.083+0200 [conn4] put [ci_400000000000003] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:08.083+0200 [conn4] enabling sharding on: ci_400000000000003
2014-04-30T16:01:08.083+0200 [conn3] couldn't find database [ci_400000000000001] in config db
2014-04-30T16:01:09.038+0200 [conn3] put [ci_400000000000001] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:09.038+0200 [conn3] enabling sharding on: ci_400000000000001
2014-04-30T16:01:09.038+0200 [conn7] couldn't find database [ci_400000000000005] in config db
2014-04-30T16:01:09.340+0200 [conn7] put [ci_400000000000005] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:09.340+0200 [conn7] enabling sharding on: ci_400000000000005
2014-04-30T16:01:09.340+0200 [conn5] couldn't find database [ci_400000000000002] in config db
2014-04-30T16:01:09.624+0200 [conn5] put [ci_400000000000002] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:09.624+0200 [conn5] enabling sharding on: ci_400000000000002
2014-04-30T16:01:09.624+0200 [conn8] couldn't find database [ci_400000000000004] in config db
2014-04-30T16:01:09.850+0200 [conn8] put [ci_400000000000004] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:09.851+0200 [conn8] enabling sharding on: ci_400000000000004
2014-04-30T16:01:09.851+0200 [conn10] couldn't find database [ci_400000000000009] in config db
2014-04-30T16:01:10.073+0200 [conn10] put [ci_400000000000009] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:10.073+0200 [conn10] enabling sharding on: ci_400000000000009
2014-04-30T16:01:10.073+0200 [conn9] couldn't find database [ci_400000000000007] in config db
2014-04-30T16:01:10.357+0200 [conn9] put [ci_400000000000007] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:10.357+0200 [conn9] enabling sharding on: ci_400000000000007
2014-04-30T16:01:10.357+0200 [conn12] couldn't find database [ci_400000000000010] in config db
2014-04-30T16:01:10.568+0200 [mongosMain] connection accepted from 127.0.0.1:6042 #13 (12 connections now open)
2014-04-30T16:01:10.601+0200 [conn12] put [ci_400000000000010] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:10.601+0200 [conn12] enabling sharding on: ci_400000000000010
2014-04-30T16:01:10.601+0200 [conn6] couldn't find database [ci_400000000000006] in config db
2014-04-30T16:01:11.063+0200 [conn6] put [ci_400000000000006] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:11.063+0200 [conn6] enabling sharding on: ci_400000000000006
2014-04-30T16:01:11.063+0200 [conn11] couldn't find database [ci_400000000000008] in config db
2014-04-30T16:01:11.284+0200 [conn11] put [ci_400000000000008] on: shard_001:VM01-SHARD-TEST:20117
2014-04-30T16:01:11.284+0200 [conn11] enabling sharding on: ci_400000000000008
2014-04-30T16:01:13.701+0200 [conn4] CMD: shardcollection: { shardCollection: "ci_400000000000003.informations", key: { _id: "hashed" } }
2014-04-30T16:01:13.701+0200 [conn4] enable sharding on: ci_400000000000003.informations with shard key: { _id: "hashed" }
2014-04-30T16:01:13.701+0200 [conn4] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:13-53610229eeb9507e5adc91b3", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866473701), what: "shardCollection.start", ns: "ci_400000000000003.informations", details: { shardKey: { _id: "hashed" }, collection: "ci_400000000000003.informations", primary: "shard_001:VM01-SHARD-TEST:20117", initShards: [], numChunks: 4 } }
2014-04-30T16:01:13.919+0200 [conn4] going to create 4 chunk(s) for: ci_400000000000003.informations using new epoch 53610229eeb9507e5adc91b4
2014-04-30T16:01:14.391+0200 [conn3] CMD: shardcollection: { shardCollection: "ci_400000000000001.informations", key: { _id: "hashed" } }
2014-04-30T16:01:14.391+0200 [conn3] enable sharding on: ci_400000000000001.informations with shard key: { _id: "hashed" }
2014-04-30T16:01:14.391+0200 [conn3] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:14-5361022aeeb9507e5adc91b5", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866474391), what: "shardCollection.start", ns: "ci_400000000000001.informations", details: { shardKey: { _id: "hashed" }, collection: "ci_400000000000001.informations", primary: "shard_001:VM01-SHARD-TEST:20117", initShards: [], numChunks: 4 } }
2014-04-30T16:01:14.548+0200 [conn3] going to create 4 chunk(s) for: ci_400000000000001.informations using new epoch 5361022aeeb9507e5adc91b6
2014-04-30T16:01:14.644+0200 [conn7] CMD: shardcollection: { shardCollection: "ci_400000000000005.informations", key: { _id: "hashed" } }
2014-04-30T16:01:14.644+0200 [conn7] enable sharding on: ci_400000000000005.informations with shard key: { _id: "hashed" }
2014-04-30T16:01:14.644+0200 [conn7] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:14-5361022aeeb9507e5adc91b7", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866474644), what: "shardCollection.start", ns: "ci_400000000000005.informations", details: { shardKey: { _id: "hashed" }, collection: "ci_400000000000005.informations", primary: "shard_001:VM01-SHARD-TEST:20117", initShards: [], numChunks: 4 } }
2014-04-30T16:01:14.925+0200 [conn5] CMD: shardcollection: { shardCollection: "ci_400000000000002.informations", key: { _id: "hashed" } }
2014-04-30T16:01:14.925+0200 [conn5] enable sharding on: ci_400000000000002.informations with shard key: { _id: "hashed" }
2014-04-30T16:01:14.925+0200 [conn5] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:14-5361022aeeb9507e5adc91b8", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866474925), what: "shardCollection.start", ns: "ci_400000000000002.informations", details: { shardKey: { _id: "hashed" }, collection: "ci_400000000000002.informations", primary: "shard_001:VM01-SHARD-TEST:20117", initShards: [], numChunks: 4 } }
2014-04-30T16:01:15.167+0200 [conn8] CMD: shardcollection: { shardCollection: "ci_400000000000004.informations", key: { _id: "hashed" } }
2014-04-30T16:01:15.167+0200 [conn8] enable sharding on: ci_400000000000004.informations with shard key: { _id: "hashed" }
2014-04-30T16:01:15.167+0200 [conn8] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:15-5361022beeb9507e5adc91b9", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866475167), what: "shardCollection.start", ns: "ci_400000000000004.informations", details: { shardKey: { _id: "hashed" }, collection: "ci_400000000000004.informations", primary: "shard_001:VM01-SHARD-TEST:20117", initShards: [], numChunks: 4 } }
2014-04-30T16:01:15.441+0200 [conn10] CMD: shardcollection: { shardCollection: "ci_400000000000009.informations", key: { _id: "hashed" } }
2014-04-30T16:01:15.441+0200 [conn10] enable sharding on: ci_400000000000009.informations with shard key: { _id: "hashed" }
2014-04-30T16:01:15.441+0200 [conn10] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:15-5361022beeb9507e5adc91ba", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866475441), what: "shardCollection.start", ns: "ci_400000000000009.informations", details: { shardKey: { _id: "hashed" }, collection: "ci_400000000000009.informations", primary: "shard_001:VM01-SHARD-TEST:20117", initShards: [], numChunks: 4 } }
2014-04-30T16:01:15.640+0200 [conn7] going to create 4 chunk(s) for: ci_400000000000005.informations using new epoch 5361022beeb9507e5adc91bb
2014-04-30T16:01:15.681+0200 [conn9] CMD: shardcollection: { shardCollection: "ci_400000000000007.informations", key: { _id: "hashed" } }
2014-04-30T16:01:15.681+0200 [conn9] enable sharding on: ci_400000000000007.informations with shard key: { _id: "hashed" }
2014-04-30T16:01:15.681+0200 [conn9] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:15-5361022beeb9507e5adc91bc", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866475681), what: "shardCollection.start", ns: "ci_400000000000007.informations", details: { shardKey: { _id: "hashed" }, collection: "ci_400000000000007.informations", primary: "shard_001:VM01-SHARD-TEST:20117", initShards: [], numChunks: 4 } }
2014-04-30T16:01:15.962+0200 [conn12] CMD: shardcollection: { shardCollection: "ci_400000000000010.informations", key: { _id: "hashed" } }
2014-04-30T16:01:15.962+0200 [conn12] enable sharding on: ci_400000000000010.informations with shard key: { _id: "hashed" }
2014-04-30T16:01:15.962+0200 [conn12] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:15-5361022beeb9507e5adc91bd", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866475962), what: "shardCollection.start", ns: "ci_400000000000010.informations", details: { shardKey: { _id: "hashed" }, collection: "ci_400000000000010.informations", primary: "shard_001:VM01-SHARD-TEST:20117", initShards: [], numChunks: 4 } }
2014-04-30T16:01:16.094+0200 [conn8] going to create 4 chunk(s) for: ci_400000000000004.informations using new epoch 5361022ceeb9507e5adc91be
2014-04-30T16:01:16.094+0200 [conn10] going to create 4 chunk(s) for: ci_400000000000009.informations using new epoch 5361022ceeb9507e5adc91bf
2014-04-30T16:01:16.094+0200 [conn5] going to create 4 chunk(s) for: ci_400000000000002.informations using new epoch 5361022ceeb9507e5adc91c0
2014-04-30T16:01:16.094+0200 [conn5] SyncClusterConnection connecting to [VM01-SHARD-TEST:27119]
2014-04-30T16:01:16.095+0200 [conn5] SyncClusterConnection connecting to [VM01-SHARD-TEST:27219]
2014-04-30T16:01:16.096+0200 [conn5] SyncClusterConnection connecting to [VM01-SHARD-TEST:27319]
2014-04-30T16:01:16.353+0200 [conn6] CMD: shardcollection: { shardCollection: "ci_400000000000006.informations", key: { _id: "hashed" } }
2014-04-30T16:01:16.354+0200 [conn6] enable sharding on: ci_400000000000006.informations with shard key: { _id: "hashed" }
2014-04-30T16:01:16.354+0200 [conn6] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:16-5361022ceeb9507e5adc91c1", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866476354), what: "shardCollection.start", ns: "ci_400000000000006.informations", details: { shardKey: { _id: "hashed" }, collection: "ci_400000000000006.informations", primary: "shard_001:VM01-SHARD-TEST:20117", initShards: [], numChunks: 4 } }
2014-04-30T16:01:16.609+0200 [conn11] CMD: shardcollection: { shardCollection: "ci_400000000000008.informations", key: { _id: "hashed" } }
2014-04-30T16:01:16.609+0200 [conn11] enable sharding on: ci_400000000000008.informations with shard key: { _id: "hashed" }
2014-04-30T16:01:16.609+0200 [conn11] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:16-5361022ceeb9507e5adc91c2", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866476609), what: "shardCollection.start", ns: "ci_400000000000008.informations", details: { shardKey: { _id: "hashed" }, collection: "ci_400000000000008.informations", primary: "shard_001:VM01-SHARD-TEST:20117", initShards: [], numChunks: 4 } }
2014-04-30T16:01:17.769+0200 [conn4] ChunkManager: time to load chunks for ci_400000000000003.informations: 1673ms sequenceNumber: 2 version: 1|3||53610229eeb9507e5adc91b4 based on: (empty)
2014-04-30T16:01:17.854+0200 [conn12] going to create 4 chunk(s) for: ci_400000000000010.informations using new epoch 5361022deeb9507e5adc91c3
2014-04-30T16:01:17.854+0200 [conn9] going to create 4 chunk(s) for: ci_400000000000007.informations using new epoch 5361022deeb9507e5adc91c4
2014-04-30T16:01:19.907+0200 [conn6] going to create 4 chunk(s) for: ci_400000000000006.informations using new epoch 5361022feeb9507e5adc91c5
2014-04-30T16:01:20.429+0200 [conn11] going to create 4 chunk(s) for: ci_400000000000008.informations using new epoch 53610230eeb9507e5adc91c6
2014-04-30T16:01:22.448+0200 [conn3] ChunkManager: time to load chunks for ci_400000000000001.informations: 367ms sequenceNumber: 3 version: 1|3||5361022aeeb9507e5adc91b6 based on: (empty)
2014-04-30T16:01:24.441+0200 [conn4] resetting shard version of ci_400000000000003.informations on VM01-SHARD-TEST:20217, version is zero
2014-04-30T16:01:24.441+0200 [conn4] resetting shard version of ci_400000000000003.informations on VM01-SHARD-TEST:20317, version is zero
2014-04-30T16:01:24.442+0200 [conn4] resetting shard version of ci_400000000000003.informations on VM01-SHARD-TEST:20417, version is zero
2014-04-30T16:01:24.442+0200 [conn4] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:24-53610234eeb9507e5adc91c7", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866484442), what: "shardCollection", ns: "ci_400000000000003.informations", details: { version: "1|3||53610229eeb9507e5adc91b4" } }
2014-04-30T16:01:27.340+0200 [conn4] moving chunk ns: ci_400000000000003.informations moving ( ns: ci_400000000000003.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217
2014-04-30T16:01:27.341+0200 [conn5] ChunkManager: time to load chunks for ci_400000000000002.informations: 0ms sequenceNumber: 7 version: 1|3||5361022ceeb9507e5adc91c0 based on: (empty)
2014-04-30T16:01:27.761+0200 [conn10] ChunkManager: time to load chunks for ci_400000000000009.informations: 0ms sequenceNumber: 6 version: 1|3||5361022ceeb9507e5adc91bf based on: (empty)
2014-04-30T16:01:28.036+0200 [conn8] SyncClusterConnection connecting to [VM01-SHARD-TEST:27119]
2014-04-30T16:01:28.036+0200 [conn7] ChunkManager: time to load chunks for ci_400000000000005.informations: 0ms sequenceNumber: 4 version: 1|3||5361022beeb9507e5adc91bb based on: (empty)
2014-04-30T16:01:28.037+0200 [conn8] SyncClusterConnection connecting to [VM01-SHARD-TEST:27219]
2014-04-30T16:01:28.038+0200 [conn8] SyncClusterConnection connecting to [VM01-SHARD-TEST:27319]
2014-04-30T16:01:28.206+0200 [conn8] ChunkManager: time to load chunks for ci_400000000000004.informations: 170ms sequenceNumber: 5 version: 1|3||5361022ceeb9507e5adc91be based on: (empty)
2014-04-30T16:01:28.466+0200 [conn3] resetting shard version of ci_400000000000001.informations on VM01-SHARD-TEST:20217, version is zero
2014-04-30T16:01:28.468+0200 [conn3] resetting shard version of ci_400000000000001.informations on VM01-SHARD-TEST:20317, version is zero
2014-04-30T16:01:28.469+0200 [conn9] ChunkManager: time to load chunks for ci_400000000000007.informations: 0ms sequenceNumber: 9 version: 1|3||5361022deeb9507e5adc91c4 based on: (empty)
2014-04-30T16:01:28.470+0200 [conn12] ChunkManager: time to load chunks for ci_400000000000010.informations: 0ms sequenceNumber: 8 version: 1|3||5361022deeb9507e5adc91c3 based on: (empty)
2014-04-30T16:01:28.470+0200 [conn3] resetting shard version of ci_400000000000001.informations on VM01-SHARD-TEST:20417, version is zero
2014-04-30T16:01:28.470+0200 [conn3] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:28-53610238eeb9507e5adc91c8", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866488470), what: "shardCollection", ns: "ci_400000000000001.informations", details: { version: "1|3||5361022aeeb9507e5adc91b6" } }
2014-04-30T16:01:29.215+0200 [conn11] ChunkManager: time to load chunks for ci_400000000000008.informations: 0ms sequenceNumber: 11 version: 1|3||53610230eeb9507e5adc91c6 based on: (empty)
2014-04-30T16:01:29.362+0200 [conn5] resetting shard version of ci_400000000000002.informations on VM01-SHARD-TEST:20217, version is zero
2014-04-30T16:01:29.364+0200 [conn5] resetting shard version of ci_400000000000002.informations on VM01-SHARD-TEST:20317, version is zero
2014-04-30T16:01:29.366+0200 [conn5] resetting shard version of ci_400000000000002.informations on VM01-SHARD-TEST:20417, version is zero
2014-04-30T16:01:29.366+0200 [conn5] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:29-53610239eeb9507e5adc91c9", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866489366), what: "shardCollection", ns: "ci_400000000000002.informations", details: { version: "1|3||5361022ceeb9507e5adc91c0" } }
2014-04-30T16:01:29.620+0200 [conn6] ChunkManager: time to load chunks for ci_400000000000006.informations: 0ms sequenceNumber: 10 version: 1|3||5361022feeb9507e5adc91c5 based on: (empty)
2014-04-30T16:01:29.620+0200 [conn3] moving chunk ns: ci_400000000000001.informations moving ( ns: ci_400000000000001.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217
2014-04-30T16:01:29.963+0200 [conn5] moving chunk ns: ci_400000000000002.informations moving ( ns: ci_400000000000002.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217
2014-04-30T16:01:29.969+0200 [conn9] resetting shard version of ci_400000000000007.informations on VM01-SHARD-TEST:20217, version is zero
2014-04-30T16:01:29.972+0200 [conn10] resetting shard version of ci_400000000000009.informations on VM01-SHARD-TEST:20217, version is zero
2014-04-30T16:01:29.972+0200 [conn8] resetting shard version of ci_400000000000004.informations on VM01-SHARD-TEST:20217, version is zero
2014-04-30T16:01:29.972+0200 [conn9] resetting shard version of ci_400000000000007.informations on VM01-SHARD-TEST:20317, version is zero
2014-04-30T16:01:29.972+0200 [conn7] resetting shard version of ci_400000000000005.informations on VM01-SHARD-TEST:20217, version is zero
2014-04-30T16:01:29.972+0200 [conn12] resetting shard version of ci_400000000000010.informations on VM01-SHARD-TEST:20217, version is zero
2014-04-30T16:01:29.972+0200 [conn8] resetting shard version of ci_400000000000004.informations on VM01-SHARD-TEST:20317, version is zero
2014-04-30T16:01:29.973+0200 [conn10] resetting shard version of ci_400000000000009.informations on VM01-SHARD-TEST:20317, version is zero
2014-04-30T16:01:29.973+0200 [conn9] resetting shard version of ci_400000000000007.informations on VM01-SHARD-TEST:20417, version is zero
2014-04-30T16:01:29.973+0200 [conn9] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:29-53610239eeb9507e5adc91ca", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866489973), what: "shardCollection", ns: "ci_400000000000007.informations", details: { version: "1|3||5361022deeb9507e5adc91c4" } }
2014-04-30T16:01:29.974+0200 [conn12] resetting shard version of ci_400000000000010.informations on VM01-SHARD-TEST:20317, version is zero
2014-04-30T16:01:29.974+0200 [conn8] resetting shard version of ci_400000000000004.informations on VM01-SHARD-TEST:20417, version is zero
2014-04-30T16:01:29.974+0200 [conn10] resetting shard version of ci_400000000000009.informations on VM01-SHARD-TEST:20417, version is zero
2014-04-30T16:01:29.974+0200 [conn10] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:29-53610239eeb9507e5adc91cb", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866489974), what: "shardCollection", ns: "ci_400000000000009.informations", details: { version: "1|3||5361022ceeb9507e5adc91bf" } }
2014-04-30T16:01:29.974+0200 [conn8] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:29-53610239eeb9507e5adc91cc", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866489974), what: "shardCollection", ns: "ci_400000000000004.informations", details: { version: "1|3||5361022ceeb9507e5adc91be" } }
2014-04-30T16:01:29.975+0200 [conn11] resetting shard version of ci_400000000000008.informations on VM01-SHARD-TEST:20217, version is zero
2014-04-30T16:01:29.975+0200 [conn7] resetting shard version of ci_400000000000005.informations on VM01-SHARD-TEST:20317, version is zero
2014-04-30T16:01:29.975+0200 [conn12] resetting shard version of ci_400000000000010.informations on VM01-SHARD-TEST:20417, version is zero
2014-04-30T16:01:29.975+0200 [conn12] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:29-53610239eeb9507e5adc91cd", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866489975), what: "shardCollection", ns: "ci_400000000000010.informations", details: { version: "1|3||5361022deeb9507e5adc91c3" } }
2014-04-30T16:01:29.976+0200 [conn7] resetting shard version of ci_400000000000005.informations on VM01-SHARD-TEST:20417, version is zero
2014-04-30T16:01:29.976+0200 [conn11] resetting shard version of ci_400000000000008.informations on VM01-SHARD-TEST:20317, version is zero
2014-04-30T16:01:29.976+0200 [conn7] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:29-53610239eeb9507e5adc91ce", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866489976), what: "shardCollection", ns: "ci_400000000000005.informations", details: { version: "1|3||5361022beeb9507e5adc91bb" } }
2014-04-30T16:01:29.977+0200 [conn11] resetting shard version of ci_400000000000008.informations on VM01-SHARD-TEST:20417, version is zero
2014-04-30T16:01:29.977+0200 [conn11] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:29-53610239eeb9507e5adc91cf", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866489977), what: "shardCollection", ns: "ci_400000000000008.informations", details: { version: "1|3||53610230eeb9507e5adc91c6" } }
2014-04-30T16:01:30.414+0200 [conn7] SyncClusterConnection connecting to [VM01-SHARD-TEST:27119]
2014-04-30T16:01:30.414+0200 [conn12] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217
2014-04-30T16:01:30.414+0200 [conn8] moving chunk ns: ci_400000000000004.informations moving ( ns: ci_400000000000004.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217
2014-04-30T16:01:30.414+0200 [conn10] moving chunk ns: ci_400000000000009.informations moving ( ns: ci_400000000000009.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217
2014-04-30T16:01:30.415+0200 [conn7] SyncClusterConnection connecting to [VM01-SHARD-TEST:27219]
2014-04-30T16:01:30.418+0200 [conn7] SyncClusterConnection connecting to [VM01-SHARD-TEST:27319]
2014-04-30T16:01:30.418+0200 [conn9] moving chunk ns: ci_400000000000007.informations moving ( ns: ci_400000000000007.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217
2014-04-30T16:01:30.420+0200 [conn7] moving chunk ns: ci_400000000000005.informations moving ( ns: ci_400000000000005.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217
2014-04-30T16:01:30.420+0200 [conn6] resetting shard version of ci_400000000000006.informations on VM01-SHARD-TEST:20217, version is zero
2014-04-30T16:01:30.421+0200 [conn6] resetting shard version of ci_400000000000006.informations on VM01-SHARD-TEST:20317, version is zero
2014-04-30T16:01:30.421+0200 [conn6] resetting shard version of ci_400000000000006.informations on VM01-SHARD-TEST:20417, version is zero
2014-04-30T16:01:30.422+0200 [conn6] about to log metadata event: { _id: "VM01-SHARD-TEST-2014-04-30T14:01:30-5361023aeeb9507e5adc91d0", server: "VM01-SHARD-TEST", clientAddr: "N/A", time: new Date(1398866490422), what: "shardCollection", ns: "ci_400000000000006.informations", details: { version: "1|3||5361022feeb9507e5adc91c5" } }
2014-04-30T16:01:31.296+0200 [conn11] moving chunk ns: ci_400000000000008.informations moving ( ns: ci_400000000000008.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217
2014-04-30T16:01:31.309+0200 [conn6] moving chunk ns: ci_400000000000006.informations moving ( ns: ci_400000000000006.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217
2014-04-30T16:01:31.964+0200 [conn12] moveChunk result: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:31.964+0200 [conn3] moveChunk result: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:31.964+0200 [conn5] moveChunk result: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:31.965+0200 [conn12] warning: Couldn't move chunk 00000000057D7680 to shard shard_002:VM01-SHARD-TEST:20217 while sharding collection ci_400000000000010.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:31.965+0200 [conn12] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|2||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317
2014-04-30T16:01:31.965+0200 [conn3] warning: Couldn't move chunk 0000000002041E30 to shard shard_002:VM01-SHARD-TEST:20217 while sharding collection ci_400000000000001.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:31.965+0200 [conn5] warning: Couldn't move chunk 00000000020424C0 to shard shard_002:VM01-SHARD-TEST:20217 while sharding collection ci_400000000000002.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:31.965+0200 [conn3] moving chunk ns: ci_400000000000001.informations moving ( ns: ci_400000000000001.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|2||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317
2014-04-30T16:01:31.965+0200 [conn5] moving chunk ns: ci_400000000000002.informations moving ( ns: ci_400000000000002.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|2||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317
2014-04-30T16:01:32.185+0200 [conn7] moveChunk result: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:32.185+0200 [conn10] moveChunk result: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:32.185+0200 [conn9] moveChunk result: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:32.186+0200 [conn10] warning: Couldn't move chunk 000000000204CEA0 to shard shard_002:VM01-SHARD-TEST:20217 while sharding collection ci_400000000000009.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:32.186+0200 [conn7] warning: Couldn't move chunk 000000000204D3E0 to shard shard_002:VM01-SHARD-TEST:20217 while sharding collection ci_400000000000005.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:32.186+0200 [conn10] moving chunk ns: ci_400000000000009.informations moving ( ns: ci_400000000000009.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|2||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317
2014-04-30T16:01:32.186+0200 [conn7] moving chunk ns: ci_400000000000005.informations moving ( ns: ci_400000000000005.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|2||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317
2014-04-30T16:01:32.186+0200 [conn9] warning: Couldn't move chunk 00000000057D4F00 to shard shard_002:VM01-SHARD-TEST:20217 while sharding collection ci_400000000000007.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:32.186+0200 [conn9] moving chunk ns: ci_400000000000007.informations moving ( ns: ci_400000000000007.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|2||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317
2014-04-30T16:01:32.408+0200 [conn6] moveChunk result: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:32.408+0200 [conn9] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:32.409+0200 [conn6] warning: Couldn't move chunk 000000000203F440 to shard shard_002:VM01-SHARD-TEST:20217 while sharding collection ci_400000000000006.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: -4611686018427387900 }" }
2014-04-30T16:01:32.409+0200 [conn6] moving chunk ns: ci_400000000000006.informations moving ( ns: ci_400000000000006.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|2||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317
2014-04-30T16:01:32.410+0200 [conn9] warning: Couldn't move chunk 00000000057D5050 to shard shard_003:VM01-SHARD-TEST:20317 while sharding collection ci_400000000000007.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:32.410+0200 [conn9] moving chunk ns: ci_400000000000007.informations moving ( ns: ci_400000000000007.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|3||000000000000000000000000, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417
2014-04-30T16:01:32.412+0200 [conn7] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:33.395+0200 [conn7] warning: Couldn't move chunk 000000000204D530 to shard shard_003:VM01-SHARD-TEST:20317 while sharding collection ci_400000000000005.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:33.395+0200 [conn7] moving chunk ns: ci_400000000000005.informations moving ( ns: ci_400000000000005.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|3||000000000000000000000000, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417
2014-04-30T16:01:33.753+0200 [conn6] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:33.754+0200 [conn6] warning: Couldn't move chunk 000000000203F590 to shard shard_003:VM01-SHARD-TEST:20317 while sharding collection ci_400000000000006.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:33.754+0200 [conn6] moving chunk ns: ci_400000000000006.informations moving ( ns: ci_400000000000006.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|3||000000000000000000000000, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417
2014-04-30T16:01:33.982+0200 [conn9] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:33.983+0200 [conn10] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:33.983+0200 [conn7] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:33.983+0200 [conn9] warning: Couldn't move chunk 00000000057D51A0 to shard shard_004:VM01-SHARD-TEST:20417 while sharding collection ci_400000000000007.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:33.984+0200 [conn10] warning: Couldn't move chunk 000000000204CFF0 to shard shard_003:VM01-SHARD-TEST:20317 while sharding collection ci_400000000000009.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:33.984+0200 [conn10] moving chunk ns: ci_400000000000009.informations moving ( ns: ci_400000000000009.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|3||000000000000000000000000, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417
2014-04-30T16:01:33.984+0200 [conn7] warning: Couldn't move chunk 000000000204D680 to shard shard_004:VM01-SHARD-TEST:20417 while sharding collection ci_400000000000005.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:33.984+0200 [conn11] moveChunk result: { ok: 0.0, errmsg: "error locking distributed lock for migration migrate-{ _id: -4611686018427387900 } :: caused by :: 13661 distributed lock ci_400000000000008.customer_..." }
2014-04-30T16:01:33.987+0200 [conn11] warning: Couldn't move chunk 00000000057D7A70 to shard shard_002:VM01-SHARD-TEST:20217 while sharding collection ci_400000000000008.informations. Reason: { ok: 0.0, errmsg: "error locking distributed lock for migration migrate-{ _id: -4611686018427387900 } :: caused by :: 13661 distributed lock ci_400000000000008.customer_..." }
2014-04-30T16:01:33.987+0200 [conn11] moving chunk ns: ci_400000000000008.informations moving ( ns: ci_400000000000008.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|2||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317
2014-04-30T16:01:34.642+0200 [conn6] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:34.642+0200 [conn3] moveChunk result: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: 0 }" }
2014-04-30T16:01:34.642+0200 [conn6] warning: Couldn't move chunk 000000000203F6E0 to shard shard_004:VM01-SHARD-TEST:20417 while sharding collection ci_400000000000006.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:34.642+0200 [conn3] warning: Couldn't move chunk 0000000002041F80 to shard shard_003:VM01-SHARD-TEST:20317 while sharding collection ci_400000000000001.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: 0 }" }
2014-04-30T16:01:34.642+0200 [conn10] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:34.643+0200 [conn3] moving chunk ns: ci_400000000000001.informations moving ( ns: ci_400000000000001.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|3||000000000000000000000000, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417
2014-04-30T16:01:34.643+0200 [conn10] warning: Couldn't move chunk 000000000204D140 to shard shard_004:VM01-SHARD-TEST:20417 while sharding collection ci_400000000000009.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:35.702+0200 [conn12] moveChunk result: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: 0 }" }
2014-04-30T16:01:35.702+0200 [conn12] warning: Couldn't move chunk 00000000057D77D0 to shard shard_003:VM01-SHARD-TEST:20317 while sharding collection ci_400000000000010.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: 0 }" }
2014-04-30T16:01:35.703+0200 [conn12] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|3||000000000000000000000000, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417
2014-04-30T16:01:35.838+0200 [conn11] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:35.838+0200 [conn8] moveChunk result: { ok: 0.0, errmsg: "moveChunk is already in progress from this shard" }
2014-04-30T16:01:35.839+0200 [conn3] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:36.058+0200 [conn11] warning: Couldn't move chunk 00000000057D7BC0 to shard shard_003:VM01-SHARD-TEST:20317 while sharding collection ci_400000000000008.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:36.058+0200 [conn11] moving chunk ns: ci_400000000000008.informations moving ( ns: ci_400000000000008.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|3||000000000000000000000000, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417
2014-04-30T16:01:36.059+0200 [conn3] warning: Couldn't move chunk 00000000020420D0 to shard shard_004:VM01-SHARD-TEST:20417 while sharding collection ci_400000000000001.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:36.059+0200 [conn8] warning: Couldn't move chunk 00000000057D45D0 to shard shard_002:VM01-SHARD-TEST:20217 while sharding collection ci_400000000000004.informations. Reason: { ok: 0.0, errmsg: "moveChunk is already in progress from this shard" }
2014-04-30T16:01:36.059+0200 [conn8] moving chunk ns: ci_400000000000004.informations moving ( ns: ci_400000000000004.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|2||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317
2014-04-30T16:01:37.152+0200 [conn12] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:37.153+0200 [conn12] warning: Couldn't move chunk 00000000057D7920 to shard shard_004:VM01-SHARD-TEST:20417 while sharding collection ci_400000000000010.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:37.187+0200 [conn10] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000009.informations", keyPattern: { _id: "hashed" }, min: { _id: MinKey }, max: { _id: -4611686018427387900 }, from: "shard_001", splitKeys: [ { _id: -6917529027641081850 } ], shardId: "ci_400000000000009.informations-_id_MinKey", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
2014-04-30T16:01:37.187+0200 [conn9] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000007.informations", keyPattern: { _id: "hashed" }, min: { _id: MinKey }, max: { _id: -4611686018427387900 }, from: "shard_001", splitKeys: [ { _id: -6917529027641081850 } ], shardId: "ci_400000000000007.informations-_id_MinKey", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
2014-04-30T16:01:37.187+0200 [conn10] warning: Couldn't split chunk 00000000020428B0 while sharding collection ci_400000000000009.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
2014-04-30T16:01:37.187+0200 [conn9] warning: Couldn't split chunk 00000000057D4DB0 while sharding collection ci_400000000000007.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
2014-04-30T16:01:37.188+0200 [conn6] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000006.informations", keyPattern: { _id: "hashed" }, min: { _id: MinKey }, max: { _id: -4611686018427387900 }, from: "shard_001", splitKeys: [ { _id: -6917529027641081850 } ], shardId: "ci_400000000000006.informations-_id_MinKey", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
2014-04-30T16:01:37.188+0200 [conn6] warning: Couldn't split chunk 00000000057D84F0 while sharding collection ci_400000000000006.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
2014-04-30T16:01:37.598+0200 [conn4] ChunkManager: time to load chunks for ci_400000000000003.informations: 0ms sequenceNumber: 12 version: 2|1||53610229eeb9507e5adc91b4 based on: 1|3||53610229eeb9507e5adc91b4
2014-04-30T16:01:37.598+0200 [conn4] moving chunk ns: ci_400000000000003.informations moving ( ns: ci_400000000000003.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|2||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317
2014-04-30T16:01:38.223+0200 [conn4] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:38.223+0200 [conn4] warning: Couldn't move chunk 000000000203A660 to shard shard_003:VM01-SHARD-TEST:20317 while sharding collection ci_400000000000003.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:38.224+0200 [conn4] moving chunk ns: ci_400000000000003.informations moving ( ns: ci_400000000000003.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|3||000000000000000000000000, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417
2014-04-30T16:01:38.405+0200 [conn11] moveChunk result: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: 4611686018427387900 }" }
2014-04-30T16:01:38.744+0200 [conn3] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000001.informations", keyPattern: { _id: "hashed" }, min: { _id: MinKey }, max: { _id: -4611686018427387900 }, from: "shard_001", splitKeys: [ { _id: -6917529027641081850 } ], shardId: "ci_400000000000001.informations-_id_MinKey", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { ok: 0.0, errmsg: "Error locking distributed lock for split. :: caused by :: 13661 distributed lock ci_400000000000001.informations/VM01-SHARD-TEST:20117:1398866487:41..." }
2014-04-30T16:01:38.745+0200 [conn3] warning: Couldn't split chunk 0000000002041B90 while sharding collection ci_400000000000001.informations. Reason: { ok: 0.0, errmsg: "Error locking distributed lock for split. :: caused by :: 13661 distributed lock ci_400000000000001.informations/VM01-SHARD-TEST:20117:1398866487:41..." }
2014-04-30T16:01:39.559+0200 [conn11] warning: Couldn't move chunk 00000000057D7D10 to shard shard_004:VM01-SHARD-TEST:20417 while sharding collection ci_400000000000008.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: 4611686018427387900 }" }
2014-04-30T16:01:39.560+0200 [conn6] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000006.informations", keyPattern: { _id: "hashed" }, min: { _id: -4611686018427387900 }, max: { _id: 0 }, from: "shard_001", splitKeys: [ { _id: -2305843009213693950 } ], shardId: "ci_400000000000006.informations-_id_-4611686018427387900", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
2014-04-30T16:01:39.560+0200 [conn6] warning: Couldn't split chunk 000000000203F440 while sharding collection ci_400000000000006.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
2014-04-30T16:01:39.560+0200 [conn9] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000007.informations", keyPattern: { _id: "hashed" }, min: { _id: -4611686018427387900 }, max: { _id: 0 }, from: "shard_001", splitKeys: [ { _id: -2305843009213693950 } ], shardId: "ci_400000000000007.informations-_id_-4611686018427387900", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
2014-04-30T16:01:39.560+0200 [conn9] warning: Couldn't split chunk 00000000057D4F00 while sharding collection ci_400000000000007.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
2014-04-30T16:01:39.571+0200 [conn4] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" }
2014-04-30T16:01:39.571+0200 [conn4] warning: Couldn't move chunk 000000000203A7B0 to shard shard_004:VM01-SHARD-TEST:20417 while sharding collection ci_400000000000003.informations.
Reason: { ok: 0.0, errmsg: "migration already in progress" } 2014-04-30T16:01:39.707+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:01:37 2014 by distributed lock pinger 'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms 2014-04-30T16:01:40.053+0200 [conn12] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000010.informations", keyPattern: { _id: "hashed" }, min: { _id: MinKey }, max: { _id: -4611686018427387900 }, from: "shard_001", splitKeys: [ { _id: -6917529027641081850 } ], shardId: "ci_400000000000010.informations-_id_MinKey", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { ok: 0.0, errmsg: "Error locking distributed lock for split. :: caused by :: 13661 distributed lock ci_400000000000010.informations/VM01-SHARD-TEST:20117:1398866487:41..." } 2014-04-30T16:01:40.053+0200 [conn12] warning: Couldn't split chunk 00000000057D52F0 while sharding collection ci_400000000000010.informations. Reason: { ok: 0.0, errmsg: "Error locking distributed lock for split. :: caused by :: 13661 distributed lock ci_400000000000010.informations/VM01-SHARD-TEST:20117:1398866487:41..." 
} 2014-04-30T16:01:40.364+0200 [conn8] moveChunk result: { ok: 0.0, errmsg: "moveChunk is already in progress from this shard" } 2014-04-30T16:01:40.364+0200 [conn11] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000008.informations", keyPattern: { _id: "hashed" }, min: { _id: MinKey }, max: { _id: -4611686018427387900 }, from: "shard_001", splitKeys: [ { _id: -6917529027641081850 } ], shardId: "ci_400000000000008.informations-_id_MinKey", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:40.364+0200 [conn11] warning: Couldn't split chunk 00000000057D4C60 while sharding collection ci_400000000000008.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:40.364+0200 [conn8] warning: Couldn't move chunk 00000000057D4720 to shard shard_003:VM01-SHARD-TEST:20317 while sharding collection ci_400000000000004.informations. 
Reason: { ok: 0.0, errmsg: "moveChunk is already in progress from this shard" } 2014-04-30T16:01:40.364+0200 [conn8] moving chunk ns: ci_400000000000004.informations moving ( ns: ci_400000000000004.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|3||000000000000000000000000, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:01:40.366+0200 [conn12] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000010.informations", keyPattern: { _id: "hashed" }, min: { _id: -4611686018427387900 }, max: { _id: 0 }, from: "shard_001", splitKeys: [ { _id: -2305843009213693950 } ], shardId: "ci_400000000000010.informations-_id_-4611686018427387900", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:40.366+0200 [conn12] warning: Couldn't split chunk 00000000057D7680 while sharding collection ci_400000000000010.informations. 
Reason: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:40.367+0200 [conn12] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000010.informations", keyPattern: { _id: "hashed" }, min: { _id: 0 }, max: { _id: 4611686018427387900 }, from: "shard_001", splitKeys: [ { _id: 2305843009213693950 } ], shardId: "ci_400000000000010.informations-_id_0", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:40.367+0200 [conn12] warning: Couldn't split chunk 00000000057D77D0 while sharding collection ci_400000000000010.informations. 
Reason: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:40.367+0200 [conn12] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000010.informations", keyPattern: { _id: "hashed" }, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }, from: "shard_001", splitKeys: [ { _id: 6917529027641081850 } ], shardId: "ci_400000000000010.informations-_id_4611686018427387900", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:40.367+0200 [conn12] warning: Couldn't split chunk 00000000057D7920 while sharding collection ci_400000000000010.informations. 
Reason: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:40.556+0200 [conn10] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000009.informations", keyPattern: { _id: "hashed" }, min: { _id: -4611686018427387900 }, max: { _id: 0 }, from: "shard_001", splitKeys: [ { _id: -2305843009213693950 } ], shardId: "ci_400000000000009.informations-_id_-4611686018427387900", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { ok: 0.0, errmsg: "Error locking distributed lock for split. :: caused by :: 13661 distributed lock ci_400000000000009.informations/VM01-SHARD-TEST:20117:1398866487:41..." } 2014-04-30T16:01:40.556+0200 [conn10] warning: Couldn't split chunk 000000000204CEA0 while sharding collection ci_400000000000009.informations. Reason: { ok: 0.0, errmsg: "Error locking distributed lock for split. :: caused by :: 13661 distributed lock ci_400000000000009.informations/VM01-SHARD-TEST:20117:1398866487:41..." } 2014-04-30T16:01:41.113+0200 [conn9] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000007.informations", keyPattern: { _id: "hashed" }, min: { _id: 0 }, max: { _id: 4611686018427387900 }, from: "shard_001", splitKeys: [ { _id: 2305843009213693950 } ], shardId: "ci_400000000000007.informations-_id_0", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { ok: 0.0, errmsg: "Error locking distributed lock for split. :: caused by :: 13661 distributed lock ci_400000000000007.informations/VM01-SHARD-TEST:20117:1398866487:41..." } 2014-04-30T16:01:41.113+0200 [conn9] warning: Couldn't split chunk 00000000057D5050 while sharding collection ci_400000000000007.informations. 
Reason: { ok: 0.0, errmsg: "Error locking distributed lock for split. :: caused by :: 13661 distributed lock ci_400000000000007.informations/VM01-SHARD-TEST:20117:1398866487:41..." } 2014-04-30T16:01:41.314+0200 [conn8] moveChunk result: { ok: 0.0, errmsg: "migration already in progress" } 2014-04-30T16:01:41.314+0200 [conn8] warning: Couldn't move chunk 00000000057D4870 to shard shard_004:VM01-SHARD-TEST:20417 while sharding collection ci_400000000000004.informations. Reason: { ok: 0.0, errmsg: "migration already in progress" } 2014-04-30T16:01:41.485+0200 [conn7] ChunkManager: time to load chunks for ci_400000000000005.informations: 0ms sequenceNumber: 13 version: 1|5||5361022beeb9507e5adc91bb based on: 1|3||5361022beeb9507e5adc91bb 2014-04-30T16:01:41.581+0200 [conn6] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000006.informations", keyPattern: { _id: "hashed" }, min: { _id: 0 }, max: { _id: 4611686018427387900 }, from: "shard_001", splitKeys: [ { _id: 2305843009213693950 } ], shardId: "ci_400000000000006.informations-_id_0", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { ok: 0.0, errmsg: "Error locking distributed lock for split. :: caused by :: 13661 distributed lock ci_400000000000006.informations/VM01-SHARD-TEST:20117:1398866487:41..." } 2014-04-30T16:01:41.582+0200 [conn6] warning: Couldn't split chunk 000000000203F590 while sharding collection ci_400000000000006.informations. Reason: { ok: 0.0, errmsg: "Error locking distributed lock for split. :: caused by :: 13661 distributed lock ci_400000000000006.informations/VM01-SHARD-TEST:20117:1398866487:41..." 
} 2014-04-30T16:01:41.911+0200 [conn5] ChunkManager: time to load chunks for ci_400000000000002.informations: 0ms sequenceNumber: 14 version: 2|1||5361022ceeb9507e5adc91c0 based on: 1|3||5361022ceeb9507e5adc91c0 2014-04-30T16:01:41.911+0200 [conn5] moving chunk ns: ci_400000000000002.informations moving ( ns: ci_400000000000002.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|3||000000000000000000000000, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:01:41.947+0200 [conn10] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000009.informations", keyPattern: { _id: "hashed" }, min: { _id: 0 }, max: { _id: 4611686018427387900 }, from: "shard_001", splitKeys: [ { _id: 2305843009213693950 } ], shardId: "ci_400000000000009.informations-_id_0", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:41.947+0200 [conn10] warning: Couldn't split chunk 000000000204CFF0 while sharding collection ci_400000000000009.informations. 
Reason: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:41.983+0200 [conn4] ChunkManager: time to load chunks for ci_400000000000003.informations: 0ms sequenceNumber: 15 version: 2|3||53610229eeb9507e5adc91b4 based on: 2|1||53610229eeb9507e5adc91b4 2014-04-30T16:01:43.924+0200 [conn10] warning: splitChunk failed - cmd: { splitChunk: "ci_400000000000009.informations", keyPattern: { _id: "hashed" }, min: { _id: 4611686018427387900 }, max: { _id: MaxKey }, from: "shard_001", splitKeys: [ { _id: 6917529027641081850 } ], shardId: "ci_400000000000009.informations-_id_4611686018427387900", configdb: "VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319" } result: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:43.924+0200 [conn10] warning: Couldn't split chunk 000000000204D140 while sharding collection ci_400000000000009.informations. Reason: { who: {}, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 2014-04-30T16:01:44.670+0200 [conn3] ChunkManager: time to load chunks for ci_400000000000001.informations: 431ms sequenceNumber: 16 version: 1|5||5361022aeeb9507e5adc91b6 based on: 1|3||5361022aeeb9507e5adc91b6 2014-04-30T16:01:46.069+0200 [conn11] ChunkManager: time to load chunks for ci_400000000000008.informations: 625ms sequenceNumber: 17 version: 1|5||53610230eeb9507e5adc91c6 based on: 1|3||53610230eeb9507e5adc91c6 2014-04-30T16:01:46.070+0200 [conn8] ChunkManager: time to load chunks for ci_400000000000004.informations: 0ms sequenceNumber: 18 version: 1|5||5361022ceeb9507e5adc91be based on: 1|3||5361022ceeb9507e5adc91be 2014-04-30T16:01:46.121+0200 [conn9] ChunkManager: time to load chunks for ci_400000000000007.informations: 0ms sequenceNumber: 19 version: 1|5||5361022deeb9507e5adc91c4 based on: 1|3||5361022deeb9507e5adc91c4 2014-04-30T16:01:46.160+0200 [conn7] ChunkManager: time to load chunks for ci_400000000000005.informations: 0ms sequenceNumber: 20 version: 
1|7||5361022beeb9507e5adc91bb based on: 1|5||5361022beeb9507e5adc91bb 2014-04-30T16:01:46.160+0200 [conn4] ChunkManager: time to load chunks for ci_400000000000003.informations: 0ms sequenceNumber: 21 version: 2|5||53610229eeb9507e5adc91b4 based on: 2|3||53610229eeb9507e5adc91b4 2014-04-30T16:01:49.373+0200 [conn6] ChunkManager: time to load chunks for ci_400000000000006.informations: 2ms sequenceNumber: 22 version: 1|5||5361022feeb9507e5adc91c5 based on: 1|3||5361022feeb9507e5adc91c5 2014-04-30T16:01:49.911+0200 [conn3] ChunkManager: time to load chunks for ci_400000000000001.informations: 0ms sequenceNumber: 23 version: 1|7||5361022aeeb9507e5adc91b6 based on: 1|5||5361022aeeb9507e5adc91b6 2014-04-30T16:01:51.223+0200 [conn11] ChunkManager: time to load chunks for ci_400000000000008.informations: 0ms sequenceNumber: 25 version: 1|7||53610230eeb9507e5adc91c6 based on: 1|5||53610230eeb9507e5adc91c6 2014-04-30T16:01:51.223+0200 [conn8] ChunkManager: time to load chunks for ci_400000000000004.informations: 0ms sequenceNumber: 24 version: 1|7||5361022ceeb9507e5adc91be based on: 1|5||5361022ceeb9507e5adc91be 2014-04-30T16:01:51.901+0200 [conn5] ChunkManager: time to load chunks for ci_400000000000002.informations: 0ms sequenceNumber: 26 version: 3|1||5361022ceeb9507e5adc91c0 based on: 2|1||5361022ceeb9507e5adc91c0 2014-04-30T16:01:51.935+0200 [conn4] ChunkManager: time to load chunks for ci_400000000000003.informations: 0ms sequenceNumber: 27 version: 2|7||53610229eeb9507e5adc91b4 based on: 2|5||53610229eeb9507e5adc91b4 2014-04-30T16:01:51.936+0200 [conn7] ChunkManager: time to load chunks for ci_400000000000005.informations: 0ms sequenceNumber: 28 version: 1|9||5361022beeb9507e5adc91bb based on: 1|7||5361022beeb9507e5adc91bb 2014-04-30T16:01:53.530+0200 [conn8] ChunkManager: time to load chunks for ci_400000000000004.informations: 355ms sequenceNumber: 29 version: 1|9||5361022ceeb9507e5adc91be based on: 1|7||5361022ceeb9507e5adc91be 2014-04-30T16:01:53.530+0200 
[conn11] ChunkManager: time to load chunks for ci_400000000000008.informations: 355ms sequenceNumber: 30 version: 1|9||53610230eeb9507e5adc91c6 based on: 1|7||53610230eeb9507e5adc91c6 2014-04-30T16:01:53.531+0200 [conn3] ChunkManager: time to load chunks for ci_400000000000001.informations: 0ms sequenceNumber: 31 version: 1|9||5361022aeeb9507e5adc91b6 based on: 1|7||5361022aeeb9507e5adc91b6 2014-04-30T16:01:53.714+0200 [conn5] ChunkManager: time to load chunks for ci_400000000000002.informations: 0ms sequenceNumber: 33 version: 3|3||5361022ceeb9507e5adc91c0 based on: 3|1||5361022ceeb9507e5adc91c0 2014-04-30T16:01:53.714+0200 [conn4] ChunkManager: time to load chunks for ci_400000000000003.informations: 0ms sequenceNumber: 32 version: 2|9||53610229eeb9507e5adc91b4 based on: 2|7||53610229eeb9507e5adc91b4 2014-04-30T16:01:53.781+0200 [conn7] ChunkManager: time to load chunks for ci_400000000000005.informations: 0ms sequenceNumber: 34 version: 1|11||5361022beeb9507e5adc91bb based on: 1|9||5361022beeb9507e5adc91bb 2014-04-30T16:01:57.998+0200 [conn5] ChunkManager: time to load chunks for ci_400000000000002.informations: 0ms sequenceNumber: 35 version: 3|5||5361022ceeb9507e5adc91c0 based on: 3|3||5361022ceeb9507e5adc91c0 2014-04-30T16:01:58.032+0200 [conn8] ChunkManager: time to load chunks for ci_400000000000004.informations: 0ms sequenceNumber: 36 version: 1|11||5361022ceeb9507e5adc91be based on: 1|9||5361022ceeb9507e5adc91be 2014-04-30T16:01:59.685+0200 [conn5] ChunkManager: time to load chunks for ci_400000000000002.informations: 0ms sequenceNumber: 37 version: 3|7||5361022ceeb9507e5adc91c0 based on: 3|5||5361022ceeb9507e5adc91c0 2014-04-30T16:02:00.400+0200 [conn5] ChunkManager: time to load chunks for ci_400000000000002.informations: 0ms sequenceNumber: 38 version: 3|9||5361022ceeb9507e5adc91c0 based on: 3|7||5361022ceeb9507e5adc91c0 2014-04-30T16:02:05.402+0200 [conn2] end connection 192.168.1.130:6018 (11 connections now open) 2014-04-30T16:02:05.403+0200 
[conn12] end connection 192.168.1.130:6041 (10 connections now open) 2014-04-30T16:02:05.403+0200 [conn10] end connection 192.168.1.130:6039 (9 connections now open) 2014-04-30T16:02:05.403+0200 [conn9] end connection 192.168.1.130:6038 (8 connections now open) 2014-04-30T16:02:05.404+0200 [conn6] end connection 192.168.1.130:6035 (7 connections now open) 2014-04-30T16:02:05.404+0200 [conn11] end connection 192.168.1.130:6040 (6 connections now open) 2014-04-30T16:02:05.404+0200 [conn3] end connection 192.168.1.130:6032 (5 connections now open) 2014-04-30T16:02:05.404+0200 [conn4] end connection 192.168.1.130:6033 (4 connections now open) 2014-04-30T16:02:05.404+0200 [conn7] end connection 192.168.1.130:6036 (3 connections now open) 2014-04-30T16:02:05.405+0200 [conn8] end connection 192.168.1.130:6037 (2 connections now open) 2014-04-30T16:02:05.405+0200 [conn5] end connection 192.168.1.130:6034 (1 connection now open) 2014-04-30T16:02:09.847+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:02:09 2014 by distributed lock pinger 'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms 2014-04-30T16:02:40.123+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:02:39 2014 by distributed lock pinger 'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms 2014-04-30T16:03:09.533+0200 [conn13] couldn't find database [test] in config db 2014-04-30T16:03:09.537+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 5361029deeb9507e5adc91d1 2014-04-30T16:03:09.816+0200 [conn13] put [test] on: shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:09.823+0200 [Balancer] ns: ci_400000000000003.informations going to move { _id: 
"ci_400000000000003.informations-_id_MinKey", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('53610229eeb9507e5adc91b4'), ns: "ci_400000000000003.informations", min: { _id: MinKey }, max: { _id: -6917529027641081850 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:03:10.248+0200 [Balancer] ns: ci_400000000000001.informations going to move { _id: "ci_400000000000001.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022aeeb9507e5adc91b6'), ns: "ci_400000000000001.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:03:10.262+0200 [Balancer] ns: ci_400000000000002.informations going to move { _id: "ci_400000000000002.informations-_id_MinKey", lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('5361022ceeb9507e5adc91c0'), ns: "ci_400000000000002.informations", min: { _id: MinKey }, max: { _id: -6917529027641081850 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:03:10.289+0200 [Balancer] ns: ci_400000000000009.informations going to move { _id: "ci_400000000000009.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022ceeb9507e5adc91bf'), ns: "ci_400000000000009.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:03:10.492+0200 [Balancer] ns: ci_400000000000005.informations going to move { _id: "ci_400000000000005.informations-_id_MinKey", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5361022beeb9507e5adc91bb'), ns: "ci_400000000000005.informations", min: { _id: MinKey }, max: { _id: -6917529027641081850 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:03:10.492+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:03:10 2014 by distributed lock pinger 
'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms 2014-04-30T16:03:10.506+0200 [Balancer] ns: ci_400000000000004.informations going to move { _id: "ci_400000000000004.informations-_id_MinKey", lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5361022ceeb9507e5adc91be'), ns: "ci_400000000000004.informations", min: { _id: MinKey }, max: { _id: -6917529027641081850 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:03:10.518+0200 [Balancer] ns: ci_400000000000007.informations going to move { _id: "ci_400000000000007.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c4'), ns: "ci_400000000000007.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:03:10.530+0200 [Balancer] ns: ci_400000000000008.informations going to move { _id: "ci_400000000000008.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('53610230eeb9507e5adc91c6'), ns: "ci_400000000000008.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:03:10.564+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:03:10.574+0200 [Balancer] ns: ci_400000000000006.informations going to move { _id: "ci_400000000000006.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022feeb9507e5adc91c5'), ns: "ci_400000000000006.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: 
shard_002 tag [] 2014-04-30T16:03:10.574+0200 [Balancer] moving chunk ns: ci_400000000000003.informations moving ( ns: ci_400000000000003.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 2|2||000000000000000000000000, min: { _id: MinKey }, max: { _id: -6917529027641081850 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:03:12.150+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000003.informations: 0ms sequenceNumber: 39 version: 3|1||53610229eeb9507e5adc91b4 based on: 2|9||53610229eeb9507e5adc91b4 2014-04-30T16:03:12.150+0200 [Balancer] moving chunk ns: ci_400000000000001.informations moving ( ns: ci_400000000000001.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:13.760+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000001.informations: 0ms sequenceNumber: 40 version: 2|1||5361022aeeb9507e5adc91b6 based on: 1|9||5361022aeeb9507e5adc91b6 2014-04-30T16:03:13.760+0200 [Balancer] moving chunk ns: ci_400000000000002.informations moving ( ns: ci_400000000000002.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 3|2||000000000000000000000000, min: { _id: MinKey }, max: { _id: -6917529027641081850 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:15.668+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000002.informations: 0ms sequenceNumber: 41 version: 4|1||5361022ceeb9507e5adc91c0 based on: 3|9||5361022ceeb9507e5adc91c0 2014-04-30T16:03:15.669+0200 [Balancer] moving chunk ns: ci_400000000000009.informations moving ( ns: ci_400000000000009.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> 
shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:18.058+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000009.informations: 0ms sequenceNumber: 42 version: 2|1||5361022ceeb9507e5adc91bf based on: 1|3||5361022ceeb9507e5adc91bf 2014-04-30T16:03:18.059+0200 [Balancer] moving chunk ns: ci_400000000000005.informations moving ( ns: ci_400000000000005.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|4||000000000000000000000000, min: { _id: MinKey }, max: { _id: -6917529027641081850 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:20.589+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000005.informations: 0ms sequenceNumber: 43 version: 2|1||5361022beeb9507e5adc91bb based on: 1|11||5361022beeb9507e5adc91bb 2014-04-30T16:03:20.589+0200 [Balancer] moving chunk ns: ci_400000000000004.informations moving ( ns: ci_400000000000004.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|4||000000000000000000000000, min: { _id: MinKey }, max: { _id: -6917529027641081850 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:22.534+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000004.informations: 0ms sequenceNumber: 44 version: 2|1||5361022ceeb9507e5adc91be based on: 1|11||5361022ceeb9507e5adc91be 2014-04-30T16:03:22.534+0200 [Balancer] moving chunk ns: ci_400000000000007.informations moving ( ns: ci_400000000000007.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:24.416+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000007.informations: 0ms sequenceNumber: 45 version: 2|1||5361022deeb9507e5adc91c4 based on: 1|5||5361022deeb9507e5adc91c4 2014-04-30T16:03:24.416+0200 [Balancer] moving chunk ns: ci_400000000000008.informations 
moving ( ns: ci_400000000000008.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:26.590+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000008.informations: 0ms sequenceNumber: 46 version: 2|1||53610230eeb9507e5adc91c6 based on: 1|9||53610230eeb9507e5adc91c6 2014-04-30T16:03:26.590+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:27.662+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:03:27.662+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:03:27.662+0200 [Balancer] moving chunk ns: ci_400000000000006.informations moving ( ns: ci_400000000000006.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: 
-4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:29.971+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000006.informations: 0ms sequenceNumber: 47 version: 2|1||5361022feeb9507e5adc91c5 based on: 1|5||5361022feeb9507e5adc91c5 2014-04-30T16:03:30.100+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:03:31.358+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 536102b3eeb9507e5adc91d2 2014-04-30T16:03:31.589+0200 [Balancer] ns: ci_400000000000003.informations going to move { _id: "ci_400000000000003.informations-_id_-6917529027641081850", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('53610229eeb9507e5adc91b4'), ns: "ci_400000000000003.informations", min: { _id: -6917529027641081850 }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_004 tag [] 2014-04-30T16:03:31.619+0200 [Balancer] ns: ci_400000000000001.informations going to move { _id: "ci_400000000000001.informations-_id_-4611686018427387900", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('5361022aeeb9507e5adc91b6'), ns: "ci_400000000000001.informations", min: { _id: -4611686018427387900 }, max: { _id: -2305843009213693950 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:03:31.632+0200 [Balancer] ns: ci_400000000000002.informations going to move { _id: "ci_400000000000002.informations-_id_-6917529027641081850", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('5361022ceeb9507e5adc91c0'), ns: "ci_400000000000002.informations", min: { _id: -6917529027641081850 }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:03:31.645+0200 [Balancer] ns: ci_400000000000009.informations going to move { _id: "ci_400000000000009.informations-_id_-4611686018427387900", lastmod: Timestamp 2000|1, lastmodEpoch: 
ObjectId('5361022ceeb9507e5adc91bf'), ns: "ci_400000000000009.informations", min: { _id: -4611686018427387900 }, max: { _id: 0 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:03:31.655+0200 [Balancer] ns: ci_400000000000005.informations going to move { _id: "ci_400000000000005.informations-_id_-6917529027641081850", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('5361022beeb9507e5adc91bb'), ns: "ci_400000000000005.informations", min: { _id: -6917529027641081850 }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:03:31.665+0200 [Balancer] ns: ci_400000000000004.informations going to move { _id: "ci_400000000000004.informations-_id_-6917529027641081850", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('5361022ceeb9507e5adc91be'), ns: "ci_400000000000004.informations", min: { _id: -6917529027641081850 }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:03:31.675+0200 [Balancer] ns: ci_400000000000007.informations going to move { _id: "ci_400000000000007.informations-_id_-4611686018427387900", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c4'), ns: "ci_400000000000007.informations", min: { _id: -4611686018427387900 }, max: { _id: 0 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:03:31.684+0200 [Balancer] ns: ci_400000000000008.informations going to move { _id: "ci_400000000000008.informations-_id_-4611686018427387900", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('53610230eeb9507e5adc91c6'), ns: "ci_400000000000008.informations", min: { _id: -4611686018427387900 }, max: { _id: -2305843009213693950 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:03:31.694+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: 
ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:03:31.703+0200 [Balancer] ns: ci_400000000000006.informations going to move { _id: "ci_400000000000006.informations-_id_-4611686018427387900", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('5361022feeb9507e5adc91c5'), ns: "ci_400000000000006.informations", min: { _id: -4611686018427387900 }, max: { _id: 0 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:03:31.703+0200 [Balancer] moving chunk ns: ci_400000000000003.informations moving ( ns: ci_400000000000003.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 3|1||000000000000000000000000, min: { _id: -6917529027641081850 }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:03:33.764+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000003.informations: 0ms sequenceNumber: 48 version: 4|1||53610229eeb9507e5adc91b4 based on: 3|1||53610229eeb9507e5adc91b4 2014-04-30T16:03:33.764+0200 [Balancer] moving chunk ns: ci_400000000000001.informations moving ( ns: ci_400000000000001.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 2|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: -2305843009213693950 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:03:35.690+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000001.informations: 0ms sequenceNumber: 49 version: 3|1||5361022aeeb9507e5adc91b6 based on: 2|1||5361022aeeb9507e5adc91b6 2014-04-30T16:03:35.690+0200 [Balancer] moving chunk ns: ci_400000000000002.informations moving ( ns: ci_400000000000002.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 4|1||000000000000000000000000, min: { _id: -6917529027641081850 }, max: { _id: -4611686018427387900 }) 
shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:37.788+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000002.informations: 0ms sequenceNumber: 50 version: 5|1||5361022ceeb9507e5adc91c0 based on: 4|1||5361022ceeb9507e5adc91c0 2014-04-30T16:03:37.788+0200 [Balancer] moving chunk ns: ci_400000000000009.informations moving ( ns: ci_400000000000009.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 2|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:03:39.786+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000009.informations: 0ms sequenceNumber: 51 version: 3|1||5361022ceeb9507e5adc91bf based on: 2|1||5361022ceeb9507e5adc91bf 2014-04-30T16:03:39.786+0200 [Balancer] moving chunk ns: ci_400000000000005.informations moving ( ns: ci_400000000000005.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 2|1||000000000000000000000000, min: { _id: -6917529027641081850 }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:03:40.944+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:03:40 2014 by distributed lock pinger 'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms 2014-04-30T16:03:41.684+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000005.informations: 0ms sequenceNumber: 52 version: 3|1||5361022beeb9507e5adc91bb based on: 2|1||5361022beeb9507e5adc91bb 2014-04-30T16:03:41.685+0200 [Balancer] moving chunk ns: ci_400000000000004.informations moving ( ns: ci_400000000000004.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 2|1||000000000000000000000000, min: { _id: -6917529027641081850 }, max: { _id: -4611686018427387900 }) 
shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:03:44.157+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000004.informations: 0ms sequenceNumber: 53 version: 3|1||5361022ceeb9507e5adc91be based on: 2|1||5361022ceeb9507e5adc91be 2014-04-30T16:03:44.157+0200 [Balancer] moving chunk ns: ci_400000000000007.informations moving ( ns: ci_400000000000007.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 2|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:03:46.523+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000007.informations: 0ms sequenceNumber: 54 version: 3|1||5361022deeb9507e5adc91c4 based on: 2|1||5361022deeb9507e5adc91c4 2014-04-30T16:03:46.523+0200 [Balancer] moving chunk ns: ci_400000000000008.informations moving ( ns: ci_400000000000008.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 2|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: -2305843009213693950 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:03:49.866+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000008.informations: 0ms sequenceNumber: 55 version: 3|1||53610230eeb9507e5adc91c6 based on: 2|1||53610230eeb9507e5adc91c6 2014-04-30T16:03:49.866+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:03:50.089+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ 
_id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:03:50.090+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:03:50.090+0200 [Balancer] moving chunk ns: ci_400000000000006.informations moving ( ns: ci_400000000000006.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 2|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:03:52.817+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000006.informations: 0ms sequenceNumber: 56 version: 3|1||5361022feeb9507e5adc91c5 based on: 2|1||5361022feeb9507e5adc91c5 2014-04-30T16:03:53.043+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 
2014-04-30T16:03:55.398+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 536102caeeb9507e5adc91d3 2014-04-30T16:03:59.108+0200 [Balancer] ns: ci_400000000000003.informations going to move { _id: "ci_400000000000003.informations-_id_0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('53610229eeb9507e5adc91b4'), ns: "ci_400000000000003.informations", min: { _id: 0 }, max: { _id: 2305843009213693950 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:03:59.301+0200 [Balancer] ns: ci_400000000000001.informations going to move { _id: "ci_400000000000001.informations-_id_-2305843009213693950", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5361022aeeb9507e5adc91b6'), ns: "ci_400000000000001.informations", min: { _id: -2305843009213693950 }, max: { _id: 0 }, shard: "shard_001" } from: shard_001 to: shard_004 tag [] 2014-04-30T16:03:59.522+0200 [Balancer] ns: ci_400000000000009.informations going to move { _id: "ci_400000000000009.informations-_id_0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5361022ceeb9507e5adc91bf'), ns: "ci_400000000000009.informations", min: { _id: 0 }, max: { _id: 4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_004 tag [] 2014-04-30T16:03:59.534+0200 [Balancer] ns: ci_400000000000005.informations going to move { _id: "ci_400000000000005.informations-_id_-4611686018427387900", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5361022beeb9507e5adc91bb'), ns: "ci_400000000000005.informations", min: { _id: -4611686018427387900 }, max: { _id: -2305843009213693950 }, shard: "shard_001" } from: shard_001 to: shard_004 tag [] 2014-04-30T16:03:59.546+0200 [Balancer] ns: ci_400000000000004.informations going to move { _id: "ci_400000000000004.informations-_id_-4611686018427387900", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5361022ceeb9507e5adc91be'), ns: "ci_400000000000004.informations", min: { _id: -4611686018427387900 }, max: { _id: 
-2305843009213693950 }, shard: "shard_001" } from: shard_001 to: shard_004 tag [] 2014-04-30T16:03:59.555+0200 [Balancer] ns: ci_400000000000007.informations going to move { _id: "ci_400000000000007.informations-_id_0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c4'), ns: "ci_400000000000007.informations", min: { _id: 0 }, max: { _id: 4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_004 tag [] 2014-04-30T16:03:59.564+0200 [Balancer] ns: ci_400000000000008.informations going to move { _id: "ci_400000000000008.informations-_id_-2305843009213693950", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('53610230eeb9507e5adc91c6'), ns: "ci_400000000000008.informations", min: { _id: -2305843009213693950 }, max: { _id: 0 }, shard: "shard_001" } from: shard_001 to: shard_004 tag [] 2014-04-30T16:03:59.573+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:03:59.584+0200 [Balancer] ns: ci_400000000000006.informations going to move { _id: "ci_400000000000006.informations-_id_0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5361022feeb9507e5adc91c5'), ns: "ci_400000000000006.informations", min: { _id: 0 }, max: { _id: 4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_004 tag [] 2014-04-30T16:03:59.584+0200 [Balancer] moving chunk ns: ci_400000000000003.informations moving ( ns: ci_400000000000003.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 4|1||000000000000000000000000, min: { _id: 0 }, max: { _id: 2305843009213693950 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:04:01.892+0200 [Balancer] ChunkManager: time to load chunks for 
ci_400000000000003.informations: 0ms sequenceNumber: 57 version: 5|1||53610229eeb9507e5adc91b4 based on: 4|1||53610229eeb9507e5adc91b4 2014-04-30T16:04:01.892+0200 [Balancer] moving chunk ns: ci_400000000000001.informations moving ( ns: ci_400000000000001.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 3|1||000000000000000000000000, min: { _id: -2305843009213693950 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:04:04.529+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000001.informations: 0ms sequenceNumber: 58 version: 4|1||5361022aeeb9507e5adc91b6 based on: 3|1||5361022aeeb9507e5adc91b6 2014-04-30T16:04:04.529+0200 [Balancer] moving chunk ns: ci_400000000000009.informations moving ( ns: ci_400000000000009.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 3|1||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:04:06.266+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000009.informations: 0ms sequenceNumber: 59 version: 4|1||5361022ceeb9507e5adc91bf based on: 3|1||5361022ceeb9507e5adc91bf 2014-04-30T16:04:06.266+0200 [Balancer] moving chunk ns: ci_400000000000005.informations moving ( ns: ci_400000000000005.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 3|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: -2305843009213693950 }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:04:08.207+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000005.informations: 0ms sequenceNumber: 60 version: 4|1||5361022beeb9507e5adc91bb based on: 3|1||5361022beeb9507e5adc91bb 2014-04-30T16:04:08.207+0200 [Balancer] moving chunk ns: ci_400000000000004.informations moving ( ns: ci_400000000000004.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 
3|1||000000000000000000000000, min: { _id: -4611686018427387900 }, max: { _id: -2305843009213693950 }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:04:11.452+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000004.informations: 0ms sequenceNumber: 61 version: 4|1||5361022ceeb9507e5adc91be based on: 3|1||5361022ceeb9507e5adc91be 2014-04-30T16:04:11.452+0200 [Balancer] moving chunk ns: ci_400000000000007.informations moving ( ns: ci_400000000000007.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 3|1||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:04:11.544+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:04:10 2014 by distributed lock pinger 'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms 2014-04-30T16:04:13.641+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000007.informations: 0ms sequenceNumber: 62 version: 4|1||5361022deeb9507e5adc91c4 based on: 3|1||5361022deeb9507e5adc91c4 2014-04-30T16:04:13.641+0200 [Balancer] moving chunk ns: ci_400000000000008.informations moving ( ns: ci_400000000000008.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 3|1||000000000000000000000000, min: { _id: -2305843009213693950 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:04:15.947+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000008.informations: 0ms sequenceNumber: 63 version: 4|1||53610230eeb9507e5adc91c6 based on: 3|1||53610230eeb9507e5adc91c6 2014-04-30T16:04:15.947+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 
1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:04:16.546+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:04:16.546+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:04:16.546+0200 [Balancer] moving chunk ns: ci_400000000000006.informations moving ( ns: ci_400000000000006.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 3|1||000000000000000000000000, min: { _id: 0 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:04:18.397+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000006.informations: 0ms sequenceNumber: 64 version: 4|1||5361022feeb9507e5adc91c5 based on: 3|1||5361022feeb9507e5adc91c5 2014-04-30T16:04:18.460+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 
2014-04-30T16:04:19.616+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 536102e3eeb9507e5adc91d4 2014-04-30T16:04:19.922+0200 [Balancer] ns: ci_400000000000003.informations going to move { _id: "ci_400000000000003.informations-_id_2305843009213693950", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('53610229eeb9507e5adc91b4'), ns: "ci_400000000000003.informations", min: { _id: 2305843009213693950 }, max: { _id: 4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_004 tag [] 2014-04-30T16:04:19.929+0200 [Balancer] ns: ci_400000000000001.informations going to move { _id: "ci_400000000000001.informations-_id_0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('5361022aeeb9507e5adc91b6'), ns: "ci_400000000000001.informations", min: { _id: 0 }, max: { _id: 2305843009213693950 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:04:19.970+0200 [Balancer] ns: ci_400000000000005.informations going to move { _id: "ci_400000000000005.informations-_id_-2305843009213693950", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('5361022beeb9507e5adc91bb'), ns: "ci_400000000000005.informations", min: { _id: -2305843009213693950 }, max: { _id: 0 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:04:19.975+0200 [Balancer] ns: ci_400000000000004.informations going to move { _id: "ci_400000000000004.informations-_id_-2305843009213693950", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('5361022ceeb9507e5adc91be'), ns: "ci_400000000000004.informations", min: { _id: -2305843009213693950 }, max: { _id: 0 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:04:19.987+0200 [Balancer] ns: ci_400000000000008.informations going to move { _id: "ci_400000000000008.informations-_id_0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('53610230eeb9507e5adc91c6'), ns: "ci_400000000000008.informations", min: { _id: 0 }, max: { _id: 2305843009213693950 }, shard: 
"shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:04:19.992+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:04:19.996+0200 [Balancer] moving chunk ns: ci_400000000000003.informations moving ( ns: ci_400000000000003.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 5|1||000000000000000000000000, min: { _id: 2305843009213693950 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:04:22.069+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000003.informations: 0ms sequenceNumber: 65 version: 6|1||53610229eeb9507e5adc91b4 based on: 5|1||53610229eeb9507e5adc91b4 2014-04-30T16:04:22.070+0200 [Balancer] moving chunk ns: ci_400000000000001.informations moving ( ns: ci_400000000000001.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 4|1||000000000000000000000000, min: { _id: 0 }, max: { _id: 2305843009213693950 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:04:25.520+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000001.informations: 0ms sequenceNumber: 66 version: 5|1||5361022aeeb9507e5adc91b6 based on: 4|1||5361022aeeb9507e5adc91b6 2014-04-30T16:04:25.520+0200 [Balancer] moving chunk ns: ci_400000000000005.informations moving ( ns: ci_400000000000005.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 4|1||000000000000000000000000, min: { _id: -2305843009213693950 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:04:27.721+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000005.informations: 0ms 
sequenceNumber: 67 version: 5|1||5361022beeb9507e5adc91bb based on: 4|1||5361022beeb9507e5adc91bb 2014-04-30T16:04:27.722+0200 [Balancer] moving chunk ns: ci_400000000000004.informations moving ( ns: ci_400000000000004.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 4|1||000000000000000000000000, min: { _id: -2305843009213693950 }, max: { _id: 0 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:04:29.753+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000004.informations: 0ms sequenceNumber: 68 version: 5|1||5361022ceeb9507e5adc91be based on: 4|1||5361022ceeb9507e5adc91be 2014-04-30T16:04:29.753+0200 [Balancer] moving chunk ns: ci_400000000000008.informations moving ( ns: ci_400000000000008.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 4|1||000000000000000000000000, min: { _id: 0 }, max: { _id: 2305843009213693950 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:04:31.551+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000008.informations: 0ms sequenceNumber: 69 version: 5|1||53610230eeb9507e5adc91c6 based on: 4|1||53610230eeb9507e5adc91c6 2014-04-30T16:04:31.551+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:04:32.081+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:04:32.081+0200 [Balancer] 
balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:04:32.216+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:04:33.508+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 536102f1eeb9507e5adc91d5 2014-04-30T16:04:34.092+0200 [Balancer] ns: ci_400000000000001.informations going to move { _id: "ci_400000000000001.informations-_id_2305843009213693950", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('5361022aeeb9507e5adc91b6'), ns: "ci_400000000000001.informations", min: { _id: 2305843009213693950 }, max: { _id: 4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:04:34.111+0200 [Balancer] ns: ci_400000000000005.informations going to move { _id: "ci_400000000000005.informations-_id_0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('5361022beeb9507e5adc91bb'), ns: "ci_400000000000005.informations", min: { _id: 0 }, max: { _id: 2305843009213693950 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:04:34.117+0200 [Balancer] ns: ci_400000000000004.informations going to move { _id: "ci_400000000000004.informations-_id_0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('5361022ceeb9507e5adc91be'), ns: "ci_400000000000004.informations", min: { _id: 0 }, max: { _id: 2305843009213693950 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:04:34.128+0200 [Balancer] ns: ci_400000000000008.informations going to move { _id: 
"ci_400000000000008.informations-_id_2305843009213693950", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('53610230eeb9507e5adc91c6'), ns: "ci_400000000000008.informations", min: { _id: 2305843009213693950 }, max: { _id: 4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_003 tag [] 2014-04-30T16:04:34.134+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:04:34.139+0200 [Balancer] moving chunk ns: ci_400000000000001.informations moving ( ns: ci_400000000000001.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 5|1||000000000000000000000000, min: { _id: 2305843009213693950 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:04:35.970+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000001.informations: 0ms sequenceNumber: 70 version: 6|1||5361022aeeb9507e5adc91b6 based on: 5|1||5361022aeeb9507e5adc91b6 2014-04-30T16:04:35.971+0200 [Balancer] moving chunk ns: ci_400000000000005.informations moving ( ns: ci_400000000000005.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 5|1||000000000000000000000000, min: { _id: 0 }, max: { _id: 2305843009213693950 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:04:38.184+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000005.informations: 0ms sequenceNumber: 71 version: 6|1||5361022beeb9507e5adc91bb based on: 5|1||5361022beeb9507e5adc91bb 2014-04-30T16:04:38.184+0200 [Balancer] moving chunk ns: ci_400000000000004.informations moving ( ns: ci_400000000000004.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 
5|1||000000000000000000000000, min: { _id: 0 }, max: { _id: 2305843009213693950 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:04:40.217+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000004.informations: 0ms sequenceNumber: 72 version: 6|1||5361022ceeb9507e5adc91be based on: 5|1||5361022ceeb9507e5adc91be 2014-04-30T16:04:40.217+0200 [Balancer] moving chunk ns: ci_400000000000008.informations moving ( ns: ci_400000000000008.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 5|1||000000000000000000000000, min: { _id: 2305843009213693950 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_003:VM01-SHARD-TEST:20317 2014-04-30T16:04:41.799+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:04:41 2014 by distributed lock pinger 'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms 2014-04-30T16:04:42.113+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000008.informations: 0ms sequenceNumber: 73 version: 6|1||53610230eeb9507e5adc91c6 based on: 5|1||53610230eeb9507e5adc91c6 2014-04-30T16:04:42.113+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:04:43.286+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: 
MinKey }" } 2014-04-30T16:04:43.287+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:04:43.490+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:04:44.752+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 536102fceeb9507e5adc91d6 2014-04-30T16:04:45.090+0200 [Balancer] ns: ci_400000000000005.informations going to move { _id: "ci_400000000000005.informations-_id_2305843009213693950", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('5361022beeb9507e5adc91bb'), ns: "ci_400000000000005.informations", min: { _id: 2305843009213693950 }, max: { _id: 4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_004 tag [] 2014-04-30T16:04:45.095+0200 [Balancer] ns: ci_400000000000004.informations going to move { _id: "ci_400000000000004.informations-_id_2305843009213693950", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('5361022ceeb9507e5adc91be'), ns: "ci_400000000000004.informations", min: { _id: 2305843009213693950 }, max: { _id: 4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_004 tag [] 2014-04-30T16:04:45.109+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:04:45.114+0200 
[Balancer] moving chunk ns: ci_400000000000005.informations moving ( ns: ci_400000000000005.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 6|1||000000000000000000000000, min: { _id: 2305843009213693950 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:04:47.025+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000005.informations: 0ms sequenceNumber: 74 version: 7|1||5361022beeb9507e5adc91bb based on: 6|1||5361022beeb9507e5adc91bb 2014-04-30T16:04:47.025+0200 [Balancer] moving chunk ns: ci_400000000000004.informations moving ( ns: ci_400000000000004.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 6|1||000000000000000000000000, min: { _id: 2305843009213693950 }, max: { _id: 4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_004:VM01-SHARD-TEST:20417 2014-04-30T16:04:48.757+0200 [Balancer] ChunkManager: time to load chunks for ci_400000000000004.informations: 0ms sequenceNumber: 75 version: 7|1||5361022ceeb9507e5adc91be based on: 6|1||5361022ceeb9507e5adc91be 2014-04-30T16:04:48.757+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:04:49.014+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:04:49.014+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: 
"VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:04:49.149+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:04:50.408+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 53610302eeb9507e5adc91d7 2014-04-30T16:04:50.774+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:04:50.779+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:04:50.784+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:04:50.784+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: 
"VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:04:50.917+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:04:57.108+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 53610308eeb9507e5adc91d8 2014-04-30T16:04:57.379+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:04:57.398+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:04:57.404+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:04:57.404+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: 
"split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:04:57.482+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:05:03.670+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 5361030feeb9507e5adc91d9 2014-04-30T16:05:03.984+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:05:03.989+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:05:03.994+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:05:03.994+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, 
errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:05:04.112+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:05:10.592+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 53610316eeb9507e5adc91da 2014-04-30T16:05:11.009+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:05:11.013+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:05:11.019+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:05:11.019+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: 
MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:05:11.102+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:05:11.952+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:05:11 2014 by distributed lock pinger 'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms 2014-04-30T16:05:17.258+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 5361031deeb9507e5adc91db 2014-04-30T16:05:17.633+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:05:17.638+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:05:17.643+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:05:17.644+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", 
process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:05:17.785+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:05:24.041+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 53610323eeb9507e5adc91dc 2014-04-30T16:05:24.364+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:05:24.368+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:05:24.374+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:05:24.374+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398866497271), 
why: "split-{ _id: MinKey }", ts: ObjectId('536102415c2542f57fcb88ae') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } ... 2014-04-30T16:22:36.236+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:22:42.379+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 53610732eeb9507e5adc9274 2014-04-30T16:22:42.945+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:22:42.952+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:22:42.959+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398867405844), why: "migrate-{ _id: MinKey }", ts: ObjectId('536105cd5c2542f57fcb89ef') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:22:42.959+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398867405844), why: "migrate-{ _id: MinKey }", ts: ObjectId('536105cd5c2542f57fcb89ef') }, 
ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:22:43.109+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:22:49.399+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' acquired, ts : 53610739eeb9507e5adc9275 2014-04-30T16:22:49.755+0200 [Balancer] ns: ci_400000000000010.informations going to move { _id: "ci_400000000000010.informations-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5361022deeb9507e5adc91c3'), ns: "ci_400000000000010.informations", min: { _id: MinKey }, max: { _id: -4611686018427387900 }, shard: "shard_001" } from: shard_001 to: shard_002 tag [] 2014-04-30T16:22:49.760+0200 [Balancer] moving chunk ns: ci_400000000000010.informations moving ( ns: ci_400000000000010.informations, shard: shard_001:VM01-SHARD-TEST:20117, lastmod: 1|0||000000000000000000000000, min: { _id: MinKey }, max: { _id: -4611686018427387900 }) shard_001:VM01-SHARD-TEST:20117 -> shard_002:VM01-SHARD-TEST:20217 2014-04-30T16:22:49.766+0200 [Balancer] moveChunk result: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398867405844), why: "migrate-{ _id: MinKey }", ts: ObjectId('536105cd5c2542f57fcb89ef') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }" } 2014-04-30T16:22:49.766+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1, who: "VM01-SHARD-TEST:20117:1398866487:41:conn14:41", process: "VM01-SHARD-TEST:20117:1398866487:41", when: new Date(1398867405844), why: "migrate-{ _id: MinKey }", ts: ObjectId('536105cd5c2542f57fcb89ef') }, ok: 0.0, errmsg: "the collection metadata could not be locked with lock 
migrate-{ _id: MinKey }" } from: shard_001 to: shard_002 chunk: min: { _id: MinKey } max: { _id: -4611686018427387900 } 2014-04-30T16:22:49.898+0200 [Balancer] distributed lock 'balancer/VM01-SHARD-TEST:27017:1398866406:41' unlocked. 2014-04-30T16:22:50.272+0200 [LockPinger] cluster VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319 pinged successfully at Wed Apr 30 16:22:50 2014 by distributed lock pinger 'VM01-SHARD-TEST:27119,VM01-SHARD-TEST:27219,VM01-SHARD-TEST:27319/VM01-SHARD-TEST:27017:1398866406:41', sleeping for 30000ms 2014-04-30T16:22:53.942+0200 CTRL_CLOSE_EVENT signal 2014-04-30T16:22:53.942+0200 [consoleTerminate] got CTRL_CLOSE_EVENT, will terminate after current cmd ends 2014-04-30T16:22:53.943+0200 [consoleTerminate] dbexit: rc:12
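The log above repeats the same failed migration of the `{ _id: MinKey }` chunk of `ci_400000000000010.informations` every few seconds for nearly twenty minutes, each time with the identical errmsg "the collection metadata could not be locked with lock migrate-{ _id: MinKey }". A small helper script can make such a repetition pattern visible at a glance by tallying `balancer move failed` entries per namespace. This is a minimal sketch, not part of the log and not a MongoDB tool; it only assumes mongos 2.6-style log lines in the format shown above, and the `failed_moves` helper name is ours.

```python
import re
from collections import Counter

# Matches the namespace inside a mongos "[Balancer] balancer move failed" entry,
# e.g. ... balancer move failed: { who: { _id: "ci_400000000000010.informations", ...
FAILED = re.compile(r'\[Balancer\] balancer move failed: .*?_id: "([^"]+)"')

def failed_moves(lines):
    """Tally failed balancer chunk moves per namespace."""
    counts = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

# Two failure entries and one unrelated entry, abbreviated from the log above.
sample = [
    '2014-04-30T16:04:43.287+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1 } }',
    '2014-04-30T16:04:49.014+0200 [Balancer] balancer move failed: { who: { _id: "ci_400000000000010.informations", state: 1 } }',
    "2014-04-30T16:04:43.490+0200 [Balancer] distributed lock 'balancer/...' unlocked.",
]
print(failed_moves(sample))  # Counter({'ci_400000000000010.informations': 2})
```

Run against the full log file (`failed_moves(open(path))`), a count that keeps growing for one namespace while its `ts` and `when` fields never change, as here, suggests the source shard is holding a stale collection metadata lock rather than hitting a series of independent failures.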