Mon Dec 17 15:31:37.542 [conn66] end connection 127.0.0.1:52180 (0 connections now open)
MongoDB shell version: 2.3.2-pre-
null
---- Setting up new ShardingTest ----
Resetting db path '/data/db/mrShardedOutput0'
Mon Dec 17 15:31:37.992 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30000 --dbpath /data/db/mrShardedOutput0 --setParameter enableTestCommands=1
m30000| Mon Dec 17 15:31:38.031
m30000| Mon Dec 17 15:31:38.032 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30000| Mon Dec 17 15:31:38.032
m30000| Mon Dec 17 15:31:38.043 [initandlisten] MongoDB starting : pid=20333 port=30000 dbpath=/data/db/mrShardedOutput0 32-bit host=domU-12-31-39-01-70-B4
m30000| Mon Dec 17 15:31:38.043 [initandlisten]
m30000| Mon Dec 17 15:31:38.043 [initandlisten] ** NOTE: This is a development version (2.3.2-pre-) of MongoDB.
m30000| Mon Dec 17 15:31:38.043 [initandlisten] ** Not recommended for production.
m30000| Mon Dec 17 15:31:38.043 [initandlisten]
m30000| Mon Dec 17 15:31:38.043 [initandlisten] ** NOTE: This is a 32 bit MongoDB binary.
m30000| Mon Dec 17 15:31:38.073 [initandlisten] ** 32 bit builds are limited to less than 2GB of data (or less with --journal).
m30000| Mon Dec 17 15:31:38.073 [initandlisten] ** Note that journaling defaults to off for 32 bit and is currently off.
m30000| Mon Dec 17 15:31:38.073 [initandlisten] ** See http://www.mongodb.org/display/DOCS/32+bit
m30000| Mon Dec 17 15:31:38.073 [initandlisten]
m30000| Mon Dec 17 15:31:38.073 [initandlisten] db version v2.3.2-pre-, pdfile version 4.5
m30000| Mon Dec 17 15:31:38.073 [initandlisten] git version: 41cce287ffad7ea0c04facdd0986631bade7d027
m30000| Mon Dec 17 15:31:38.073 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30000| Mon Dec 17 15:31:38.073 [initandlisten] options: { dbpath: "/data/db/mrShardedOutput0", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Mon Dec 17 15:31:38.073 [initandlisten] Unable to check for journal files due to: boost::filesystem::directory_iterator::construct: No such file or directory: "/data/db/mrShardedOutput0/journal"
m30000| Mon Dec 17 15:31:38.085 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/local.ns, filling with zeroes...
m30000| Mon Dec 17 15:31:38.085 [FileAllocator] creating directory /data/db/mrShardedOutput0/_tmp
m30000| Mon Dec 17 15:31:38.348 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/local.ns, size: 16MB, took 0.259 secs
m30000| Mon Dec 17 15:31:38.348 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/local.0, filling with zeroes...
m30000| Mon Dec 17 15:31:38.625 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/local.0, size: 16MB, took 0.276 secs
m30000| Mon Dec 17 15:31:38.626 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 541ms
m30000| Mon Dec 17 15:31:38.626 [initandlisten] waiting for connections on port 30000
m30000| Mon Dec 17 15:31:38.627 [websvr] admin web console waiting for connections on port 31000
Resetting db path '/data/db/mrShardedOutput1'
m30000| Mon Dec 17 15:31:38.829 [initandlisten] connection accepted from 127.0.0.1:39822 #1 (1 connection now open)
Mon Dec 17 15:31:38.838 shell: started program /mnt/slaves/Linux_32bit/mongo/mongod --port 30001 --dbpath /data/db/mrShardedOutput1 --setParameter enableTestCommands=1
m30001| Mon Dec 17 15:31:38.878
m30001| Mon Dec 17 15:31:38.878 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
m30001| Mon Dec 17 15:31:38.878
m30001| Mon Dec 17 15:31:38.889 [initandlisten] MongoDB starting : pid=20351 port=30001 dbpath=/data/db/mrShardedOutput1 32-bit host=domU-12-31-39-01-70-B4
m30001| Mon Dec 17 15:31:38.889 [initandlisten]
m30001| Mon Dec 17 15:31:38.889 [initandlisten] ** NOTE: This is a development version (2.3.2-pre-) of MongoDB.
m30001| Mon Dec 17 15:31:38.889 [initandlisten] ** Not recommended for production.
m30001| Mon Dec 17 15:31:38.889 [initandlisten]
m30001| Mon Dec 17 15:31:38.889 [initandlisten] ** NOTE: This is a 32 bit MongoDB binary.
m30001| Mon Dec 17 15:31:38.889 [initandlisten] ** 32 bit builds are limited to less than 2GB of data (or less with --journal).
m30001| Mon Dec 17 15:31:38.889 [initandlisten] ** Note that journaling defaults to off for 32 bit and is currently off.
m30001| Mon Dec 17 15:31:38.889 [initandlisten] ** See http://www.mongodb.org/display/DOCS/32+bit
m30001| Mon Dec 17 15:31:38.889 [initandlisten]
m30001| Mon Dec 17 15:31:38.889 [initandlisten] db version v2.3.2-pre-, pdfile version 4.5
m30001| Mon Dec 17 15:31:38.889 [initandlisten] git version: 41cce287ffad7ea0c04facdd0986631bade7d027
m30001| Mon Dec 17 15:31:38.889 [initandlisten] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30001| Mon Dec 17 15:31:38.889 [initandlisten] options: { dbpath: "/data/db/mrShardedOutput1", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Mon Dec 17 15:31:38.889 [initandlisten] Unable to check for journal files due to: boost::filesystem::directory_iterator::construct: No such file or directory: "/data/db/mrShardedOutput1/journal"
m30001| Mon Dec 17 15:31:38.925 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/local.ns, filling with zeroes...
m30001| Mon Dec 17 15:31:38.925 [FileAllocator] creating directory /data/db/mrShardedOutput1/_tmp
m30001| Mon Dec 17 15:31:39.155 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/local.ns, size: 16MB, took 0.226 secs
m30001| Mon Dec 17 15:31:39.156 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/local.0, filling with zeroes...
m30001| Mon Dec 17 15:31:39.440 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/local.0, size: 16MB, took 0.284 secs
m30001| Mon Dec 17 15:31:39.442 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 517ms
m30001| Mon Dec 17 15:31:39.442 [initandlisten] waiting for connections on port 30001
m30001| Mon Dec 17 15:31:39.442 [websvr] admin web console waiting for connections on port 31001
"localhost:30000"
ShardingTest mrShardedOutput :
{
  "config" : "localhost:30000",
  "shards" : [
    connection to localhost:30000,
    connection to localhost:30001
  ]
}
Mon Dec 17 15:31:39.457 shell: started program /mnt/slaves/Linux_32bit/mongo/mongos --port 30999 --configdb localhost:30000 -v --chunkSize 1 --setParameter enableTestCommands=1
m30001| Mon Dec 17 15:31:39.453 [initandlisten] connection accepted from 127.0.0.1:42498 #1 (1 connection now open)
m30000| Mon Dec 17 15:31:39.454 [initandlisten] connection accepted from 127.0.0.1:39827 #2 (2 connections now open)
m30999| Mon Dec 17 15:31:39.468 running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Mon Dec 17 15:31:39.469 [mongosMain] MongoS version 2.3.2-pre- starting: pid=20370 port=30999 32-bit host=domU-12-31-39-01-70-B4 (--help for usage)
m30999| Mon Dec 17 15:31:39.469 [mongosMain] git version: 41cce287ffad7ea0c04facdd0986631bade7d027
m30999| Mon Dec 17 15:31:39.469 [mongosMain] build info: Linux domU-12-31-39-01-70-B4 2.6.21.7-2.fc8xen #1 SMP Fri Feb 15 12:39:36 EST 2008 i686 BOOST_LIB_VERSION=1_49
m30999| Mon Dec 17 15:31:39.469 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Mon Dec 17 15:31:39.469 [mongosMain] config string : localhost:30000
m30999| Mon Dec 17 15:31:39.469 [mongosMain] creating new connection to:localhost:30000
m30999| Mon Dec 17 15:31:39.469 BackgroundJob starting: ConnectBG
m30999| Mon Dec 17 15:31:39.469 [mongosMain] connected connection!
m30999| Mon Dec 17 15:31:39.470 BackgroundJob starting: CheckConfigServers
m30999| Mon Dec 17 15:31:39.470 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Mon Dec 17 15:31:39.470 BackgroundJob starting: ConnectBG
m30999| Mon Dec 17 15:31:39.470 [CheckConfigServers] connected connection!
m30000| Mon Dec 17 15:31:39.469 [initandlisten] connection accepted from 127.0.0.1:39829 #3 (3 connections now open)
m30000| Mon Dec 17 15:31:39.470 [initandlisten] connection accepted from 127.0.0.1:39830 #4 (4 connections now open)
m30000| Mon Dec 17 15:31:39.472 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/config.ns, filling with zeroes...
m30000| Mon Dec 17 15:31:39.748 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/config.ns, size: 16MB, took 0.275 secs
m30000| Mon Dec 17 15:31:39.748 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/config.0, filling with zeroes...
m30000| Mon Dec 17 15:31:40.049 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/config.0, size: 16MB, took 0.301 secs
m30000| Mon Dec 17 15:31:40.051 [conn3] build index config.version { _id: 1 }
m30000| Mon Dec 17 15:31:40.052 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:31:40.052 [conn3] insert config.version keyUpdates:0 locks(micros) w:579659 579ms
m30000| Mon Dec 17 15:31:40.052 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/config.1, filling with zeroes...
m30000| Mon Dec 17 15:31:40.053 [conn3] build index config.settings { _id: 1 }
m30000| Mon Dec 17 15:31:40.053 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:31:40.054 [conn3] build index config.chunks { _id: 1 }
m30000| Mon Dec 17 15:31:40.054 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:31:40.054 [conn3] info: creating collection config.chunks on add index
m30000| Mon Dec 17 15:31:40.054 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Mon Dec 17 15:31:40.055 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Mon Dec 17 15:31:40.055 [mongosMain] fd limit hard:1024 soft:1024 max conn: 819
m30999| Mon Dec 17 15:31:40.055 [mongosMain] waiting for connections on port 30999
m30999| Mon Dec 17 15:31:40.055 [websvr] fd limit hard:1024 soft:1024 max conn: 819
m30999| Mon Dec 17 15:31:40.055 [websvr] admin web console waiting for connections on port 31999
m30999| Mon Dec 17 15:31:40.055 BackgroundJob starting: Balancer
m30999| Mon Dec 17 15:31:40.055 [Balancer] about to contact config servers and shards
m30999| Mon Dec 17 15:31:40.055 BackgroundJob starting: cursorTimeout
m30999| Mon Dec 17 15:31:40.055 BackgroundJob starting: PeriodicTask::Runner
m30000| Mon Dec 17 15:31:40.055 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Mon Dec 17 15:31:40.055 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:31:40.055 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Mon Dec 17 15:31:40.056 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:31:40.056 [conn3] build index config.shards { _id: 1 }
m30000| Mon Dec 17 15:31:40.057 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:31:40.057 [conn3] info: creating collection config.shards on add index
m30000| Mon Dec 17 15:31:40.057 [conn3] build index config.shards { host: 1 }
m30000| Mon Dec 17 15:31:40.058 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Mon Dec 17 15:31:40.058 [Balancer] config servers and shards contacted successfully
m30999| Mon Dec 17 15:31:40.058 [Balancer] balancer id: domU-12-31-39-01-70-B4:30999 started at Dec 17 15:31:40
m30999| Mon Dec 17 15:31:40.058 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30000| Mon Dec 17 15:31:40.058 [conn3] build index config.mongos { _id: 1 }
m30000| Mon Dec 17 15:31:40.059 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Mon Dec 17 15:31:40.059 [Balancer] Refreshing MaxChunkSize: 1
m30999| Mon Dec 17 15:31:40.059 [Balancer] creating new connection to:localhost:30000
m30999| Mon Dec 17 15:31:40.060 BackgroundJob starting: ConnectBG
m30999| Mon Dec 17 15:31:40.060 [Balancer] connected connection!
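At this point the whole cluster is up: two 32-bit mongod shards, one mongos with --chunkSize 1, and the config database freshly created on localhost:30000. A minimal sketch of the shell harness call that typically produces a topology like this; the exact constructor options and variable names here are assumptions, not taken from the log:

  // Sketch only: ShardingTest matching the topology above
  // (two shards, one mongos, 1MB chunks, as logged).
  var st = new ShardingTest({ shards: 2, mongos: 1, other: { chunksize: 1 } });
  var mongos = st.s;                    // the mongos on port 30999 in this run
  var config = mongos.getDB("config");  // config data lives on localhost:30000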
m30999| Mon Dec 17 15:31:40.061 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 )
m30999| Mon Dec 17 15:31:40.061 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Mon Dec 17 15:31:40.061 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383:
m30999| { "state" : 1,
m30999|   "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886",
m30999|   "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383",
m30999|   "when" : { "$date" : "Mon Dec 17 15:31:40 2012" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "50cf812c5ec0810ee359b568" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0 }
m30999| Mon Dec 17 15:31:40.063 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf812c5ec0810ee359b568
m30999| Mon Dec 17 15:31:40.063 [Balancer] *** start balancing round
m30999| Mon Dec 17 15:31:40.063 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30999:1355776300:1804289383 (sleeping for 30000ms)
m30999| Mon Dec 17 15:31:40.063 [Balancer] no collections to balance
m30999| Mon Dec 17 15:31:40.063 [Balancer] no need to move any chunk
m30999| Mon Dec 17 15:31:40.063 [Balancer] *** end of balancing round
m30999| Mon Dec 17 15:31:40.065 [LockPinger] cluster localhost:30000 pinged successfully at Mon Dec 17 15:31:40 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1355776300:1804289383', sleeping for 30000ms
m30000| Mon Dec 17 15:31:40.060 [initandlisten] connection accepted from 127.0.0.1:39834 #5 (5 connections now open)
m30000| Mon Dec 17 15:31:40.061 [conn5] build index config.locks { _id: 1 }
m30000| Mon Dec 17 15:31:40.062 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:31:40.064 [conn4] build index config.lockpings { _id: 1 }
m30000| Mon Dec 17 15:31:40.064 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:31:40.065 [conn4] build index config.lockpings { ping: new Date(1) }
m30000| Mon Dec 17 15:31:40.065 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Mon Dec 17 15:31:40.066 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked.
ShardingTest undefined going to add shard : localhost:30000
m30999| Mon Dec 17 15:31:40.071 [mongosMain] connection accepted from 127.0.0.1:52346 #1 (1 connection now open)
m30999| Mon Dec 17 15:31:40.071 [conn1] couldn't find database [admin] in config db
m30000| Mon Dec 17 15:31:40.072 [conn3] build index config.databases { _id: 1 }
m30000| Mon Dec 17 15:31:40.072 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Mon Dec 17 15:31:40.072 [conn1] put [admin] on: config:localhost:30000
m30999| Mon Dec 17 15:31:40.074 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Mon Dec 17 15:31:40.075 [conn1] creating new connection to:localhost:30001
m30999| Mon Dec 17 15:31:40.075 BackgroundJob starting: ConnectBG
m30999| Mon Dec 17 15:31:40.075 [conn1] connected connection!
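The balancer round logged above (pre-acquire doc with "state" : 1, existing doc with "state" : 0, then acquired and unlocked) is driven by ordinary documents in config.locks, with heartbeats from the LockPinger thread landing in config.lockpings. Both collections can be inspected from the shell; a sketch, assuming a connection to the mongos:

  // Inspect the balancer's distributed lock and the pinger heartbeats.
  db.getSiblingDB("config").locks.find({ _id: "balancer" }).pretty();
  db.getSiblingDB("config").lockpings.find().pretty();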
m30001| Mon Dec 17 15:31:40.075 [initandlisten] connection accepted from 127.0.0.1:42508 #2 (2 connections now open)
m30999| Mon Dec 17 15:31:40.076 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Mon Dec 17 15:31:40.077 [conn1] couldn't find database [test] in config db
m30999| Mon Dec 17 15:31:40.078 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 32 writeLock: 0
m30999| Mon Dec 17 15:31:40.079 [conn1] put [test] on: shard0001:localhost:30001
m30999| Mon Dec 17 15:31:40.079 [conn1] enabling sharding on: test
m30001| Mon Dec 17 15:31:40.082 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.ns, filling with zeroes...
m30001| Mon Dec 17 15:31:40.530 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.ns, size: 16MB, took 0.447 secs
m30001| Mon Dec 17 15:31:40.537 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.0, filling with zeroes...
m30000| Mon Dec 17 15:31:40.979 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/config.1, size: 32MB, took 0.926 secs
m30001| Mon Dec 17 15:31:41.246 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.0, size: 16MB, took 0.709 secs
m30001| Mon Dec 17 15:31:41.248 [conn2] build index test.foo { _id: 1 }
m30001| Mon Dec 17 15:31:41.249 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Mon Dec 17 15:31:41.249 [conn2] info: creating collection test.foo on add index
m30001| Mon Dec 17 15:31:41.249 [conn2] build index test.foo { a: 1.0 }
m30001| Mon Dec 17 15:31:41.249 [conn2] build index done. scanned 0 total records. 0 secs
m30001| Mon Dec 17 15:31:41.249 [conn2] insert test.system.indexes keyUpdates:0 locks(micros) w:1167645 1167ms
m30001| Mon Dec 17 15:31:41.249 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.1, filling with zeroes...
m30999| Mon Dec 17 15:31:41.249 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { a: 1.0 } }
m30999| Mon Dec 17 15:31:41.250 [conn1] enable sharding on: test.foo with shard key: { a: 1.0 }
m30999| Mon Dec 17 15:31:41.250 [conn1] going to create 1 chunk(s) for: test.foo using new epoch 50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:31:41.251 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||50cf812d5ec0810ee359b569 based on: (empty)
m30999| Mon Dec 17 15:31:41.252 [conn1] creating new connection to:localhost:30000
m30999| Mon Dec 17 15:31:41.253 BackgroundJob starting: ConnectBG
m30999| Mon Dec 17 15:31:41.253 [conn1] connected connection!
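The addShard, "enabling sharding", and shardcollection traffic above maps onto the standard admin commands, issued against the mongos. A minimal sketch of what a test like this runs; the ports, namespace, and { a: 1 } key are taken from the log, the command spellings match the lowercase forms the log shows:

  var admin = db.getSiblingDB("admin");
  admin.runCommand({ addShard: "localhost:30000" });  // -> { shardAdded: "shard0000", ok: 1 }
  admin.runCommand({ addShard: "localhost:30001" });  // -> { shardAdded: "shard0001", ok: 1 }
  admin.runCommand({ enablesharding: "test" });
  // The shard key needs an index; the log shows { a: 1.0 } being built first.
  admin.runCommand({ shardcollection: "test.foo", key: { a: 1 } });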
m30999| Mon Dec 17 15:31:41.253 [conn1] creating WriteBackListener for: localhost:30000 serverID: 50cf812c5ec0810ee359b567
m30999| Mon Dec 17 15:31:41.253 [conn1] initializing shard connection to localhost:30000
m30999| Mon Dec 17 15:31:41.253 [conn1] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Mon Dec 17 15:31:41.253 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0000", shardHost: "localhost:30000" } 0x91767f8 2
m30999| Mon Dec 17 15:31:41.253 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Mon Dec 17 15:31:41.253 [conn1] creating new connection to:localhost:30001
m30999| Mon Dec 17 15:31:41.253 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Mon Dec 17 15:31:41.253 BackgroundJob starting: ConnectBG
m30999| Mon Dec 17 15:31:41.254 [conn1] connected connection!
m30999| Mon Dec 17 15:31:41.254 [conn1] creating WriteBackListener for: localhost:30001 serverID: 50cf812c5ec0810ee359b567
m30999| Mon Dec 17 15:31:41.254 [conn1] initializing shard connection to localhost:30001
m30999| Mon Dec 17 15:31:41.254 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 2
m30999| Mon Dec 17 15:31:41.254 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 }
m30999| Mon Dec 17 15:31:41.254 BackgroundJob starting: WriteBackListener-localhost:30001
m30001| Mon Dec 17 15:31:41.254 [initandlisten] connection accepted from 127.0.0.1:42511 #3 (3 connections now open)
ShardingTest test.foo-a_MinKey 1000|0 { "a" : { "$MinKey" : true } } -> { "a" : { "$MaxKey" : true } } shard0001 test.foo
---- Iteration 0: saving new batch of 30000 documents ----
========> Saved total of 0 documents
========> Saved total of 1000 documents
m30000| Mon Dec 17 15:31:41.252 [conn3] build index config.collections { _id: 1 }
m30000| Mon Dec 17 15:31:41.252 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:31:41.253 [initandlisten] connection accepted from 127.0.0.1:39838 #6 (6 connections now open)
m30000| Mon Dec 17 15:31:41.255 [initandlisten] connection accepted from 127.0.0.1:39840 #7 (7 connections now open)
========> Saved total of 2000 documents
m30000| Mon Dec 17 15:31:41.379 [initandlisten] connection accepted from 127.0.0.1:39842 #8 (8 connections now open)
m30000| Mon Dec 17 15:31:41.383 [conn7] build index config.changelog { _id: 1 }
m30000| Mon Dec 17 15:31:41.383 [conn7] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:31:41.384 [initandlisten] connection accepted from 127.0.0.1:39843 #9 (9 connections now open)
========> Saved total of 3000 documents
m30999| Mon Dec 17 15:31:41.254 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 2
m30999| Mon Dec 17 15:31:41.255 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Mon Dec 17 15:31:41.273 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey } dataWritten: 4089 splitThreshold: 921
m30999| Mon Dec 17 15:31:41.273 [conn1] creating new connection to:localhost:30001
m30999| Mon Dec 17 15:31:41.313 BackgroundJob starting: ConnectBG
m30999| Mon Dec 17 15:31:41.313 [conn1] connected connection!
m30999| Mon Dec 17 15:31:41.323 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Mon Dec 17 15:31:41.378 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey } dataWritten: 1076 splitThreshold: 921
m30999| Mon Dec 17 15:31:41.378 [conn1] chunk not full enough to trigger auto-split { a: 641.9174838811159 }
m30999| Mon Dec 17 15:31:41.378 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey } dataWritten: 1076 splitThreshold: 921
m30999| Mon Dec 17 15:31:41.384 [conn1] creating new connection to:localhost:30000
m30999| Mon Dec 17 15:31:41.384 BackgroundJob starting: ConnectBG
m30999| Mon Dec 17 15:31:41.384 [conn1] connected connection!
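The "========> Saved total of N documents" lines above come from the test's own insert loop, which writes a batch of 30000 documents with a random shard key and prints progress every 1000. A sketch of such a loop; the batch size, progress interval, and random key match the log, while the filler payload and variable names are assumptions:

  // Sketch of the insert loop behind the progress counters above.
  var coll = db.getSiblingDB("test").foo;
  var filler = new Array(1024).join("x");  // ~1KB payload per doc, assumed
  for (var i = 0; i < 30000; i++) {
      if (i % 1000 === 0) {
          print("========> Saved total of " + i + " documents");
      }
      coll.save({ a: Math.random() * 1000, y: filler });
  }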
m30999| Mon Dec 17 15:31:41.385 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||50cf812d5ec0810ee359b569 based on: 1|0||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:31:41.385 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|0||000000000000000000000000 min: { a: MinKey } max: { a: MaxKey } on: { a: 211.6570973303169 } (splitThreshold 921)
m30999| Mon Dec 17 15:31:41.385 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 3
m30999| Mon Dec 17 15:31:41.385 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
m30999| Mon Dec 17 15:31:41.393 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: MaxKey } dataWritten: 94473 splitThreshold: 471859
m30999| Mon Dec 17 15:31:41.393 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Mon Dec 17 15:31:41.402 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: MaxKey } dataWritten: 94688 splitThreshold: 471859
m30999| Mon Dec 17 15:31:41.402 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Mon Dec 17 15:31:41.410 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: MaxKey } dataWritten: 94688 splitThreshold: 471859
m30999| Mon Dec 17 15:31:41.410 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Mon Dec 17 15:31:41.413 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 211.6570973303169 } dataWritten: 94473 splitThreshold: 471859
m30999| Mon Dec 17 15:31:41.413 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Mon Dec 17 15:31:41.418 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: MaxKey } dataWritten: 94688 splitThreshold: 471859
m30999| Mon Dec 17 15:31:41.418 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Mon Dec 17 15:31:41.426 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: MaxKey } dataWritten: 94688 splitThreshold: 471859
m30999| Mon Dec 17 15:31:41.427 [conn1] chunk not full enough to trigger auto-split { a: 611.3905636593699 }
m30999| Mon Dec 17 15:31:41.434 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: MaxKey } dataWritten: 94688 splitThreshold: 471859
m30999| Mon Dec 17 15:31:41.438 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||50cf812d5ec0810ee359b569 based on: 1|2||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:31:41.438 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|2||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: MaxKey } on: { a: 999.9956642277539 } (splitThreshold 471859) (migrate suggested)
m30999| Mon Dec 17 15:31:41.440 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 64 writeLock: 0
m30999| Mon Dec 17 15:31:41.440 [conn1] recently split chunk: { min: { a: 999.9956642277539 }, max: { a: MaxKey } } already in the best shard: shard0001:localhost:30001
m30999| Mon Dec 17 15:31:41.440 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 4
m30999| Mon Dec 17 15:31:41.440 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
m30999| Mon Dec 17 15:31:41.457 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: 999.9956642277539 } dataWritten: 210681 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:41.457 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Mon Dec 17 15:31:41.475 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: 999.9956642277539 } dataWritten: 209820 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:41.476 [conn1] chunk not full enough to trigger auto-split { a: 598.1645786669105 }
m30999| Mon Dec 17 15:31:41.493 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: 999.9956642277539 } dataWritten: 209820 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:41.497 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||50cf812d5ec0810ee359b569 based on: 1|4||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:31:41.498 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|3||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: 999.9956642277539 } on: { a: 538.5234889108688 } (splitThreshold 1048576)
m30999| Mon Dec 17 15:31:41.498 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 5
m30999| Mon Dec 17 15:31:41.498 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
m30999| Mon Dec 17 15:31:41.569 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { a: 538.5234889108688 } max: { a: 999.9956642277539 } dataWritten: 210681 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:41.570 [conn1] chunk not full enough to trigger auto-split { a: 791.9598673470318 }
m30999| Mon Dec 17 15:31:41.581 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: 538.5234889108688 } dataWritten: 210681 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:41.582 [conn1] chunk not full enough to trigger auto-split { a: 438.3108075708151 }
m30001| Mon Dec 17 15:31:41.254 [conn3] no current chunk manager found for this shard, will initialize
m30001| Mon Dec 17 15:31:41.313 [initandlisten] connection accepted from 127.0.0.1:42513 #4 (4 connections now open)
m30001| Mon Dec 17 15:31:41.323 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Mon Dec 17 15:31:41.323 [conn4] chunk is larger than 1024 bytes because of key { a: 641.9174838811159 }
m30001| Mon Dec 17 15:31:41.378 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Mon Dec 17 15:31:41.378 [conn4] chunk is larger than 1024 bytes because of key { a: 211.6570973303169 }
m30001| Mon Dec 17 15:31:41.378 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Mon Dec 17 15:31:41.378 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Mon Dec 17 15:31:41.378 [conn4] chunk is larger than 1024 bytes because of key { a: 211.6570973303169 }
m30001| Mon Dec 17 15:31:41.379 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: MaxKey }, from: "shard0001", splitKeys: [ { a: 211.6570973303169 } ], shardId: "test.foo-a_MinKey", configdb: "localhost:30000" }
m30001| Mon Dec 17 15:31:41.381 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812dc94e4981dc6c1aee
m30001| Mon Dec 17 15:31:41.382 [conn4] splitChunk accepted at version 1|0||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:31:41.383 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:41-0", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776301383), what: "split", ns: "test.foo", details: { before: { min: { a: MinKey }, max: { a: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: MinKey }, max: { a: 211.6570973303169 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 211.6570973303169 }, max: { a: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } }
m30001| Mon Dec 17 15:31:41.384 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:31:41.426 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : MaxKey }
m30001| Mon Dec 17 15:31:41.434 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : MaxKey }
m30001| Mon Dec 17 15:31:41.435 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 211.6570973303169 } -->> { : MaxKey }
m30001| Mon Dec 17 15:31:41.435 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 211.6570973303169 }, max: { a: MaxKey }, from: "shard0001", splitKeys: [ { a: 999.9956642277539 } ], shardId: "test.foo-a_211.6570973303169", configdb: "localhost:30000" }
m30001| Mon Dec 17 15:31:41.436 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812dc94e4981dc6c1aef
m30001| Mon Dec 17 15:31:41.437 [conn4] splitChunk accepted at version 1|2||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:31:41.437 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:41-1", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776301437), what: "split", ns: "test.foo", details: { before: { min: { a: 211.6570973303169 }, max: { a: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 211.6570973303169 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 999.9956642277539 }, max: { a: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } }
m30001| Mon Dec 17 15:31:41.437 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:31:41.475 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 999.9956642277539 }
m30001| Mon Dec 17 15:31:41.493 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 999.9956642277539 }
m30001| Mon Dec 17 15:31:41.494 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 211.6570973303169 } -->> { : 999.9956642277539 }
m30001| Mon Dec 17 15:31:41.494 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 211.6570973303169 }, max: { a: 999.9956642277539 }, from: "shard0001", splitKeys: [ { a: 538.5234889108688 } ], shardId: "test.foo-a_211.6570973303169", configdb: "localhost:30000" }
m30001| Mon Dec 17 15:31:41.495 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812dc94e4981dc6c1af0
m30001| Mon Dec 17 15:31:41.495 [conn4] splitChunk accepted at version 1|4||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:31:41.496 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:41-2", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776301496), what: "split", ns: "test.foo", details: { before: { min: { a: 211.6570973303169 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 211.6570973303169 }, max: { a: 538.5234889108688 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 538.5234889108688 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } }
m30001| Mon Dec 17 15:31:41.496 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
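Every autosplit above follows the same handshake: the mongos tracks dataWritten per chunk, asks the shard for split points once its estimate crosses splitThreshold, and then sends the shard a splitChunk command, which runs under the collection's distributed lock and logs a "split" event to config.changelog. The same split can be requested by hand through the split admin command; a sketch reusing a split point from the log:

  // Manual equivalent of the logged splitChunk requests, routed via mongos.
  var admin = db.getSiblingDB("admin");
  // Split at an exact shard-key value:
  admin.runCommand({ split: "test.foo", middle: { a: 538.5234889108688 } });
  // Or let the server pick the median of the chunk containing a document:
  admin.runCommand({ split: "test.foo", find: { a: 600 } });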
m30001| Mon Dec 17 15:31:41.497 [LockPinger] creating distributed lock ping thread for localhost:30000 and process domU-12-31-39-01-70-B4:30001:1355776301:242898411 (sleeping for 30000ms)
m30001| Mon Dec 17 15:31:41.569 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 999.9956642277539 }
m30001| Mon Dec 17 15:31:41.581 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 538.5234889108688 }
m30999| Mon Dec 17 15:31:41.614 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 211.6570973303169 } dataWritten: 189161 splitThreshold: 943718
m30999| Mon Dec 17 15:31:41.615 [conn1] chunk not full enough to trigger auto-split { a: 211.4892567042261 }
m30999| Mon Dec 17 15:31:41.615 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { a: 538.5234889108688 } max: { a: 999.9956642277539 } dataWritten: 209820 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:41.619 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 1|8||50cf812d5ec0810ee359b569 based on: 1|6||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:31:41.619 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|6||000000000000000000000000 min: { a: 538.5234889108688 } max: { a: 999.9956642277539 } on: { a: 738.9611077960581 } (splitThreshold 1048576)
m30999| Mon Dec 17 15:31:41.619 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 6
m30999| Mon Dec 17 15:31:41.619 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
m30001| Mon Dec 17 15:31:41.614 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 211.6570973303169 }
m30001| Mon Dec 17 15:31:41.615 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 999.9956642277539 }
m30001| Mon Dec 17 15:31:41.616 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 538.5234889108688 } -->> { : 999.9956642277539 }
m30001| Mon Dec 17 15:31:41.616 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 538.5234889108688 }, max: { a: 999.9956642277539 }, from: "shard0001", splitKeys: [ { a: 738.9611077960581 } ], shardId: "test.foo-a_538.5234889108688", configdb: "localhost:30000" }
m30001| Mon Dec 17 15:31:41.617 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812dc94e4981dc6c1af1
m30001| Mon Dec 17 15:31:41.618 [conn4] splitChunk accepted at version 1|6||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:31:41.618 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:41-3", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776301618), what: "split", ns: "test.foo", details: { before: { min: { a: 538.5234889108688 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 538.5234889108688 }, max: { a: 738.9611077960581 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 738.9611077960581 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } }
m30001| Mon Dec 17 15:31:41.618 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
========> Saved total of 4000 documents
========> Saved total of 5000 documents
m30999| Mon Dec 17 15:31:41.685 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { a: 738.9611077960581 } max: { a: 999.9956642277539 } dataWritten: 210681 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:41.686 [conn1] chunk not full enough to trigger auto-split { a: 889.0441681724042 }
m30999| Mon Dec 17 15:31:41.688 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: 538.5234889108688 } dataWritten: 210681 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:41.691 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 1|10||50cf812d5ec0810ee359b569 based on: 1|8||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:31:41.692 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|5||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: 538.5234889108688 } on: { a: 364.6896595600992 } (splitThreshold 1048576)
m30999| Mon Dec 17 15:31:41.692 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|10, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 7
m30999| Mon Dec 17 15:31:41.692 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
m30001| Mon Dec 17 15:31:41.685 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 999.9956642277539 }
m30001| Mon Dec 17 15:31:41.688 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 538.5234889108688 }
m30001| Mon Dec 17 15:31:41.688 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 211.6570973303169 } -->> { : 538.5234889108688 }
m30001| Mon Dec 17 15:31:41.688 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 211.6570973303169 }, max: { a: 538.5234889108688 }, from: "shard0001", splitKeys: [ { a: 364.6896595600992 } ], shardId: "test.foo-a_211.6570973303169", configdb: "localhost:30000" }
m30001| Mon Dec 17 15:31:41.689 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812dc94e4981dc6c1af2
m30001| Mon Dec 17 15:31:41.690 [conn4] splitChunk accepted at version 1|8||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:31:41.690 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:41-4", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776301690), what: "split", ns: "test.foo", details: { before: { min: { a: 211.6570973303169 }, max: { a: 538.5234889108688 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 211.6570973303169 }, max: { a: 364.6896595600992 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 364.6896595600992 }, max: { a: 538.5234889108688 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } }
m30001| Mon Dec 17 15:31:41.691 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30999| Mon Dec 17 15:31:41.761 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { a: 738.9611077960581 } max: { a: 999.9956642277539 } dataWritten: 210681 splitThreshold: 1048576
m30001| Mon Dec 17 15:31:41.761 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 999.9956642277539 }
m30001| Mon Dec 17 15:31:41.762 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 738.9611077960581 } -->> { : 999.9956642277539 }
m30001| Mon Dec 17 15:31:41.762 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 738.9611077960581 }, max: { a: 999.9956642277539 }, from: "shard0001", splitKeys: [ { a: 859.3603172339499 } ], shardId: "test.foo-a_738.9611077960581", configdb: "localhost:30000" }
m30001| Mon Dec 17 15:31:41.763 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812dc94e4981dc6c1af3
m30001| Mon Dec 17 15:31:41.764 [conn4] splitChunk accepted at version 1|10||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:31:41.764 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:41-5", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776301764), what: "split", ns: "test.foo", details: { before: { min: { a: 738.9611077960581 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 738.9611077960581 }, max: { a: 859.3603172339499 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 859.3603172339499 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } }
m30001| Mon Dec 17 15:31:41.765 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30999| Mon Dec 17 15:31:41.765 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 1|12||50cf812d5ec0810ee359b569 based on: 1|10||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:31:41.766 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { a: 738.9611077960581 } max: { a: 999.9956642277539 } on: { a: 859.3603172339499 } (splitThreshold 1048576)
m30999| Mon Dec 17 15:31:41.766 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|12, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 8
m30999| Mon Dec 17 15:31:41.766 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
========> Saved total of 6000 documents
========> Saved total of 7000 documents
m30001| Mon Dec 17 15:31:41.848 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 211.6570973303169 }
m30001| Mon Dec 17 15:31:41.848 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : 211.6570973303169 }
m30001| Mon Dec 17 15:31:41.849 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: MinKey }, max: { a: 211.6570973303169 }, from: "shard0001", splitKeys: [ { a: 0.3993422724306583 } ], shardId: "test.foo-a_MinKey", configdb: "localhost:30000" }
m30001| Mon Dec 17 15:31:41.850 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812dc94e4981dc6c1af4
m30001| Mon Dec 17 15:31:41.850 [conn4] splitChunk accepted at version 1|12||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:31:41.851 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:41-6", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776301851), what: "split", ns: "test.foo", details: { before: { min: { a: MinKey }, max: { a: 211.6570973303169 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: MinKey }, max: { a: 0.3993422724306583 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 0.3993422724306583 }, max: { a: 211.6570973303169 }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } }
m30001| Mon Dec 17 15:31:41.851 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
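After each successful split the mongos reloads its routing table, which is what the "ChunkManager: time to load chunks" lines record as sequenceNumber and version tick upward. The authoritative chunk layout is just the config.chunks collection on the config server; a sketch of listing it in shard-key order:

  // Print the chunk ranges for test.foo as recorded on the config server.
  db.getSiblingDB("config").chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(function (c) {
      print(tojson(c.min) + " -->> " + tojson(c.max) + " on " + c.shard + " lastmod " + tojson(c.lastmod));
  });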
m30001| Mon Dec 17 15:31:41.934 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 211.6570973303169 }
m30001| Mon Dec 17 15:31:41.935 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.3993422724306583 } -->> { : 211.6570973303169 }
m30001| Mon Dec 17 15:31:41.935 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 0.3993422724306583 }, max: { a: 211.6570973303169 }, from: "shard0001", splitKeys: [ { a: 89.16067937389016 } ], shardId: "test.foo-a_0.3993422724306583", configdb: "localhost:30000" }
m30001| Mon Dec 17 15:31:41.936 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812dc94e4981dc6c1af5
m30001| Mon Dec 17 15:31:41.937 [conn4] splitChunk accepted at version 1|14||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:31:41.937 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:41-7", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776301937), what: "split", ns: "test.foo", details: { before: { min: { a: 0.3993422724306583 }, max: { a: 211.6570973303169 }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 0.3993422724306583 }, max: { a: 89.16067937389016 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 89.16067937389016 }, max: { a: 211.6570973303169 }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } }
m30001| Mon Dec 17 15:31:41.938 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30999| Mon Dec 17 15:31:41.848 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 211.6570973303169 } dataWritten: 189161 splitThreshold: 943718
m30999| Mon Dec 17 15:31:41.852 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 1|14||50cf812d5ec0810ee359b569 based on: 1|12||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:31:41.852 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|1||000000000000000000000000 min: { a: MinKey } max: { a: 211.6570973303169 } on: { a: 0.3993422724306583 } (splitThreshold 943718)
m30999| Mon Dec 17 15:31:41.852 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|14, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 9
m30999| Mon Dec 17 15:31:41.852 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
m30999| Mon Dec 17 15:31:41.934 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { a: 0.3993422724306583 } max: { a: 211.6570973303169 } dataWritten: 210681 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:41.938 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 1|16||50cf812d5ec0810ee359b569 based on: 1|14||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:31:41.938 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { a: 0.3993422724306583 } max: { a: 211.6570973303169 } on: { a: 89.16067937389016 } (splitThreshold 1048576)
m30999| Mon Dec 17 15:31:41.939 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|16, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 10
m30999| Mon Dec 17 15:31:41.939 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
m30999| Mon Dec 17 15:31:42.033 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { a: 538.5234889108688 } max: { a: 738.9611077960581 } dataWritten: 210681 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:42.037 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 1|18||50cf812d5ec0810ee359b569 based on: 1|16||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:31:42.037 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|7||000000000000000000000000 min: { a: 538.5234889108688 } max: { a: 738.9611077960581 } on: { a: 609.4723071437329 } (splitThreshold 1048576)
m30999| Mon Dec 17 15:31:42.037 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|18, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 11
m30999| Mon Dec 17 15:31:42.037 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
m30999| Mon Dec 17 15:31:42.038 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { a: 609.4723071437329 } max: { a: 738.9611077960581 } dataWritten: 209787 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:42.039 [conn1] chunk not full enough to trigger auto-split { a: 684.1955254785717 }
m30999| Mon Dec 17 15:31:42.040 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|9||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: 364.6896595600992 } dataWritten: 209787 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:42.043 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 1|20||50cf812d5ec0810ee359b569 based on: 1|18||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:31:42.044 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|9||000000000000000000000000 min: { a: 211.6570973303169 } max: { a: 364.6896595600992 } on: { a: 285.7821767684072 } (splitThreshold 1048576)
m30999| Mon Dec 17 15:31:42.044 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|20, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 12
m30999| Mon Dec 17 15:31:42.044 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
m30999| Mon Dec 17 15:31:42.045 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { a: 859.3603172339499 } max: { a: 999.9956642277539 } dataWritten: 209787 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:42.046 [conn1] chunk not full enough to trigger auto-split { a: 934.5013136044145 }
m30999| Mon Dec 17 15:31:42.046 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { a: 89.16067937389016 } max: { a: 211.6570973303169 } dataWritten: 209787 splitThreshold: 1048576
m30999| Mon Dec 17 15:31:42.047 [conn1] chunk not full enough to trigger auto-split { a: 164.4444467965513 }
m30999| Mon Dec 17 15:31:42.047 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { a: 364.6896595600992 } max: { a: 538.5234889108688 } dataWritten: 209787 splitThreshold: 1048576
m30001| Mon Dec 17 15:31:42.033 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 738.9611077960581 }
m30001| Mon Dec 17 15:31:42.034 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 538.5234889108688 } -->> { : 738.9611077960581 }
m30001| Mon Dec 17 15:31:42.034 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 538.5234889108688 }, max: { a: 738.9611077960581 }, from: "shard0001", splitKeys: [ { a: 609.4723071437329 } ], shardId: "test.foo-a_538.5234889108688", configdb: "localhost:30000" }
m30001| Mon Dec 17 15:31:42.035 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812ec94e4981dc6c1af6
m30001| Mon Dec 17 15:31:42.035 [conn4] splitChunk accepted at version 1|16||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:31:42.036 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:42-8", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776302036), what: "split", ns: "test.foo", details: { before: { min: { a: 538.5234889108688 }, max: { a: 738.9611077960581 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 538.5234889108688 }, max: { a: 609.4723071437329 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 609.4723071437329 }, max: { a: 738.9611077960581 }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } }
m30001| Mon Dec 17 15:31:42.036 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:31:42.038 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:42.040 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:42.040 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 211.6570973303169 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:42.041 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 211.6570973303169 }, max: { a: 364.6896595600992 }, from: "shard0001", splitKeys: [ { a: 285.7821767684072 } ], shardId: "test.foo-a_211.6570973303169", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:42.041 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812ec94e4981dc6c1af7 m30001| Mon Dec 17 15:31:42.042 [conn4] splitChunk accepted at version 1|18||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:42.042 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:42-9", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776302042), what: "split", ns: "test.foo", details: { before: { min: { a: 211.6570973303169 }, max: { a: 364.6896595600992 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 211.6570973303169 }, max: { a: 285.7821767684072 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 285.7821767684072 }, max: { a: 364.6896595600992 }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:42.043 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
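
Each accepted split is also durably recorded, which is what the "about to log metadata event" entries are doing: a document with what: "split" and before/left/right ranges goes into the config changelog. Those events can be replayed after the fact from a shell connected to the mongos; the fields used below are the ones visible in the events above:

    // Replay the "split" events for test.foo from the config changelog.
    var cfg = db.getSiblingDB("config");
    cfg.changelog.find({ what: "split", ns: "test.foo" }).sort({ time: 1 })
        .forEach(function (ev) {
            print(ev.time + "  " + tojson(ev.details.before.min) + " .. " +
                  tojson(ev.details.before.max) + "  split at " +
                  tojson(ev.details.left.max));
        });
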
m30001| Mon Dec 17 15:31:42.045 [conn4] request split points lookup for chunk test.foo { : 859.3603172339499 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:42.046 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:42.047 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:42.048 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 364.6896595600992 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:42.048 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 364.6896595600992 }, max: { a: 538.5234889108688 }, from: "shard0001", splitKeys: [ { a: 439.6139404270798 } ], shardId: "test.foo-a_364.6896595600992", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:42.049 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812ec94e4981dc6c1af8 m30001| Mon Dec 17 15:31:42.049 [conn4] splitChunk accepted at version 1|20||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:42.050 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:42-10", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776302050), what: "split", ns: "test.foo", details: { before: { min: { a: 364.6896595600992 }, max: { a: 538.5234889108688 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 364.6896595600992 }, max: { a: 439.6139404270798 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 439.6139404270798 }, max: { a: 538.5234889108688 }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:42.050 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
m30001| Mon Dec 17 15:31:42.052 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:42.054 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:42.055 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 89.16067937389016 } m30001| Mon Dec 17 15:31:42.056 [conn4] request split points lookup for chunk test.foo { : 859.3603172339499 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:42.058 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:42.059 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:42.060 [conn4] request split points lookup for chunk test.foo { : 285.7821767684072 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:42.061 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:42.062 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 609.4723071437329 } m30001| Mon Dec 17 15:31:42.063 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 439.6139404270798 } m30999| Mon Dec 17 15:31:42.051 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 1|22||50cf812d5ec0810ee359b569 based on: 1|20||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:42.051 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 538.5234889108688 } on: { a: 439.6139404270798 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:42.051 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|22, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 13 m30999| Mon Dec 17 15:31:42.051 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:42.052 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|11||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 859.3603172339499 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.052 [conn1] chunk not full enough to trigger auto-split { a: 814.1750178765506 } m30999| Mon Dec 17 15:31:42.054 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|19||000000000000000000000000min: { a: 211.6570973303169 }max: { a: 285.7821767684072 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.054 [conn1] chunk not full enough to trigger auto-split { a: 285.3783881291747 } m30999| Mon Dec 17 15:31:42.055 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|15||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 89.16067937389016 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.055 [conn1] chunk not full enough to trigger auto-split { a: 71.27808476798236 } m30999| Mon Dec 17 15:31:42.056 [conn1] about to initiate autosplit: ns:test.fooshard: 
shard0001:localhost:30001lastmod: 1|12||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 999.9956642277539 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.057 [conn1] chunk not full enough to trigger auto-split { a: 933.4767102263868 } m30999| Mon Dec 17 15:31:42.057 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|16||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 211.6570973303169 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.058 [conn1] chunk not full enough to trigger auto-split { a: 163.9400366693735 } ========> Saved total of 8000 documents m30999| Mon Dec 17 15:31:42.059 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 538.5234889108688 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.059 [conn1] chunk not full enough to trigger auto-split { a: 513.4560749866068 } m30999| Mon Dec 17 15:31:42.060 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|20||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 364.6896595600992 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.061 [conn1] chunk not full enough to trigger auto-split { a: 355.8390513062477 } m30999| Mon Dec 17 15:31:42.061 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 738.9611077960581 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.062 [conn1] chunk not full enough to trigger auto-split { a: 683.5661989171058 } m30999| Mon Dec 17 15:31:42.062 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|17||000000000000000000000000min: { a: 538.5234889108688 }max: { a: 609.4723071437329 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.063 [conn1] chunk not full enough to trigger auto-split { a: 609.072465216741 } m30999| Mon Dec 17 15:31:42.063 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 439.6139404270798 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.064 [conn1] chunk not full enough to trigger auto-split { a: 439.5047740545124 } m30999| Mon Dec 17 15:31:42.144 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|13||000000000000000000000000min: { a: MinKey }max: { a: 0.3993422724306583 } dataWritten: 207635 splitThreshold: 943718 m30999| Mon Dec 17 15:31:42.144 [conn1] chunk not full enough to trigger auto-split no split entry m30001| Mon Dec 17 15:31:42.144 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 0.3993422724306583 } m30001| Mon Dec 17 15:31:42.178 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.1, size: 32MB, took 0.928 secs m30999| Mon Dec 17 15:31:42.225 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|11||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 859.3603172339499 } dataWritten: 209820 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.225 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:42.226 [conn4] max number of 
requested split points reached (2) before the end of chunk test.foo { : 738.9611077960581 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:42.226 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 738.9611077960581 }, max: { a: 859.3603172339499 }, from: "shard0001", splitKeys: [ { a: 800.5099997390062 } ], shardId: "test.foo-a_738.9611077960581", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:42.227 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812ec94e4981dc6c1af9 m30001| Mon Dec 17 15:31:42.227 [conn4] splitChunk accepted at version 1|22||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:42.228 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:42-11", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776302228), what: "split", ns: "test.foo", details: { before: { min: { a: 738.9611077960581 }, max: { a: 859.3603172339499 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 738.9611077960581 }, max: { a: 800.5099997390062 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 800.5099997390062 }, max: { a: 859.3603172339499 }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:42.228 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:31:42.230 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:42.232 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 738.9611077960581 } m30999| Mon Dec 17 15:31:42.229 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 1|24||50cf812d5ec0810ee359b569 based on: 1|22||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:42.229 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|11||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 859.3603172339499 } on: { a: 800.5099997390062 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:42.229 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|24, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 14 m30999| Mon Dec 17 15:31:42.229 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:42.230 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 439.6139404270798 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.231 [conn1] chunk not full enough to trigger auto-split { a: 423.9307593088597 } m30999| Mon Dec 17 15:31:42.232 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 738.9611077960581 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.235 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 1|26||50cf812d5ec0810ee359b569 
based on: 1|24||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:42.236 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 738.9611077960581 } on: { a: 672.8275574278086 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:42.236 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|26, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 15 m30999| Mon Dec 17 15:31:42.236 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30001| Mon Dec 17 15:31:42.232 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 609.4723071437329 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:42.233 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 609.4723071437329 }, max: { a: 738.9611077960581 }, from: "shard0001", splitKeys: [ { a: 672.8275574278086 } ], shardId: "test.foo-a_609.4723071437329", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:42.233 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812ec94e4981dc6c1afa m30001| Mon Dec 17 15:31:42.234 [conn4] splitChunk accepted at version 1|24||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:42.234 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:42-12", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776302234), what: "split", ns: "test.foo", details: { before: { min: { a: 609.4723071437329 }, max: { a: 738.9611077960581 }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 609.4723071437329 }, max: { a: 672.8275574278086 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 672.8275574278086 }, max: { a: 738.9611077960581 }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:42.235 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
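
The splitChunk commands above are bracketed by distributed-lock lines: before committing a split, the shard takes the per-collection lock (named test.foo/<host>:<port>:<start-time>:<rand>) on the config server and releases it once the metadata event is logged. The lock state is ordinary config-database data; a way to watch it while a split or migration is in flight, assuming the usual keying of config.locks by namespace:

    // Inspect the distributed lock used by the splitChunk requests above.
    db.getSiblingDB("config").locks.find({ _id: "test.foo" }).pretty();
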
m30999| Mon Dec 17 15:31:42.237 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 439.6139404270798 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.237 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 439.6139404270798 } m30999| Mon Dec 17 15:31:42.238 [conn1] chunk not full enough to trigger auto-split { a: 423.7206419929862 } m30999| Mon Dec 17 15:31:42.238 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { a: 800.5099997390062 }max: { a: 859.3603172339499 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.238 [conn4] request split points lookup for chunk test.foo { : 800.5099997390062 } -->> { : 859.3603172339499 } m30999| Mon Dec 17 15:31:42.238 [conn1] chunk not full enough to trigger auto-split { a: 852.8323580976576 } m30999| Mon Dec 17 15:31:42.239 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|12||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 999.9956642277539 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.239 [conn4] request split points lookup for chunk test.foo { : 859.3603172339499 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:42.240 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 859.3603172339499 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:42.240 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 859.3603172339499 }, max: { a: 999.9956642277539 }, from: "shard0001", splitKeys: [ { a: 922.822616994381 } ], shardId: "test.foo-a_859.3603172339499", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:42.241 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812ec94e4981dc6c1afb m30001| Mon Dec 17 15:31:42.242 [conn4] splitChunk accepted at version 1|26||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:42.242 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:42-13", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776302242), what: "split", ns: "test.foo", details: { before: { min: { a: 859.3603172339499 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 859.3603172339499 }, max: { a: 922.822616994381 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 922.822616994381 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:42.242 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
m30999| Mon Dec 17 15:31:42.243 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 1|28||50cf812d5ec0810ee359b569 based on: 1|26||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:42.243 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|12||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 999.9956642277539 } on: { a: 922.822616994381 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:42.243 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|28, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 16 m30999| Mon Dec 17 15:31:42.243 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:42.245 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 538.5234889108688 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.245 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 538.5234889108688 } m30999| Mon Dec 17 15:31:42.246 [conn1] chunk not full enough to trigger auto-split { a: 500.1391454134136 } m30999| Mon Dec 17 15:31:42.246 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|16||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 211.6570973303169 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.246 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:42.247 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 89.16067937389016 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:42.247 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 89.16067937389016 }, max: { a: 211.6570973303169 }, from: "shard0001", splitKeys: [ { a: 152.16144034639 } ], shardId: "test.foo-a_89.16067937389016", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:42.248 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812ec94e4981dc6c1afc m30001| Mon Dec 17 15:31:42.248 [conn4] splitChunk accepted at version 1|28||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:42.249 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:42-14", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776302249), what: "split", ns: "test.foo", details: { before: { min: { a: 89.16067937389016 }, max: { a: 211.6570973303169 }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 89.16067937389016 }, max: { a: 152.16144034639 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 152.16144034639 }, max: { a: 211.6570973303169 }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:42.249 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
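
The version strings such as 1|28||50cf812d5ec0810ee359b569 read major|minor||epoch. A split leaves the major component alone and hands the two halves the next two minor values (the 1|16 chunk above comes back as 1|29 and 1|30); the epoch identifies the collection's incarnation, and the major component moves only when a chunk migrates between shards. The same versions can be read straight out of the config metadata:

    // Dump current chunk boundaries and versions for test.foo.
    // lastmod encodes major|minor; lastmodEpoch is the ||epoch part.
    db.getSiblingDB("config").chunks.find({ ns: "test.foo" }).sort({ min: 1 })
        .forEach(function (c) {
            print(tojson(c.min) + " -> " + tojson(c.max) + "  " +
                  tojson(c.lastmod) + " " + c.lastmodEpoch);
        });
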
m30999| Mon Dec 17 15:31:42.250 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 1|30||50cf812d5ec0810ee359b569 based on: 1|28||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:42.250 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|16||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 211.6570973303169 } on: { a: 152.16144034639 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:42.250 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|30, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 17 m30999| Mon Dec 17 15:31:42.250 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:42.252 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { a: 672.8275574278086 }max: { a: 738.9611077960581 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.252 [conn4] request split points lookup for chunk test.foo { : 672.8275574278086 } -->> { : 738.9611077960581 } m30999| Mon Dec 17 15:31:42.252 [conn1] chunk not full enough to trigger auto-split { a: 732.5854708906263 } m30999| Mon Dec 17 15:31:42.253 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 439.6139404270798 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.253 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 439.6139404270798 } m30999| Mon Dec 17 15:31:42.254 [conn1] chunk not full enough to trigger auto-split { a: 423.2909141574055 } m30999| Mon Dec 17 15:31:42.254 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 538.5234889108688 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.254 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 538.5234889108688 } m30999| Mon Dec 17 15:31:42.255 [conn1] chunk not full enough to trigger auto-split { a: 499.7235860209912 } m30999| Mon Dec 17 15:31:42.255 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|29||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 152.16144034639 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.255 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 152.16144034639 } m30999| Mon Dec 17 15:31:42.255 [conn1] chunk not full enough to trigger auto-split { a: 151.8732234835625 } m30999| Mon Dec 17 15:31:42.256 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|17||000000000000000000000000min: { a: 538.5234889108688 }max: { a: 609.4723071437329 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.256 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 609.4723071437329 } m30999| Mon Dec 17 15:31:42.256 [conn1] chunk not full enough to trigger auto-split { a: 596.0452295839787 } m30999| Mon Dec 17 15:31:42.257 [conn1] about to initiate autosplit: 
ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { a: 922.822616994381 }max: { a: 999.9956642277539 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.257 [conn4] request split points lookup for chunk test.foo { : 922.822616994381 } -->> { : 999.9956642277539 } m30999| Mon Dec 17 15:31:42.257 [conn1] chunk not full enough to trigger auto-split { a: 983.2947214599699 } m30999| Mon Dec 17 15:31:42.258 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|19||000000000000000000000000min: { a: 211.6570973303169 }max: { a: 285.7821767684072 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.258 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 285.7821767684072 } m30999| Mon Dec 17 15:31:42.258 [conn1] chunk not full enough to trigger auto-split { a: 270.609522704035 } m30999| Mon Dec 17 15:31:42.259 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|15||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 89.16067937389016 } dataWritten: 209787 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.259 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 89.16067937389016 } m30999| Mon Dec 17 15:31:42.259 [conn1] chunk not full enough to trigger auto-split { a: 59.66903013177216 } ========> Saved total of 9000 documents ========> Saved total of 10000 documents m30999| Mon Dec 17 15:31:42.261 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { a: 152.16144034639 }max: { a: 211.6570973303169 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.261 [conn1] chunk not full enough to trigger auto-split { a: 207.2356226854026 } m30999| Mon Dec 17 15:31:42.262 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|27||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 922.822616994381 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.262 [conn1] chunk not full enough to trigger auto-split { a: 922.5563595537096 } m30999| Mon Dec 17 15:31:42.263 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|20||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 364.6896595600992 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.263 [conn1] chunk not full enough to trigger auto-split { a: 339.9921678937972 } m30999| Mon Dec 17 15:31:42.265 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|25||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 672.8275574278086 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.265 [conn1] chunk not full enough to trigger auto-split { a: 672.0906142145395 } m30999| Mon Dec 17 15:31:42.266 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { a: 800.5099997390062 }max: { a: 859.3603172339499 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.266 [conn1] chunk not full enough to trigger auto-split { a: 852.3911754600704 } m30999| Mon Dec 17 15:31:42.268 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|23||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 800.5099997390062 } dataWritten: 
209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.268 [conn1] chunk not full enough to trigger auto-split { a: 799.5733672287315 } m30999| Mon Dec 17 15:31:42.315 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|13||000000000000000000000000min: { a: MinKey }max: { a: 0.3993422724306583 } dataWritten: 207635 splitThreshold: 943718 m30999| Mon Dec 17 15:31:42.315 [conn1] chunk not full enough to trigger auto-split no split entry m30001| Mon Dec 17 15:31:42.261 [conn4] request split points lookup for chunk test.foo { : 152.16144034639 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:42.262 [conn4] request split points lookup for chunk test.foo { : 859.3603172339499 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:42.263 [conn4] request split points lookup for chunk test.foo { : 285.7821767684072 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:42.265 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 672.8275574278086 } m30001| Mon Dec 17 15:31:42.266 [conn4] request split points lookup for chunk test.foo { : 800.5099997390062 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:42.268 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:31:42.315 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 0.3993422724306583 } m30999| Mon Dec 17 15:31:42.436 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 538.5234889108688 } dataWritten: 209820 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:42.437 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 538.5234889108688 } m30999| Mon Dec 17 15:31:42.437 [conn1] chunk not full enough to trigger auto-split { a: 488.6163100600243 } ========> Saved total of 11000 documents m30999| Mon Dec 17 15:31:42.487 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|20||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 364.6896595600992 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.488 [conn1] chunk not full enough to trigger auto-split { a: 327.6310036890209 } m30999| Mon Dec 17 15:31:42.510 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|15||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 89.16067937389016 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.511 [conn1] chunk not full enough to trigger auto-split { a: 45.39303039200604 } m30999| Mon Dec 17 15:31:42.512 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 439.6139404270798 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.513 [conn1] chunk not full enough to trigger auto-split { a: 406.7156249657273 } m30999| Mon Dec 17 15:31:42.522 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { a: 922.822616994381 }max: { a: 999.9956642277539 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.522 [conn1] chunk not full enough to trigger auto-split { a: 968.1905473116785 } m30999| Mon Dec 17 15:31:42.545 [conn1] about to initiate autosplit: ns:test.fooshard: 
shard0001:localhost:30001lastmod: 1|19||000000000000000000000000min: { a: 211.6570973303169 }max: { a: 285.7821767684072 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.546 [conn1] chunk not full enough to trigger auto-split { a: 254.3130712583661 } m30999| Mon Dec 17 15:31:42.548 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|17||000000000000000000000000min: { a: 538.5234889108688 }max: { a: 609.4723071437329 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.549 [conn1] chunk not full enough to trigger auto-split { a: 580.6024512276053 } m30999| Mon Dec 17 15:31:42.562 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|29||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 152.16144034639 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.562 [conn1] chunk not full enough to trigger auto-split { a: 133.6418162100017 } m30999| Mon Dec 17 15:31:42.565 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { a: 672.8275574278086 }max: { a: 738.9611077960581 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.565 [conn1] chunk not full enough to trigger auto-split { a: 716.2248154636472 } m30999| Mon Dec 17 15:31:42.566 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|25||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 672.8275574278086 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.566 [conn1] chunk not full enough to trigger auto-split { a: 654.8045969102532 } m30999| Mon Dec 17 15:31:42.603 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|23||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 800.5099997390062 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.604 [conn1] chunk not full enough to trigger auto-split { a: 781.8832034245133 } m30999| Mon Dec 17 15:31:42.609 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|27||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 922.822616994381 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.610 [conn1] chunk not full enough to trigger auto-split { a: 902.9837928246707 } m30999| Mon Dec 17 15:31:42.639 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { a: 800.5099997390062 }max: { a: 859.3603172339499 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.640 [conn1] chunk not full enough to trigger auto-split { a: 838.9011472463608 } m30999| Mon Dec 17 15:31:42.643 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { a: 152.16144034639 }max: { a: 211.6570973303169 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.644 [conn1] chunk not full enough to trigger auto-split { a: 192.0649698004127 } ========> Saved total of 12000 documents ========> Saved total of 13000 documents m30001| Mon Dec 17 15:31:42.487 [conn4] request split points lookup for chunk test.foo { : 285.7821767684072 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:42.510 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 89.16067937389016 } m30001| Mon Dec 
17 15:31:42.512 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.2, filling with zeroes... m30001| Mon Dec 17 15:31:42.512 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:42.522 [conn4] request split points lookup for chunk test.foo { : 922.822616994381 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:42.545 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:42.548 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 609.4723071437329 } m30001| Mon Dec 17 15:31:42.562 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 152.16144034639 } m30001| Mon Dec 17 15:31:42.565 [conn4] request split points lookup for chunk test.foo { : 672.8275574278086 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:42.566 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 672.8275574278086 } m30001| Mon Dec 17 15:31:42.603 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:31:42.609 [conn4] request split points lookup for chunk test.foo { : 859.3603172339499 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:42.639 [conn4] request split points lookup for chunk test.foo { : 800.5099997390062 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:42.643 [conn4] request split points lookup for chunk test.foo { : 152.16144034639 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:42.722 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:42.723 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 439.6139404270798 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:42.771 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 439.6139404270798 }, max: { a: 538.5234889108688 }, from: "shard0001", splitKeys: [ { a: 480.0211163237691 } ], shardId: "test.foo-a_439.6139404270798", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:42.818 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812ec94e4981dc6c1afd m30001| Mon Dec 17 15:31:42.819 [conn4] splitChunk accepted at version 1|30||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:42.820 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:42-15", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776302820), what: "split", ns: "test.foo", details: { before: { min: { a: 439.6139404270798 }, max: { a: 538.5234889108688 }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 439.6139404270798 }, max: { a: 480.0211163237691 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 480.0211163237691 }, max: { a: 538.5234889108688 }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:42.820 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
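
One irregular number worth flagging: the { a: MinKey } chunk is checked against splitThreshold 943718 rather than 1048576. That matches the edge-chunk rule of this era's mongos, where a chunk touching MinKey or MaxKey has its threshold scaled down by ten percent so the unbounded ends of the key range split sooner; its lookups still report "no split entry" because the chunk holds too little data to yield a midpoint. The arithmetic:

    // Edge-chunk threshold for { a: MinKey } -> { a: 0.3993... }, as logged.
    var base = 1048576;             // threshold used for interior chunks
    print(Math.floor(base * 0.9));  // 943718, the value in the log
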
m30001| Mon Dec 17 15:31:42.823 [conn4] request split points lookup for chunk test.foo { : 480.0211163237691 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:42.824 [conn4] request split points lookup for chunk test.foo { : 285.7821767684072 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:42.825 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 285.7821767684072 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:42.825 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 285.7821767684072 }, max: { a: 364.6896595600992 }, from: "shard0001", splitKeys: [ { a: 323.8981119357049 } ], shardId: "test.foo-a_285.7821767684072", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:42.826 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812ec94e4981dc6c1afe m30001| Mon Dec 17 15:31:42.827 [conn4] splitChunk accepted at version 1|32||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:42.827 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:42-16", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776302827), what: "split", ns: "test.foo", details: { before: { min: { a: 285.7821767684072 }, max: { a: 364.6896595600992 }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 285.7821767684072 }, max: { a: 323.8981119357049 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 323.8981119357049 }, max: { a: 364.6896595600992 }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:42.827 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
m30001| Mon Dec 17 15:31:42.830 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 609.4723071437329 } m30001| Mon Dec 17 15:31:42.831 [conn4] request split points lookup for chunk test.foo { : 800.5099997390062 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:42.832 [conn4] request split points lookup for chunk test.foo { : 922.822616994381 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:42.834 [conn4] request split points lookup for chunk test.foo { : 859.3603172339499 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:42.835 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:42.836 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 152.16144034639 } m30001| Mon Dec 17 15:31:42.837 [conn4] request split points lookup for chunk test.foo { : 152.16144034639 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:42.838 [conn4] request split points lookup for chunk test.foo { : 480.0211163237691 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:42.839 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 672.8275574278086 } m30001| Mon Dec 17 15:31:42.841 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 89.16067937389016 } m30001| Mon Dec 17 15:31:42.841 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.3993422724306583 } -->> { : 89.16067937389016 } m30001| Mon Dec 17 15:31:42.842 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 0.3993422724306583 }, max: { a: 89.16067937389016 }, from: "shard0001", splitKeys: [ { a: 40.64535931684077 } ], shardId: "test.foo-a_0.3993422724306583", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:42.842 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812ec94e4981dc6c1aff m30001| Mon Dec 17 15:31:42.843 [conn4] splitChunk accepted at version 1|34||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:42.843 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:42-17", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776302843), what: "split", ns: "test.foo", details: { before: { min: { a: 0.3993422724306583 }, max: { a: 89.16067937389016 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 0.3993422724306583 }, max: { a: 40.64535931684077 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 40.64535931684077 }, max: { a: 89.16067937389016 }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:42.844 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
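
The splitChunk requests m30001 keeps receiving are internal traffic from mongos, and the "max number of requested split points reached (2)" lines mean the shard hit its requested cap while scanning, i.e. the chunk was big enough to yield a midpoint. To reproduce one of these splits by hand, the supported route is the shell helpers on a mongos connection, which funnel into the same shard-side command, for example:

    // Equivalent manual splits, issued through mongos.
    sh.splitAt("test.foo", { a: 480.0211163237691 }); // split at an exact key
    sh.splitFind("test.foo", { a: 500 });             // split the chunk owning
                                                      // a:500 at its median
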
m30001| Mon Dec 17 15:31:42.847 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 609.4723071437329 } m30001| Mon Dec 17 15:31:42.848 [conn4] request split points lookup for chunk test.foo { : 285.7821767684072 } -->> { : 323.8981119357049 } m30001| Mon Dec 17 15:31:42.849 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:31:42.850 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:42.851 [conn4] request split points lookup for chunk test.foo { : 859.3603172339499 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:42.852 [conn4] request split points lookup for chunk test.foo { : 672.8275574278086 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:42.853 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:42.855 [conn4] request split points lookup for chunk test.foo { : 40.64535931684077 } -->> { : 89.16067937389016 } m30001| Mon Dec 17 15:31:42.856 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 152.16144034639 } m30001| Mon Dec 17 15:31:42.857 [conn4] request split points lookup for chunk test.foo { : 922.822616994381 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:42.858 [conn4] request split points lookup for chunk test.foo { : 152.16144034639 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:42.859 [conn4] request split points lookup for chunk test.foo { : 323.8981119357049 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:42.861 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 672.8275574278086 } m30001| Mon Dec 17 15:31:42.862 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 40.64535931684077 } m30001| Mon Dec 17 15:31:42.865 [conn4] request split points lookup for chunk test.foo { : 800.5099997390062 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:42.866 [conn4] request split points lookup for chunk test.foo { : 480.0211163237691 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:42.868 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 480.0211163237691 } ========> Saved total of 14000 documents ========> Saved total of 15000 documents m30999| Mon Dec 17 15:31:42.722 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 538.5234889108688 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.821 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 1|32||50cf812d5ec0810ee359b569 based on: 1|30||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:42.821 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 538.5234889108688 } on: { a: 480.0211163237691 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:42.821 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|32, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 18 m30999| Mon Dec 17 15:31:42.821 [conn1] 
setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:42.822 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { a: 480.0211163237691 }max: { a: 538.5234889108688 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.823 [conn1] chunk not full enough to trigger auto-split { a: 519.102887250483 } m30999| Mon Dec 17 15:31:42.824 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|20||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 364.6896595600992 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.828 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 1|34||50cf812d5ec0810ee359b569 based on: 1|32||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:42.828 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|20||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 364.6896595600992 } on: { a: 323.8981119357049 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:42.828 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|34, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 19 m30999| Mon Dec 17 15:31:42.828 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:42.830 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|17||000000000000000000000000min: { a: 538.5234889108688 }max: { a: 609.4723071437329 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.831 [conn1] chunk not full enough to trigger auto-split { a: 577.5434838142246 } m30999| Mon Dec 17 15:31:42.831 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { a: 800.5099997390062 }max: { a: 859.3603172339499 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.832 [conn1] chunk not full enough to trigger auto-split { a: 837.9802114795893 } m30999| Mon Dec 17 15:31:42.832 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { a: 922.822616994381 }max: { a: 999.9956642277539 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.833 [conn1] chunk not full enough to trigger auto-split { a: 961.870280560106 } m30999| Mon Dec 17 15:31:42.834 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|27||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 922.822616994381 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.834 [conn1] chunk not full enough to trigger auto-split { a: 900.6475717760623 } m30999| Mon Dec 17 15:31:42.835 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 439.6139404270798 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.835 [conn1] chunk not full enough to trigger auto-split { a: 402.4835014715791 } m30999| Mon Dec 17 15:31:42.836 [conn1] 
about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|29||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 152.16144034639 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.836 [conn1] chunk not full enough to trigger auto-split { a: 130.8323324192315 } m30999| Mon Dec 17 15:31:42.837 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { a: 152.16144034639 }max: { a: 211.6570973303169 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.837 [conn1] chunk not full enough to trigger auto-split { a: 190.9332803916186 } m30999| Mon Dec 17 15:31:42.838 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { a: 480.0211163237691 }max: { a: 538.5234889108688 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.838 [conn1] chunk not full enough to trigger auto-split { a: 518.9910258632153 } m30999| Mon Dec 17 15:31:42.839 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|25||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 672.8275574278086 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.840 [conn1] chunk not full enough to trigger auto-split { a: 651.796908583492 } m30999| Mon Dec 17 15:31:42.840 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|15||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 89.16067937389016 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.844 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 1|36||50cf812d5ec0810ee359b569 based on: 1|34||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:42.845 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|15||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 89.16067937389016 } on: { a: 40.64535931684077 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:42.845 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|36, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 20 m30999| Mon Dec 17 15:31:42.845 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:42.846 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|17||000000000000000000000000min: { a: 538.5234889108688 }max: { a: 609.4723071437329 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.847 [conn1] chunk not full enough to trigger auto-split { a: 577.389384387061 } m30999| Mon Dec 17 15:31:42.848 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|33||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 323.8981119357049 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.848 [conn1] chunk not full enough to trigger auto-split { a: 323.7252545077354 } m30999| Mon Dec 17 15:31:42.848 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|23||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 
800.5099997390062 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.849 [conn1] chunk not full enough to trigger auto-split { a: 780.2691576071084 } m30999| Mon Dec 17 15:31:42.850 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 439.6139404270798 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.851 [conn1] chunk not full enough to trigger auto-split { a: 402.0311622880399 } m30999| Mon Dec 17 15:31:42.851 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|27||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 922.822616994381 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.852 [conn1] chunk not full enough to trigger auto-split { a: 900.5956777837127 } m30999| Mon Dec 17 15:31:42.852 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { a: 672.8275574278086 }max: { a: 738.9611077960581 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.853 [conn1] chunk not full enough to trigger auto-split { a: 713.3843537885696 } m30999| Mon Dec 17 15:31:42.853 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|19||000000000000000000000000min: { a: 211.6570973303169 }max: { a: 285.7821767684072 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.854 [conn1] chunk not full enough to trigger auto-split { a: 251.4602127484977 } m30999| Mon Dec 17 15:31:42.854 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|36||000000000000000000000000min: { a: 40.64535931684077 }max: { a: 89.16067937389016 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.855 [conn1] chunk not full enough to trigger auto-split { a: 82.55857625044882 } m30999| Mon Dec 17 15:31:42.856 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|29||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 152.16144034639 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.856 [conn1] chunk not full enough to trigger auto-split { a: 130.8118677698076 } m30999| Mon Dec 17 15:31:42.857 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { a: 922.822616994381 }max: { a: 999.9956642277539 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.857 [conn1] chunk not full enough to trigger auto-split { a: 961.4515723660588 } m30999| Mon Dec 17 15:31:42.858 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { a: 152.16144034639 }max: { a: 211.6570973303169 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.859 [conn1] chunk not full enough to trigger auto-split { a: 190.2301511727273 } m30999| Mon Dec 17 15:31:42.859 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|34||000000000000000000000000min: { a: 323.8981119357049 }max: { a: 364.6896595600992 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.859 [conn1] chunk not full enough to trigger auto-split { a: 364.2386787105352 } m30999| Mon Dec 17 15:31:42.861 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 
1|25||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 672.8275574278086 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.861 [conn1] chunk not full enough to trigger auto-split { a: 651.3371714390814 } m30999| Mon Dec 17 15:31:42.862 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|35||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 40.64535931684077 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.862 [conn1] chunk not full enough to trigger auto-split { a: 40.50270980224013 } m30999| Mon Dec 17 15:31:42.865 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { a: 800.5099997390062 }max: { a: 859.3603172339499 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.866 [conn1] chunk not full enough to trigger auto-split { a: 837.966408347711 } m30999| Mon Dec 17 15:31:42.866 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { a: 480.0211163237691 }max: { a: 538.5234889108688 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.867 [conn1] chunk not full enough to trigger auto-split { a: 518.8678125850856 } m30999| Mon Dec 17 15:31:42.868 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 480.0211163237691 } dataWritten: 209787 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:42.869 [conn1] chunk not full enough to trigger auto-split { a: 479.677811730653 } m30999| Mon Dec 17 15:31:43.043 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|19||000000000000000000000000min: { a: 211.6570973303169 }max: { a: 285.7821767684072 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.047 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 21 version: 1|38||50cf812d5ec0810ee359b569 based on: 1|36||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.048 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|19||000000000000000000000000min: { a: 211.6570973303169 }max: { a: 285.7821767684072 } on: { a: 244.1017532255501 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.048 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|38, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 21 m30999| Mon Dec 17 15:31:43.048 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.048 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 439.6139404270798 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.052 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 1|40||50cf812d5ec0810ee359b569 based on: 1|38||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.052 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 
439.6139404270798 } on: { a: 395.6566429696977 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.052 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|40, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 22 m30999| Mon Dec 17 15:31:43.052 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.053 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { a: 800.5099997390062 }max: { a: 859.3603172339499 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.054 [conn1] chunk not full enough to trigger auto-split { a: 832.1996259037405 } m30999| Mon Dec 17 15:31:43.054 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { a: 922.822616994381 }max: { a: 999.9956642277539 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.058 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 23 version: 1|42||50cf812d5ec0810ee359b569 based on: 1|40||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.058 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { a: 922.822616994381 }max: { a: 999.9956642277539 } on: { a: 954.3487632181495 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.058 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|42, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 23 m30999| Mon Dec 17 15:31:43.058 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.059 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|37||000000000000000000000000min: { a: 211.6570973303169 }max: { a: 244.1017532255501 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.059 [conn1] chunk not full enough to trigger auto-split { a: 243.9334958326072 } m30999| Mon Dec 17 15:31:43.060 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 480.0211163237691 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.061 [conn1] chunk not full enough to trigger auto-split { a: 473.2317680027336 } m30999| Mon Dec 17 15:31:43.061 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { a: 244.1017532255501 }max: { a: 285.7821767684072 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.062 [conn1] chunk not full enough to trigger auto-split { a: 278.1604356132448 } m30999| Mon Dec 17 15:31:43.062 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|23||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 800.5099997390062 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.063 [conn1] chunk not full 
enough to trigger auto-split { a: 771.1914598476142 } m30999| Mon Dec 17 15:31:43.063 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|25||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 672.8275574278086 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.064 [conn1] chunk not full enough to trigger auto-split { a: 644.042810657993 } m30999| Mon Dec 17 15:31:43.065 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { a: 152.16144034639 }max: { a: 211.6570973303169 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.065 [conn1] chunk not full enough to trigger auto-split { a: 182.7473742887378 } m30999| Mon Dec 17 15:31:43.066 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { a: 672.8275574278086 }max: { a: 738.9611077960581 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.067 [conn1] chunk not full enough to trigger auto-split { a: 706.5783380530775 } m30999| Mon Dec 17 15:31:43.067 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { a: 395.6566429696977 }max: { a: 439.6139404270798 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.068 [conn1] chunk not full enough to trigger auto-split { a: 429.0330037474632 } m30999| Mon Dec 17 15:31:43.068 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|39||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 395.6566429696977 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.069 [conn1] chunk not full enough to trigger auto-split { a: 395.6454149447381 } m30999| Mon Dec 17 15:31:43.069 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|42||000000000000000000000000min: { a: 954.3487632181495 }max: { a: 999.9956642277539 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.069 [conn1] chunk not full enough to trigger auto-split { a: 989.4234172534198 } m30999| Mon Dec 17 15:31:43.070 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|33||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 323.8981119357049 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.071 [conn1] chunk not full enough to trigger auto-split { a: 316.9665145687759 } m30999| Mon Dec 17 15:31:43.071 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|29||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 152.16144034639 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.072 [conn1] chunk not full enough to trigger auto-split { a: 124.1038762964308 } m30999| Mon Dec 17 15:31:43.072 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { a: 480.0211163237691 }max: { a: 538.5234889108688 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.073 [conn1] chunk not full enough to trigger auto-split { a: 512.2824460268021 } m30999| Mon Dec 17 15:31:43.074 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|17||000000000000000000000000min: { a: 538.5234889108688 }max: { a: 609.4723071437329 } dataWritten: 210766 
splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.077 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 24 version: 1|44||50cf812d5ec0810ee359b569 based on: 1|42||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.078 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|17||000000000000000000000000min: { a: 538.5234889108688 }max: { a: 609.4723071437329 } on: { a: 571.1331805214286 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.078 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|44, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 24 m30999| Mon Dec 17 15:31:43.078 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.078 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|27||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 922.822616994381 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.079 [conn1] chunk not full enough to trigger auto-split { a: 892.9090437013656 } m30999| Mon Dec 17 15:31:43.079 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|44||000000000000000000000000min: { a: 571.1331805214286 }max: { a: 609.4723071437329 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.080 [conn1] chunk not full enough to trigger auto-split { a: 601.7805358860642 } m30999| Mon Dec 17 15:31:43.081 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { a: 244.1017532255501 }max: { a: 285.7821767684072 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.081 [conn1] chunk not full enough to trigger auto-split { a: 277.9746693558991 } m30999| Mon Dec 17 15:31:43.082 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|41||000000000000000000000000min: { a: 922.822616994381 }max: { a: 954.3487632181495 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.082 [conn1] chunk not full enough to trigger auto-split { a: 954.1610202286392 } m30999| Mon Dec 17 15:31:43.083 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|25||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 672.8275574278086 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.083 [conn1] chunk not full enough to trigger auto-split { a: 643.7892750836909 } m30999| Mon Dec 17 15:31:43.084 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|42||000000000000000000000000min: { a: 954.3487632181495 }max: { a: 999.9956642277539 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.085 [conn1] chunk not full enough to trigger auto-split { a: 989.2678991891444 } m30999| Mon Dec 17 15:31:43.085 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|36||000000000000000000000000min: { a: 40.64535931684077 }max: { a: 89.16067937389016 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.086 [conn1] chunk not full enough to trigger auto-split { a: 75.53770230151713 } m30999| Mon Dec 17 
15:31:43.086 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { a: 672.8275574278086 }max: { a: 738.9611077960581 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.087 [conn1] chunk not full enough to trigger auto-split { a: 706.4106951002032 } m30999| Mon Dec 17 15:31:43.087 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { a: 480.0211163237691 }max: { a: 538.5234889108688 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.088 [conn1] chunk not full enough to trigger auto-split { a: 512.2824460268021 } m30999| Mon Dec 17 15:31:43.088 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { a: 152.16144034639 }max: { a: 211.6570973303169 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.089 [conn1] chunk not full enough to trigger auto-split { a: 182.7473742887378 } m30999| Mon Dec 17 15:31:43.089 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|35||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 40.64535931684077 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.090 [conn1] chunk not full enough to trigger auto-split { a: 33.72849710285664 } m30999| Mon Dec 17 15:31:43.090 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|29||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 152.16144034639 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.091 [conn1] chunk not full enough to trigger auto-split { a: 124.0145633928478 } m30999| Mon Dec 17 15:31:43.092 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|23||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 800.5099997390062 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.092 [conn1] chunk not full enough to trigger auto-split { a: 771.1578027810901 } m30999| Mon Dec 17 15:31:43.093 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 480.0211163237691 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.094 [conn1] chunk not full enough to trigger auto-split { a: 473.0953800026327 } m30999| Mon Dec 17 15:31:43.094 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|34||000000000000000000000000min: { a: 323.8981119357049 }max: { a: 364.6896595600992 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.095 [conn1] chunk not full enough to trigger auto-split { a: 354.1802626568824 } m30999| Mon Dec 17 15:31:43.097 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|33||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 323.8981119357049 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.097 [conn1] chunk not full enough to trigger auto-split { a: 316.9665145687759 } m30999| Mon Dec 17 15:31:43.099 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|39||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 395.6566429696977 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.099 [conn1] chunk not full 
enough to trigger auto-split { a: 395.2888562344015 } m30999| Mon Dec 17 15:31:43.100 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|43||000000000000000000000000min: { a: 538.5234889108688 }max: { a: 571.1331805214286 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.043 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:43.044 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 211.6570973303169 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:43.044 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 211.6570973303169 }, max: { a: 285.7821767684072 }, from: "shard0001", splitKeys: [ { a: 244.1017532255501 } ], shardId: "test.foo-a_211.6570973303169", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.045 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b00 m30001| Mon Dec 17 15:31:43.046 [conn4] splitChunk accepted at version 1|36||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.046 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-18", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303046), what: "split", ns: "test.foo", details: { before: { min: { a: 211.6570973303169 }, max: { a: 285.7821767684072 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 211.6570973303169 }, max: { a: 244.1017532255501 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 244.1017532255501 }, max: { a: 285.7821767684072 }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.047 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
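The block above is one complete autosplit round trip: mongos notices a chunk's dataWritten approaching its splitThreshold, the shard looks up candidate split points, and splitChunk commits the split under the collection's distributed lock before logging a "split" metadata event. The same round trip can be driven by hand; a minimal mongo-shell sketch, reusing the namespace, key pattern, and split key from the log (the choice of connection targets is an assumption):

    // On a direct connection to the shard (localhost:30001): ask for candidate
    // split points in one chunk, mirroring "request split points lookup".
    // maxChunkSizeBytes matches the splitThreshold (1048576) seen in the log.
    db.adminCommand({
        splitVector: "test.foo",
        keyPattern: { a: 1.0 },
        min: { a: 211.6570973303169 },
        max: { a: 285.7821767684072 },
        maxChunkSizeBytes: 1048576
    })

    // Via mongos: split the chunk at an explicit key, equivalent to the
    // splitKeys the shard chose above.
    sh.splitAt("test.foo", { a: 244.1017532255501 })
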
m30001| Mon Dec 17 15:31:43.048 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:43.049 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 364.6896595600992 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:43.049 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 364.6896595600992 }, max: { a: 439.6139404270798 }, from: "shard0001", splitKeys: [ { a: 395.6566429696977 } ], shardId: "test.foo-a_364.6896595600992", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.050 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b01 m30001| Mon Dec 17 15:31:43.050 [conn4] splitChunk accepted at version 1|38||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.051 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-19", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303051), what: "split", ns: "test.foo", details: { before: { min: { a: 364.6896595600992 }, max: { a: 439.6139404270798 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 364.6896595600992 }, max: { a: 395.6566429696977 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 395.6566429696977 }, max: { a: 439.6139404270798 }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.051 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:31:43.053 [conn4] request split points lookup for chunk test.foo { : 800.5099997390062 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:43.054 [conn4] request split points lookup for chunk test.foo { : 922.822616994381 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:43.055 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 922.822616994381 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:43.055 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 922.822616994381 }, max: { a: 999.9956642277539 }, from: "shard0001", splitKeys: [ { a: 954.3487632181495 } ], shardId: "test.foo-a_922.822616994381", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.056 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b02 m30001| Mon Dec 17 15:31:43.057 [conn4] splitChunk accepted at version 1|40||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.057 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-20", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303057), what: "split", ns: "test.foo", details: { before: { min: { a: 922.822616994381 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 922.822616994381 }, max: { a: 954.3487632181495 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 954.3487632181495 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.057 [conn4] distributed lock 
'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:31:43.059 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 244.1017532255501 } m30001| Mon Dec 17 15:31:43.060 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 480.0211163237691 } m30001| Mon Dec 17 15:31:43.061 [conn4] request split points lookup for chunk test.foo { : 244.1017532255501 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:43.062 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:31:43.063 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 672.8275574278086 } m30001| Mon Dec 17 15:31:43.065 [conn4] request split points lookup for chunk test.foo { : 152.16144034639 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:43.066 [conn4] request split points lookup for chunk test.foo { : 672.8275574278086 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:43.067 [conn4] request split points lookup for chunk test.foo { : 395.6566429696977 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:43.068 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 395.6566429696977 } m30001| Mon Dec 17 15:31:43.069 [conn4] request split points lookup for chunk test.foo { : 954.3487632181495 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:43.070 [conn4] request split points lookup for chunk test.foo { : 285.7821767684072 } -->> { : 323.8981119357049 } m30001| Mon Dec 17 15:31:43.071 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 152.16144034639 } m30001| Mon Dec 17 15:31:43.072 [conn4] request split points lookup for chunk test.foo { : 480.0211163237691 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:43.074 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 609.4723071437329 } m30001| Mon Dec 17 15:31:43.074 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 538.5234889108688 } -->> { : 609.4723071437329 } m30001| Mon Dec 17 15:31:43.074 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 538.5234889108688 }, max: { a: 609.4723071437329 }, from: "shard0001", splitKeys: [ { a: 571.1331805214286 } ], shardId: "test.foo-a_538.5234889108688", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.075 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b03 m30001| Mon Dec 17 15:31:43.076 [conn4] splitChunk accepted at version 1|42||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.076 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-21", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303076), what: "split", ns: "test.foo", details: { before: { min: { a: 538.5234889108688 }, max: { a: 609.4723071437329 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 538.5234889108688 }, max: { a: 571.1331805214286 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 571.1331805214286 }, max: { a: 609.4723071437329 }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.077 [conn4] distributed 
lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:31:43.078 [conn4] request split points lookup for chunk test.foo { : 859.3603172339499 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:43.079 [conn4] request split points lookup for chunk test.foo { : 571.1331805214286 } -->> { : 609.4723071437329 } m30001| Mon Dec 17 15:31:43.081 [conn4] request split points lookup for chunk test.foo { : 244.1017532255501 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:43.082 [conn4] request split points lookup for chunk test.foo { : 922.822616994381 } -->> { : 954.3487632181495 } m30001| Mon Dec 17 15:31:43.083 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 672.8275574278086 } m30001| Mon Dec 17 15:31:43.084 [conn4] request split points lookup for chunk test.foo { : 954.3487632181495 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:43.085 [conn4] request split points lookup for chunk test.foo { : 40.64535931684077 } -->> { : 89.16067937389016 } m30001| Mon Dec 17 15:31:43.086 [conn4] request split points lookup for chunk test.foo { : 672.8275574278086 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:43.087 [conn4] request split points lookup for chunk test.foo { : 480.0211163237691 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:43.088 [conn4] request split points lookup for chunk test.foo { : 152.16144034639 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:43.089 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 40.64535931684077 } m30001| Mon Dec 17 15:31:43.091 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 152.16144034639 } m30001| Mon Dec 17 15:31:43.092 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:31:43.093 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 480.0211163237691 } m30001| Mon Dec 17 15:31:43.094 [conn4] request split points lookup for chunk test.foo { : 323.8981119357049 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:43.097 [conn4] request split points lookup for chunk test.foo { : 285.7821767684072 } -->> { : 323.8981119357049 } m30001| Mon Dec 17 15:31:43.099 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 395.6566429696977 } m30001| Mon Dec 17 15:31:43.100 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 571.1331805214286 } m30999| Mon Dec 17 15:31:43.101 [conn1] chunk not full enough to trigger auto-split { a: 571.0307229310274 } m30999| Mon Dec 17 15:31:43.102 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { a: 800.5099997390062 }max: { a: 859.3603172339499 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.102 [conn4] request split points lookup for chunk test.foo { : 800.5099997390062 } -->> { : 859.3603172339499 } m30999| Mon Dec 17 15:31:43.103 [conn1] chunk not full enough to trigger auto-split { a: 832.0040162652731 } m30999| Mon Dec 17 15:31:43.103 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { a: 395.6566429696977 }max: { a: 439.6139404270798 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.103 [conn4] request split points lookup for chunk test.foo { : 
395.6566429696977 } -->> { : 439.6139404270798 } m30999| Mon Dec 17 15:31:43.104 [conn1] chunk not full enough to trigger auto-split { a: 428.9772142656147 } m30999| Mon Dec 17 15:31:43.113 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|37||000000000000000000000000min: { a: 211.6570973303169 }max: { a: 244.1017532255501 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.113 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 244.1017532255501 } m30999| Mon Dec 17 15:31:43.113 [conn1] chunk not full enough to trigger auto-split { a: 243.4487782884389 } ========> Saved total of 16000 documents ========> Saved total of 17000 documents ========> Saved total of 18000 documents m30999| Mon Dec 17 15:31:43.252 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|13||000000000000000000000000min: { a: MinKey }max: { a: 0.3993422724306583 } dataWritten: 209690 splitThreshold: 943718 m30999| Mon Dec 17 15:31:43.252 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Mon Dec 17 15:31:43.297 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|25||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 672.8275574278086 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.301 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 25 version: 1|46||50cf812d5ec0810ee359b569 based on: 1|44||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.301 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|25||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 672.8275574278086 } on: { a: 637.662521796301 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.301 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|46, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 25 m30999| Mon Dec 17 15:31:43.301 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.302 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { a: 672.8275574278086 }max: { a: 738.9611077960581 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.306 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 26 version: 1|48||50cf812d5ec0810ee359b569 based on: 1|46||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.306 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { a: 672.8275574278086 }max: { a: 738.9611077960581 } on: { a: 702.2782645653933 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.306 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|48, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 26 m30999| Mon Dec 17 15:31:43.306 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } 
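The "========> Saved total of N documents" lines are the test driver's progress marks; each batch of inserts is what pushes dataWritten past the threshold and triggers the autosplit attempts above. A sketch of the kind of load loop involved (the document shape, payload size, and batch size here are assumptions, not the literal mrShardedOutput.js source):

    // Hypothetical load loop: random shard-key values in [0, 1000) with a
    // bulky filler field so 1MB chunks fill after a few hundred documents.
    var str = new Array(1024).join('x');   // ~1KB payload, an assumption
    for (var i = 0; i < 1000; i++) {
        db.foo.save({ a: Math.random() * 1000, y: str });
    }
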
m30999| Mon Dec 17 15:31:43.307 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { a: 395.6566429696977 }max: { a: 439.6139404270798 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.308 [conn1] chunk not full enough to trigger auto-split { a: 424.7937966138124 } m30999| Mon Dec 17 15:31:43.308 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { a: 800.5099997390062 }max: { a: 859.3603172339499 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.312 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 27 version: 1|50||50cf812d5ec0810ee359b569 based on: 1|48||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.312 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { a: 800.5099997390062 }max: { a: 859.3603172339499 } on: { a: 826.7396320588887 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.312 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|50, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 27 m30999| Mon Dec 17 15:31:43.312 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.313 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { a: 395.6566429696977 }max: { a: 439.6139404270798 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.314 [conn1] chunk not full enough to trigger auto-split { a: 424.4915875606239 } m30999| Mon Dec 17 15:31:43.314 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { a: 244.1017532255501 }max: { a: 285.7821767684072 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.315 [conn1] chunk not full enough to trigger auto-split { a: 272.0148875378072 } m30999| Mon Dec 17 15:31:43.315 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|33||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 323.8981119357049 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.316 [conn1] chunk not full enough to trigger auto-split { a: 312.9595229402184 } m30999| Mon Dec 17 15:31:43.316 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|27||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 922.822616994381 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.252 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 0.3993422724306583 } m30001| Mon Dec 17 15:31:43.297 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 672.8275574278086 } m30001| Mon Dec 17 15:31:43.297 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 609.4723071437329 } -->> { : 672.8275574278086 } m30001| Mon Dec 17 15:31:43.298 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 609.4723071437329 }, max: { a: 672.8275574278086 }, from: 
"shard0001", splitKeys: [ { a: 637.662521796301 } ], shardId: "test.foo-a_609.4723071437329", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.298 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b04 m30001| Mon Dec 17 15:31:43.299 [conn4] splitChunk accepted at version 1|44||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.300 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-22", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303300), what: "split", ns: "test.foo", details: { before: { min: { a: 609.4723071437329 }, max: { a: 672.8275574278086 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 609.4723071437329 }, max: { a: 637.662521796301 }, lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 637.662521796301 }, max: { a: 672.8275574278086 }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.300 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:31:43.302 [conn4] request split points lookup for chunk test.foo { : 672.8275574278086 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:43.303 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 672.8275574278086 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:43.303 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 672.8275574278086 }, max: { a: 738.9611077960581 }, from: "shard0001", splitKeys: [ { a: 702.2782645653933 } ], shardId: "test.foo-a_672.8275574278086", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.304 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b05 m30001| Mon Dec 17 15:31:43.304 [conn4] splitChunk accepted at version 1|46||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.305 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-23", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303305), what: "split", ns: "test.foo", details: { before: { min: { a: 672.8275574278086 }, max: { a: 738.9611077960581 }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 672.8275574278086 }, max: { a: 702.2782645653933 }, lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 702.2782645653933 }, max: { a: 738.9611077960581 }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.305 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
m30001| Mon Dec 17 15:31:43.307 [conn4] request split points lookup for chunk test.foo { : 395.6566429696977 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:43.308 [conn4] request split points lookup for chunk test.foo { : 800.5099997390062 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:43.309 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 800.5099997390062 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:43.309 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 800.5099997390062 }, max: { a: 859.3603172339499 }, from: "shard0001", splitKeys: [ { a: 826.7396320588887 } ], shardId: "test.foo-a_800.5099997390062", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.310 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b06 m30001| Mon Dec 17 15:31:43.310 [conn4] splitChunk accepted at version 1|48||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.311 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-24", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303311), what: "split", ns: "test.foo", details: { before: { min: { a: 800.5099997390062 }, max: { a: 859.3603172339499 }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 800.5099997390062 }, max: { a: 826.7396320588887 }, lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 826.7396320588887 }, max: { a: 859.3603172339499 }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.311 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
m30001| Mon Dec 17 15:31:43.313 [conn4] request split points lookup for chunk test.foo { : 395.6566429696977 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:43.314 [conn4] request split points lookup for chunk test.foo { : 244.1017532255501 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:43.315 [conn4] request split points lookup for chunk test.foo { : 285.7821767684072 } -->> { : 323.8981119357049 } m30001| Mon Dec 17 15:31:43.316 [conn4] request split points lookup for chunk test.foo { : 859.3603172339499 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:43.317 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 859.3603172339499 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:43.317 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 859.3603172339499 }, max: { a: 922.822616994381 }, from: "shard0001", splitKeys: [ { a: 885.969014139846 } ], shardId: "test.foo-a_859.3603172339499", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.318 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b07 m30001| Mon Dec 17 15:31:43.318 [conn4] splitChunk accepted at version 1|50||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.319 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-25", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303319), what: "split", ns: "test.foo", details: { before: { min: { a: 859.3603172339499 }, max: { a: 922.822616994381 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 859.3603172339499 }, max: { a: 885.969014139846 }, lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 885.969014139846 }, max: { a: 922.822616994381 }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.319 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
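Each "about to log metadata event" document is persisted to config.changelog, so the before/left/right ranges printed above remain queryable after the test run; for example:

    // List the committed split events for this collection in time order.
    db.getSiblingDB("config").changelog.find(
        { what: "split", ns: "test.foo" }
    ).sort({ time: 1 })
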
m30999| Mon Dec 17 15:31:43.320 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 28 version: 1|52||50cf812d5ec0810ee359b569 based on: 1|50||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.320 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|27||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 922.822616994381 } on: { a: 885.969014139846 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.320 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|52, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 28 m30999| Mon Dec 17 15:31:43.320 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.321 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|42||000000000000000000000000min: { a: 954.3487632181495 }max: { a: 999.9956642277539 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.321 [conn4] request split points lookup for chunk test.foo { : 954.3487632181495 } -->> { : 999.9956642277539 } m30999| Mon Dec 17 15:31:43.322 [conn1] chunk not full enough to trigger auto-split { a: 984.2353065032512 } m30999| Mon Dec 17 15:31:43.322 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { a: 395.6566429696977 }max: { a: 439.6139404270798 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.322 [conn4] request split points lookup for chunk test.foo { : 395.6566429696977 } -->> { : 439.6139404270798 } m30999| Mon Dec 17 15:31:43.323 [conn1] chunk not full enough to trigger auto-split { a: 424.4355172850192 } m30999| Mon Dec 17 15:31:43.323 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|23||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 800.5099997390062 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.323 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:31:43.324 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 738.9611077960581 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:31:43.324 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 738.9611077960581 }, max: { a: 800.5099997390062 }, from: "shard0001", splitKeys: [ { a: 764.6060811821371 } ], shardId: "test.foo-a_738.9611077960581", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.325 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b08 m30001| Mon Dec 17 15:31:43.326 [conn4] splitChunk accepted at version 1|52||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.326 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-26", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303326), what: "split", ns: "test.foo", details: { before: { min: { a: 738.9611077960581 }, max: { a: 800.5099997390062 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 
738.9611077960581 }, max: { a: 764.6060811821371 }, lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 764.6060811821371 }, max: { a: 800.5099997390062 }, lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.326 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30999| Mon Dec 17 15:31:43.327 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 29 version: 1|54||50cf812d5ec0810ee359b569 based on: 1|52||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.327 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|23||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 800.5099997390062 } on: { a: 764.6060811821371 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.327 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|54, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 29 m30999| Mon Dec 17 15:31:43.327 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.328 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|41||000000000000000000000000min: { a: 922.822616994381 }max: { a: 954.3487632181495 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.328 [conn4] request split points lookup for chunk test.foo { : 922.822616994381 } -->> { : 954.3487632181495 } m30999| Mon Dec 17 15:31:43.329 [conn1] chunk not full enough to trigger auto-split { a: 949.2622332181782 } m30999| Mon Dec 17 15:31:43.329 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|34||000000000000000000000000min: { a: 323.8981119357049 }max: { a: 364.6896595600992 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.329 [conn4] request split points lookup for chunk test.foo { : 323.8981119357049 } -->> { : 364.6896595600992 } m30999| Mon Dec 17 15:31:43.330 [conn1] chunk not full enough to trigger auto-split { a: 350.2196813933551 } m30999| Mon Dec 17 15:31:43.331 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { a: 480.0211163237691 }max: { a: 538.5234889108688 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.331 [conn4] request split points lookup for chunk test.foo { : 480.0211163237691 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:43.331 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 480.0211163237691 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:43.331 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 480.0211163237691 }, max: { a: 538.5234889108688 }, from: "shard0001", splitKeys: [ { a: 508.1451514270157 } ], shardId: "test.foo-a_480.0211163237691", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.332 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b09 m30001| Mon Dec 17 15:31:43.333 [conn4] splitChunk accepted at version 1|54||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 
15:31:43.333 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-27", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303333), what: "split", ns: "test.foo", details: { before: { min: { a: 480.0211163237691 }, max: { a: 538.5234889108688 }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 480.0211163237691 }, max: { a: 508.1451514270157 }, lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 508.1451514270157 }, max: { a: 538.5234889108688 }, lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.334 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30999| Mon Dec 17 15:31:43.334 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 30 version: 1|56||50cf812d5ec0810ee359b569 based on: 1|54||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.335 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { a: 480.0211163237691 }max: { a: 538.5234889108688 } on: { a: 508.1451514270157 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.335 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|56, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 30 m30999| Mon Dec 17 15:31:43.335 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.336 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|48||000000000000000000000000min: { a: 702.2782645653933 }max: { a: 738.9611077960581 } dataWritten: 210766 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:43.336 [conn4] request split points lookup for chunk test.foo { : 702.2782645653933 } -->> { : 738.9611077960581 } m30999| Mon Dec 17 15:31:43.336 [conn1] chunk not full enough to trigger auto-split { a: 730.6921340059489 } m30999| Mon Dec 17 15:31:43.337 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 480.0211163237691 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.337 [conn1] chunk not full enough to trigger auto-split { a: 468.4412297792733 } m30999| Mon Dec 17 15:31:43.338 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { a: 244.1017532255501 }max: { a: 285.7821767684072 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.339 [conn1] chunk not full enough to trigger auto-split { a: 271.7124233022332 } m30999| Mon Dec 17 15:31:43.339 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|35||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 40.64535931684077 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.340 [conn1] chunk not full enough to trigger auto-split { a: 27.90627791546285 } m30999| Mon Dec 17 15:31:43.340 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|55||000000000000000000000000min: { a: 
480.0211163237691 }max: { a: 508.1451514270157 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.341 [conn1] chunk not full enough to trigger auto-split { a: 508.0924609210342 } m30999| Mon Dec 17 15:31:43.341 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { a: 152.16144034639 }max: { a: 211.6570973303169 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.345 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 31 version: 1|58||50cf812d5ec0810ee359b569 based on: 1|56||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.345 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { a: 152.16144034639 }max: { a: 211.6570973303169 } on: { a: 178.156032692641 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.345 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|58, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 31 m30999| Mon Dec 17 15:31:43.345 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.345 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|36||000000000000000000000000min: { a: 40.64535931684077 }max: { a: 89.16067937389016 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.346 [conn1] chunk not full enough to trigger auto-split { a: 70.29163325205445 } m30999| Mon Dec 17 15:31:43.346 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { a: 244.1017532255501 }max: { a: 285.7821767684072 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.347 [conn1] chunk not full enough to trigger auto-split { a: 271.7086486518383 } m30999| Mon Dec 17 15:31:43.347 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|55||000000000000000000000000min: { a: 480.0211163237691 }max: { a: 508.1451514270157 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.348 [conn1] chunk not full enough to trigger auto-split { a: 508.0216866917908 } m30999| Mon Dec 17 15:31:43.349 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|57||000000000000000000000000min: { a: 152.16144034639 }max: { a: 178.156032692641 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.349 [conn1] chunk not full enough to trigger auto-split { a: 178.1060700304806 } m30999| Mon Dec 17 15:31:43.349 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|43||000000000000000000000000min: { a: 538.5234889108688 }max: { a: 571.1331805214286 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.350 [conn1] chunk not full enough to trigger auto-split { a: 565.9682261757553 } m30999| Mon Dec 17 15:31:43.350 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|47||000000000000000000000000min: { a: 672.8275574278086 }max: { a: 702.2782645653933 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.351 [conn1] chunk not full enough to trigger 
auto-split { a: 702.1239765454084 } m30999| Mon Dec 17 15:31:43.351 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|52||000000000000000000000000min: { a: 885.969014139846 }max: { a: 922.822616994381 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.352 [conn1] chunk not full enough to trigger auto-split { a: 916.3314928300679 } m30999| Mon Dec 17 15:31:43.352 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|42||000000000000000000000000min: { a: 954.3487632181495 }max: { a: 999.9956642277539 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.353 [conn1] chunk not full enough to trigger auto-split { a: 983.9725082274526 } m30999| Mon Dec 17 15:31:43.354 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|54||000000000000000000000000min: { a: 764.6060811821371 }max: { a: 800.5099997390062 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.354 [conn1] chunk not full enough to trigger auto-split { a: 794.4833093788475 } m30999| Mon Dec 17 15:31:43.354 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 480.0211163237691 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.355 [conn1] chunk not full enough to trigger auto-split { a: 468.4412297792733 } m30999| Mon Dec 17 15:31:43.356 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|50||000000000000000000000000min: { a: 826.7396320588887 }max: { a: 859.3603172339499 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.356 [conn1] chunk not full enough to trigger auto-split { a: 852.1266910247505 } m30999| Mon Dec 17 15:31:43.357 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|34||000000000000000000000000min: { a: 323.8981119357049 }max: { a: 364.6896595600992 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.357 [conn1] chunk not full enough to trigger auto-split { a: 350.1538264099509 } m30999| Mon Dec 17 15:31:43.358 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|58||000000000000000000000000min: { a: 178.156032692641 }max: { a: 211.6570973303169 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.359 [conn1] chunk not full enough to trigger auto-split { a: 205.0583518575877 } m30999| Mon Dec 17 15:31:43.359 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|37||000000000000000000000000min: { a: 211.6570973303169 }max: { a: 244.1017532255501 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.360 [conn1] chunk not full enough to trigger auto-split { a: 238.6642997153103 } m30999| Mon Dec 17 15:31:43.361 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|41||000000000000000000000000min: { a: 922.822616994381 }max: { a: 954.3487632181495 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.361 [conn1] chunk not full enough to trigger auto-split { a: 948.8958783913404 } m30999| Mon Dec 17 15:31:43.361 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|35||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 40.64535931684077 } dataWritten: 210766 splitThreshold: 1048576 
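The autosplit probes above all have the same shape: mongos tracks an estimated byte count per chunk (the dataWritten value), and once the estimate crosses the 1048576-byte splitThreshold it asks the owning shard for candidate split points; "chunk not full enough to trigger auto-split" means no point came back. The same split can be reproduced by hand. A minimal sketch, run from a mongo shell connected to the mongos (connection itself assumed), using the real "split" admin command with a key value taken from the log above:

// A minimal sketch against the mongos (m30999). "split" with "middle" is
// the real admin command behind the splitChunk requests logged here.
db.getSiblingDB("admin").runCommand({
    split: "test.foo",
    middle: { a: 508.1451514270157 }   // split point copied from the log
});
// On success the shard takes the distributed lock, commits the two new
// chunk versions (e.g. 1|55 and 1|56 above), and logs a "split" event.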
m30999| Mon Dec 17 15:31:43.362 [conn1] chunk not full enough to trigger auto-split { a: 27.76701981201768 } m30999| Mon Dec 17 15:31:43.362 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|45||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 637.662521796301 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.363 [conn1] chunk not full enough to trigger auto-split { a: 637.6050137914717 } m30999| Mon Dec 17 15:31:43.363 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { a: 395.6566429696977 }max: { a: 439.6139404270798 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.364 [conn1] chunk not full enough to trigger auto-split { a: 424.2217224091291 } m30999| Mon Dec 17 15:31:43.365 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|33||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 323.8981119357049 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.365 [conn1] chunk not full enough to trigger auto-split { a: 312.7486414741725 } m30999| Mon Dec 17 15:31:43.366 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|39||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 395.6566429696977 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.367 [conn1] chunk not full enough to trigger auto-split { a: 390.9588716924191 } m30999| Mon Dec 17 15:31:43.367 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|56||000000000000000000000000min: { a: 508.1451514270157 }max: { a: 538.5234889108688 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.368 [conn1] chunk not full enough to trigger auto-split { a: 537.7082077320665 } m30999| Mon Dec 17 15:31:43.368 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|29||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 152.16144034639 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.372 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 32 version: 1|60||50cf812d5ec0810ee359b569 based on: 1|58||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.372 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|29||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 152.16144034639 } on: { a: 119.0328269731253 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.372 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|60, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 32 m30999| Mon Dec 17 15:31:43.372 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.373 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|35||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 40.64535931684077 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.374 [conn1] chunk not full enough to trigger auto-split { a: 27.74294163100421 } m30999| Mon Dec 17 15:31:43.375 [conn1] about to 
initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|36||000000000000000000000000min: { a: 40.64535931684077 }max: { a: 89.16067937389016 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.375 [conn1] chunk not full enough to trigger auto-split { a: 70.03847952000797 } m30999| Mon Dec 17 15:31:43.376 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|57||000000000000000000000000min: { a: 152.16144034639 }max: { a: 178.156032692641 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.376 [conn1] chunk not full enough to trigger auto-split { a: 177.6353567838669 } m30999| Mon Dec 17 15:31:43.377 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|46||000000000000000000000000min: { a: 637.662521796301 }max: { a: 672.8275574278086 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.378 [conn1] chunk not full enough to trigger auto-split { a: 662.0056536048651 } m30999| Mon Dec 17 15:31:43.378 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|58||000000000000000000000000min: { a: 178.156032692641 }max: { a: 211.6570973303169 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.378 [conn1] chunk not full enough to trigger auto-split { a: 205.021336209029 } m30999| Mon Dec 17 15:31:43.379 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 480.0211163237691 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.380 [conn1] chunk not full enough to trigger auto-split { a: 468.0319302715361 } m30999| Mon Dec 17 15:31:43.380 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|37||000000000000000000000000min: { a: 211.6570973303169 }max: { a: 244.1017532255501 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.381 [conn1] chunk not full enough to trigger auto-split { a: 238.6088469065726 } m30999| Mon Dec 17 15:31:43.381 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|33||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 323.8981119357049 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.382 [conn1] chunk not full enough to trigger auto-split { a: 312.6601120457053 } m30999| Mon Dec 17 15:31:43.382 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|52||000000000000000000000000min: { a: 885.969014139846 }max: { a: 922.822616994381 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.383 [conn1] chunk not full enough to trigger auto-split { a: 916.0153430420905 } m30999| Mon Dec 17 15:31:43.383 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|42||000000000000000000000000min: { a: 954.3487632181495 }max: { a: 999.9956642277539 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.384 [conn1] chunk not full enough to trigger auto-split { a: 983.7813216727227 } m30999| Mon Dec 17 15:31:43.384 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|34||000000000000000000000000min: { a: 323.8981119357049 }max: { a: 364.6896595600992 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.385 [conn1] chunk not full enough to trigger auto-split { a: 
350.1538264099509 } m30999| Mon Dec 17 15:31:43.385 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { a: 244.1017532255501 }max: { a: 285.7821767684072 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.386 [conn1] chunk not full enough to trigger auto-split { a: 271.434081485495 } m30999| Mon Dec 17 15:31:43.386 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|59||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 119.0328269731253 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.387 [conn1] chunk not full enough to trigger auto-split { a: 118.8634547870606 } m30999| Mon Dec 17 15:31:43.387 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|44||000000000000000000000000min: { a: 571.1331805214286 }max: { a: 609.4723071437329 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.388 [conn1] chunk not full enough to trigger auto-split { a: 597.1667130943388 } m30999| Mon Dec 17 15:31:43.388 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|50||000000000000000000000000min: { a: 826.7396320588887 }max: { a: 859.3603172339499 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.389 [conn1] chunk not full enough to trigger auto-split { a: 851.9377838820219 } m30999| Mon Dec 17 15:31:43.391 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|41||000000000000000000000000min: { a: 922.822616994381 }max: { a: 954.3487632181495 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.391 [conn1] chunk not full enough to trigger auto-split { a: 948.7419037614018 } m30999| Mon Dec 17 15:31:43.391 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|54||000000000000000000000000min: { a: 764.6060811821371 }max: { a: 800.5099997390062 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.392 [conn1] chunk not full enough to trigger auto-split { a: 794.4189589470625 } m30999| Mon Dec 17 15:31:43.392 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|47||000000000000000000000000min: { a: 672.8275574278086 }max: { a: 702.2782645653933 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.393 [conn1] chunk not full enough to trigger auto-split { a: 701.6421095468104 } m30999| Mon Dec 17 15:31:43.393 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|60||000000000000000000000000min: { a: 119.0328269731253 }max: { a: 152.16144034639 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.394 [conn1] chunk not full enough to trigger auto-split { a: 145.6084125675261 } m30999| Mon Dec 17 15:31:43.395 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|51||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 885.969014139846 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.395 [conn1] chunk not full enough to trigger auto-split { a: 885.6506743468344 } m30999| Mon Dec 17 15:31:43.396 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { a: 395.6566429696977 }max: { a: 439.6139404270798 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 
15:31:43.397 [conn1] chunk not full enough to trigger auto-split { a: 424.0610061679035 } m30999| Mon Dec 17 15:31:43.397 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|49||000000000000000000000000min: { a: 800.5099997390062 }max: { a: 826.7396320588887 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.398 [conn1] chunk not full enough to trigger auto-split { a: 826.5771353617311 } m30999| Mon Dec 17 15:31:43.399 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|56||000000000000000000000000min: { a: 508.1451514270157 }max: { a: 538.5234889108688 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.399 [conn1] chunk not full enough to trigger auto-split { a: 537.4333269428462 } m30999| Mon Dec 17 15:31:43.399 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|39||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 395.6566429696977 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.400 [conn1] chunk not full enough to trigger auto-split { a: 390.711814397946 } m30999| Mon Dec 17 15:31:43.402 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|43||000000000000000000000000min: { a: 538.5234889108688 }max: { a: 571.1331805214286 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.402 [conn1] chunk not full enough to trigger auto-split { a: 565.792151959613 } m30999| Mon Dec 17 15:31:43.403 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|45||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 637.662521796301 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.403 [conn1] chunk not full enough to trigger auto-split { a: 637.590128229931 } m30999| Mon Dec 17 15:31:43.403 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|53||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 764.6060811821371 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.404 [conn1] chunk not full enough to trigger auto-split { a: 764.3781576771289 } m30999| Mon Dec 17 15:31:43.405 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|55||000000000000000000000000min: { a: 480.0211163237691 }max: { a: 508.1451514270157 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.405 [conn1] chunk not full enough to trigger auto-split { a: 507.8807375393808 } m30999| Mon Dec 17 15:31:43.407 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|48||000000000000000000000000min: { a: 702.2782645653933 }max: { a: 738.9611077960581 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.407 [conn1] chunk not full enough to trigger auto-split { a: 730.3614162374288 } m30001| Mon Dec 17 15:31:43.337 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 480.0211163237691 } m30001| Mon Dec 17 15:31:43.338 [conn4] request split points lookup for chunk test.foo { : 244.1017532255501 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:43.339 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 40.64535931684077 } m30001| Mon Dec 17 15:31:43.340 [conn4] request split points lookup for chunk test.foo { : 480.0211163237691 } -->> { : 508.1451514270157 } 
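The m30001 "request split points lookup" lines are the shard-side half of each probe. Internally this is the splitVector command; a sketch of an equivalent manual call, run directly against the mongod on port 30001 (connection assumed), with bounds and threshold copied from the log:

// Sketch: the shard-side lookup behind "request split points lookup".
// splitVector is the real internal command; bounds and threshold are taken
// from the log above. Run against the shard directly, not through mongos.
db.getSiblingDB("admin").runCommand({
    splitVector: "test.foo",
    keyPattern: { a: 1.0 },
    min: { a: 480.0211163237691 },
    max: { a: 508.1451514270157 },
    maxChunkSizeBytes: 1048576          // the splitThreshold in the mongos lines
});
// An empty splitKeys array in the reply is what mongos reports as "chunk not
// full enough to trigger auto-split"; "max number of requested split points
// reached (2)" means the scan found a mid-point and stopped early.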
m30001| Mon Dec 17 15:31:43.341 [conn4] request split points lookup for chunk test.foo { : 152.16144034639 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:43.342 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 152.16144034639 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:43.342 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 152.16144034639 }, max: { a: 211.6570973303169 }, from: "shard0001", splitKeys: [ { a: 178.156032692641 } ], shardId: "test.foo-a_152.16144034639", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.342 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b0a m30001| Mon Dec 17 15:31:43.343 [conn4] splitChunk accepted at version 1|56||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.344 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-28", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303344), what: "split", ns: "test.foo", details: { before: { min: { a: 152.16144034639 }, max: { a: 211.6570973303169 }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 152.16144034639 }, max: { a: 178.156032692641 }, lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 178.156032692641 }, max: { a: 211.6570973303169 }, lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.344 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:31:43.346 [conn4] request split points lookup for chunk test.foo { : 40.64535931684077 } -->> { : 89.16067937389016 } m30001| Mon Dec 17 15:31:43.346 [conn4] request split points lookup for chunk test.foo { : 244.1017532255501 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:43.348 [conn4] request split points lookup for chunk test.foo { : 480.0211163237691 } -->> { : 508.1451514270157 } m30001| Mon Dec 17 15:31:43.349 [conn4] request split points lookup for chunk test.foo { : 152.16144034639 } -->> { : 178.156032692641 } m30001| Mon Dec 17 15:31:43.350 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 571.1331805214286 } m30001| Mon Dec 17 15:31:43.350 [conn4] request split points lookup for chunk test.foo { : 672.8275574278086 } -->> { : 702.2782645653933 } m30001| Mon Dec 17 15:31:43.351 [conn4] request split points lookup for chunk test.foo { : 885.969014139846 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:43.352 [conn4] request split points lookup for chunk test.foo { : 954.3487632181495 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:43.354 [conn4] request split points lookup for chunk test.foo { : 764.6060811821371 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:31:43.355 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 480.0211163237691 } m30001| Mon Dec 17 15:31:43.356 [conn4] request split points lookup for chunk test.foo { : 826.7396320588887 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:43.357 [conn4] request split points lookup for chunk test.foo { : 323.8981119357049 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:43.358 [conn4] request split points lookup for chunk test.foo { : 178.156032692641 } -->> { : 211.6570973303169 } m30001| Mon Dec 
17 15:31:43.360 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 244.1017532255501 } m30001| Mon Dec 17 15:31:43.361 [conn4] request split points lookup for chunk test.foo { : 922.822616994381 } -->> { : 954.3487632181495 } m30001| Mon Dec 17 15:31:43.362 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 40.64535931684077 } m30001| Mon Dec 17 15:31:43.362 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 637.662521796301 } m30001| Mon Dec 17 15:31:43.363 [conn4] request split points lookup for chunk test.foo { : 395.6566429696977 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:43.365 [conn4] request split points lookup for chunk test.foo { : 285.7821767684072 } -->> { : 323.8981119357049 } m30001| Mon Dec 17 15:31:43.366 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 395.6566429696977 } m30001| Mon Dec 17 15:31:43.367 [conn4] request split points lookup for chunk test.foo { : 508.1451514270157 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:43.368 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 152.16144034639 } m30001| Mon Dec 17 15:31:43.369 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 89.16067937389016 } -->> { : 152.16144034639 } m30001| Mon Dec 17 15:31:43.369 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 89.16067937389016 }, max: { a: 152.16144034639 }, from: "shard0001", splitKeys: [ { a: 119.0328269731253 } ], shardId: "test.foo-a_89.16067937389016", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.370 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b0b m30001| Mon Dec 17 15:31:43.371 [conn4] splitChunk accepted at version 1|58||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.371 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-29", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303371), what: "split", ns: "test.foo", details: { before: { min: { a: 89.16067937389016 }, max: { a: 152.16144034639 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 89.16067937389016 }, max: { a: 119.0328269731253 }, lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 119.0328269731253 }, max: { a: 152.16144034639 }, lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.371 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
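Each committed split is recorded in two config collections, which is what the "about to log metadata event" lines show being written. A sketch for inspecting the result from the mongos; the collection names are the standard config schema, the values are the ones logged above:

// Sketch: verifying the splits above via the config database (run on mongos).
var cfg = db.getSiblingDB("config");
// One document per chunk; the min/max pairs match the -->> ranges above.
cfg.chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(function (c) {
    print(tojson(c.min) + " -->> " + tojson(c.max) + " on " + c.shard);
});
// The "split" metadata events land in config.changelog:
printjson(cfg.changelog.find({ what: "split", ns: "test.foo" })
              .sort({ time: -1 }).limit(1).next());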
m30001| Mon Dec 17 15:31:43.373 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 40.64535931684077 } m30001| Mon Dec 17 15:31:43.375 [conn4] request split points lookup for chunk test.foo { : 40.64535931684077 } -->> { : 89.16067937389016 } m30001| Mon Dec 17 15:31:43.376 [conn4] request split points lookup for chunk test.foo { : 152.16144034639 } -->> { : 178.156032692641 } m30001| Mon Dec 17 15:31:43.377 [conn4] request split points lookup for chunk test.foo { : 637.662521796301 } -->> { : 672.8275574278086 } m30001| Mon Dec 17 15:31:43.378 [conn4] request split points lookup for chunk test.foo { : 178.156032692641 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:43.379 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 480.0211163237691 } m30001| Mon Dec 17 15:31:43.380 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 244.1017532255501 } m30001| Mon Dec 17 15:31:43.381 [conn4] request split points lookup for chunk test.foo { : 285.7821767684072 } -->> { : 323.8981119357049 } m30001| Mon Dec 17 15:31:43.382 [conn4] request split points lookup for chunk test.foo { : 885.969014139846 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:43.383 [conn4] request split points lookup for chunk test.foo { : 954.3487632181495 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:43.384 [conn4] request split points lookup for chunk test.foo { : 323.8981119357049 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:43.385 [conn4] request split points lookup for chunk test.foo { : 244.1017532255501 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:43.386 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 119.0328269731253 } m30001| Mon Dec 17 15:31:43.387 [conn4] request split points lookup for chunk test.foo { : 571.1331805214286 } -->> { : 609.4723071437329 } m30001| Mon Dec 17 15:31:43.388 [conn4] request split points lookup for chunk test.foo { : 826.7396320588887 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:43.391 [conn4] request split points lookup for chunk test.foo { : 922.822616994381 } -->> { : 954.3487632181495 } m30001| Mon Dec 17 15:31:43.391 [conn4] request split points lookup for chunk test.foo { : 764.6060811821371 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:31:43.392 [conn4] request split points lookup for chunk test.foo { : 672.8275574278086 } -->> { : 702.2782645653933 } m30001| Mon Dec 17 15:31:43.394 [conn4] request split points lookup for chunk test.foo { : 119.0328269731253 } -->> { : 152.16144034639 } m30001| Mon Dec 17 15:31:43.395 [conn4] request split points lookup for chunk test.foo { : 859.3603172339499 } -->> { : 885.969014139846 } m30001| Mon Dec 17 15:31:43.396 [conn4] request split points lookup for chunk test.foo { : 395.6566429696977 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:43.397 [conn4] request split points lookup for chunk test.foo { : 800.5099997390062 } -->> { : 826.7396320588887 } m30001| Mon Dec 17 15:31:43.399 [conn4] request split points lookup for chunk test.foo { : 508.1451514270157 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:43.399 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 395.6566429696977 } m30001| Mon Dec 17 15:31:43.402 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 571.1331805214286 } m30001| Mon Dec 17 15:31:43.403 [conn4] request split points 
lookup for chunk test.foo { : 609.4723071437329 } -->> { : 637.662521796301 } m30001| Mon Dec 17 15:31:43.404 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 764.6060811821371 } m30001| Mon Dec 17 15:31:43.405 [conn4] request split points lookup for chunk test.foo { : 480.0211163237691 } -->> { : 508.1451514270157 } m30001| Mon Dec 17 15:31:43.407 [conn4] request split points lookup for chunk test.foo { : 702.2782645653933 } -->> { : 738.9611077960581 } ========> Saved total of 19000 documents ========> Saved total of 20000 documents ========> Saved total of 21000 documents m30999| Mon Dec 17 15:31:43.597 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|13||000000000000000000000000min: { a: MinKey }max: { a: 0.3993422724306583 } dataWritten: 209690 splitThreshold: 943718 m30999| Mon Dec 17 15:31:43.598 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Mon Dec 17 15:31:43.684 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|36||000000000000000000000000min: { a: 40.64535931684077 }max: { a: 89.16067937389016 } dataWritten: 209820 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.689 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 33 version: 1|62||50cf812d5ec0810ee359b569 based on: 1|60||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:43.689 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|36||000000000000000000000000min: { a: 40.64535931684077 }max: { a: 89.16067937389016 } on: { a: 62.87552835419774 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:43.689 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|62, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 33 m30999| Mon Dec 17 15:31:43.689 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:43.690 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 480.0211163237691 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.691 [conn1] chunk not full enough to trigger auto-split { a: 463.3205556310713 } m30999| Mon Dec 17 15:31:43.691 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { a: 395.6566429696977 }max: { a: 439.6139404270798 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.692 [conn1] chunk not full enough to trigger auto-split { a: 418.686585733667 } m30999| Mon Dec 17 15:31:43.693 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|49||000000000000000000000000min: { a: 800.5099997390062 }max: { a: 826.7396320588887 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.693 [conn1] chunk not full enough to trigger auto-split { a: 821.9272890128195 } m30999| Mon Dec 17 15:31:43.694 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|42||000000000000000000000000min: { a: 954.3487632181495 }max: { a: 999.9956642277539 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.694 
[conn1] chunk not full enough to trigger auto-split { a: 978.7038296926767 } m30999| Mon Dec 17 15:31:43.695 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|52||000000000000000000000000min: { a: 885.969014139846 }max: { a: 922.822616994381 } dataWritten: 210766 splitThreshold: 1048576 ========> Saved total of 22000 documents m30999| Mon Dec 17 15:31:43.696 [conn1] chunk not full enough to trigger auto-split { a: 910.4589426424354 } m30999| Mon Dec 17 15:31:43.696 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|44||000000000000000000000000min: { a: 571.1331805214286 }max: { a: 609.4723071437329 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.697 [conn1] chunk not full enough to trigger auto-split { a: 593.4454007074237 } m30999| Mon Dec 17 15:31:43.697 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { a: 244.1017532255501 }max: { a: 285.7821767684072 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.698 [conn1] chunk not full enough to trigger auto-split { a: 266.4486668072641 } m30999| Mon Dec 17 15:31:43.698 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|55||000000000000000000000000min: { a: 480.0211163237691 }max: { a: 508.1451514270157 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.699 [conn1] chunk not full enough to trigger auto-split { a: 502.7972389943898 } m30999| Mon Dec 17 15:31:43.699 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|35||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 40.64535931684077 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.700 [conn1] chunk not full enough to trigger auto-split { a: 23.24274159036577 } m30999| Mon Dec 17 15:31:43.700 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|54||000000000000000000000000min: { a: 764.6060811821371 }max: { a: 800.5099997390062 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.701 [conn1] chunk not full enough to trigger auto-split { a: 788.2632962428033 } m30999| Mon Dec 17 15:31:43.702 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|34||000000000000000000000000min: { a: 323.8981119357049 }max: { a: 364.6896595600992 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.702 [conn1] chunk not full enough to trigger auto-split { a: 345.7441690843552 } m30999| Mon Dec 17 15:31:43.703 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|47||000000000000000000000000min: { a: 672.8275574278086 }max: { a: 702.2782645653933 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.703 [conn1] chunk not full enough to trigger auto-split { a: 695.9427946712822 } m30999| Mon Dec 17 15:31:43.703 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|46||000000000000000000000000min: { a: 637.662521796301 }max: { a: 672.8275574278086 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.704 [conn1] chunk not full enough to trigger auto-split { a: 659.1708860360086 } m30999| Mon Dec 17 15:31:43.705 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|59||000000000000000000000000min: { a: 
89.16067937389016 }max: { a: 119.0328269731253 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.705 [conn1] chunk not full enough to trigger auto-split { a: 113.3942198939621 } m30999| Mon Dec 17 15:31:43.706 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|43||000000000000000000000000min: { a: 538.5234889108688 }max: { a: 571.1331805214286 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.707 [conn1] chunk not full enough to trigger auto-split { a: 561.0091292764992 } m30999| Mon Dec 17 15:31:43.707 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|48||000000000000000000000000min: { a: 702.2782645653933 }max: { a: 738.9611077960581 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.708 [conn1] chunk not full enough to trigger auto-split { a: 726.302721304819 } m30999| Mon Dec 17 15:31:43.708 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|60||000000000000000000000000min: { a: 119.0328269731253 }max: { a: 152.16144034639 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.709 [conn1] chunk not full enough to trigger auto-split { a: 140.4399054590613 } m30999| Mon Dec 17 15:31:43.709 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|33||000000000000000000000000min: { a: 285.7821767684072 }max: { a: 323.8981119357049 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.710 [conn1] chunk not full enough to trigger auto-split { a: 306.9888558238745 } m30999| Mon Dec 17 15:31:43.710 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|53||000000000000000000000000min: { a: 738.9611077960581 }max: { a: 764.6060811821371 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.711 [conn1] chunk not full enough to trigger auto-split { a: 759.5634858589619 } m30999| Mon Dec 17 15:31:43.711 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|41||000000000000000000000000min: { a: 922.822616994381 }max: { a: 954.3487632181495 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.712 [conn1] chunk not full enough to trigger auto-split { a: 944.3302690051496 } m30999| Mon Dec 17 15:31:43.712 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|58||000000000000000000000000min: { a: 178.156032692641 }max: { a: 211.6570973303169 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.713 [conn1] chunk not full enough to trigger auto-split { a: 199.9168577603996 } m30999| Mon Dec 17 15:31:43.714 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|56||000000000000000000000000min: { a: 508.1451514270157 }max: { a: 538.5234889108688 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.745 [conn1] chunk not full enough to trigger auto-split { a: 531.4077397342771 } m30999| Mon Dec 17 15:31:43.746 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|45||000000000000000000000000min: { a: 609.4723071437329 }max: { a: 637.662521796301 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.746 [conn1] chunk not full enough to trigger auto-split { a: 632.4911226984113 } m30999| Mon Dec 17 15:31:43.748 [conn1] about to initiate autosplit: ns:test.fooshard: 
shard0001:localhost:30001lastmod: 1|50||000000000000000000000000min: { a: 826.7396320588887 }max: { a: 859.3603172339499 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.749 [conn1] chunk not full enough to trigger auto-split { a: 848.7833132967353 } m30999| Mon Dec 17 15:31:43.750 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|37||000000000000000000000000min: { a: 211.6570973303169 }max: { a: 244.1017532255501 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.751 [conn1] chunk not full enough to trigger auto-split { a: 234.3662036582828 } m30999| Mon Dec 17 15:31:43.752 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|61||000000000000000000000000min: { a: 40.64535931684077 }max: { a: 62.87552835419774 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.752 [conn1] chunk not full enough to trigger auto-split { a: 62.8099839668721 } m30999| Mon Dec 17 15:31:43.753 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|39||000000000000000000000000min: { a: 364.6896595600992 }max: { a: 395.6566429696977 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.754 [conn1] chunk not full enough to trigger auto-split { a: 386.3285044208169 } m30999| Mon Dec 17 15:31:43.756 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|57||000000000000000000000000min: { a: 152.16144034639 }max: { a: 178.156032692641 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.757 [conn1] chunk not full enough to trigger auto-split { a: 173.4352721832693 } m30999| Mon Dec 17 15:31:43.759 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|62||000000000000000000000000min: { a: 62.87552835419774 }max: { a: 89.16067937389016 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.760 [conn1] chunk not full enough to trigger auto-split { a: 86.72543824650347 } m30999| Mon Dec 17 15:31:43.765 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|51||000000000000000000000000min: { a: 859.3603172339499 }max: { a: 885.969014139846 } dataWritten: 210766 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:43.766 [conn1] chunk not full enough to trigger auto-split { a: 882.4825275223702 } m30999| Mon Dec 17 15:31:43.772 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|13||000000000000000000000000min: { a: MinKey }max: { a: 0.3993422724306583 } dataWritten: 209690 splitThreshold: 943718 m30999| Mon Dec 17 15:31:43.772 [conn1] chunk not full enough to trigger auto-split no split entry ========> Saved total of 23000 documents m30001| Mon Dec 17 15:31:43.598 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 0.3993422724306583 } m30001| Mon Dec 17 15:31:43.685 [conn4] request split points lookup for chunk test.foo { : 40.64535931684077 } -->> { : 89.16067937389016 } m30001| Mon Dec 17 15:31:43.685 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 40.64535931684077 } -->> { : 89.16067937389016 } m30001| Mon Dec 17 15:31:43.685 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 40.64535931684077 }, max: { a: 89.16067937389016 }, from: "shard0001", splitKeys: [ { a: 62.87552835419774 } ], shardId: 
"test.foo-a_40.64535931684077", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:43.686 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf812fc94e4981dc6c1b0c m30001| Mon Dec 17 15:31:43.687 [conn4] splitChunk accepted at version 1|60||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:43.688 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:43-30", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776303688), what: "split", ns: "test.foo", details: { before: { min: { a: 40.64535931684077 }, max: { a: 89.16067937389016 }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 40.64535931684077 }, max: { a: 62.87552835419774 }, lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 62.87552835419774 }, max: { a: 89.16067937389016 }, lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:43.688 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:31:43.690 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 480.0211163237691 } m30001| Mon Dec 17 15:31:43.691 [conn4] request split points lookup for chunk test.foo { : 395.6566429696977 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:43.693 [conn4] request split points lookup for chunk test.foo { : 800.5099997390062 } -->> { : 826.7396320588887 } m30001| Mon Dec 17 15:31:43.694 [conn4] request split points lookup for chunk test.foo { : 954.3487632181495 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:43.695 [conn4] request split points lookup for chunk test.foo { : 885.969014139846 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:43.696 [conn4] request split points lookup for chunk test.foo { : 571.1331805214286 } -->> { : 609.4723071437329 } m30001| Mon Dec 17 15:31:43.697 [conn4] request split points lookup for chunk test.foo { : 244.1017532255501 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:43.698 [conn4] request split points lookup for chunk test.foo { : 480.0211163237691 } -->> { : 508.1451514270157 } m30001| Mon Dec 17 15:31:43.699 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 40.64535931684077 } m30001| Mon Dec 17 15:31:43.700 [conn4] request split points lookup for chunk test.foo { : 764.6060811821371 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:31:43.702 [conn4] request split points lookup for chunk test.foo { : 323.8981119357049 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:43.703 [conn4] request split points lookup for chunk test.foo { : 672.8275574278086 } -->> { : 702.2782645653933 } m30001| Mon Dec 17 15:31:43.704 [conn4] request split points lookup for chunk test.foo { : 637.662521796301 } -->> { : 672.8275574278086 } m30001| Mon Dec 17 15:31:43.705 [conn4] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 119.0328269731253 } m30001| Mon Dec 17 15:31:43.706 [conn4] request split points lookup for chunk test.foo { : 538.5234889108688 } -->> { : 571.1331805214286 } m30001| Mon Dec 17 15:31:43.707 [conn4] request split points lookup for chunk test.foo { : 702.2782645653933 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:43.708 [conn4] request split points lookup for chunk test.foo { : 119.0328269731253 } -->> { : 152.16144034639 } 
m30001| Mon Dec 17 15:31:43.709 [conn4] request split points lookup for chunk test.foo { : 285.7821767684072 } -->> { : 323.8981119357049 } m30001| Mon Dec 17 15:31:43.710 [conn4] request split points lookup for chunk test.foo { : 738.9611077960581 } -->> { : 764.6060811821371 } m30001| Mon Dec 17 15:31:43.711 [conn4] request split points lookup for chunk test.foo { : 922.822616994381 } -->> { : 954.3487632181495 } m30001| Mon Dec 17 15:31:43.712 [conn4] request split points lookup for chunk test.foo { : 178.156032692641 } -->> { : 211.6570973303169 } m30001| Mon Dec 17 15:31:43.714 [conn4] request split points lookup for chunk test.foo { : 508.1451514270157 } -->> { : 538.5234889108688 } m30001| Mon Dec 17 15:31:43.746 [conn4] request split points lookup for chunk test.foo { : 609.4723071437329 } -->> { : 637.662521796301 } m30001| Mon Dec 17 15:31:43.748 [conn4] request split points lookup for chunk test.foo { : 826.7396320588887 } -->> { : 859.3603172339499 } m30001| Mon Dec 17 15:31:43.750 [conn4] request split points lookup for chunk test.foo { : 211.6570973303169 } -->> { : 244.1017532255501 } m30001| Mon Dec 17 15:31:43.752 [conn4] request split points lookup for chunk test.foo { : 40.64535931684077 } -->> { : 62.87552835419774 } m30001| Mon Dec 17 15:31:43.753 [conn4] request split points lookup for chunk test.foo { : 364.6896595600992 } -->> { : 395.6566429696977 } m30001| Mon Dec 17 15:31:43.756 [conn4] request split points lookup for chunk test.foo { : 152.16144034639 } -->> { : 178.156032692641 } m30001| Mon Dec 17 15:31:43.759 [conn4] request split points lookup for chunk test.foo { : 62.87552835419774 } -->> { : 89.16067937389016 } m30001| Mon Dec 17 15:31:43.765 [conn4] request split points lookup for chunk test.foo { : 859.3603172339499 } -->> { : 885.969014139846 } m30001| Mon Dec 17 15:31:43.772 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : 0.3993422724306583 } m30001| Mon Dec 17 15:31:44.091 [conn3] insert test.foo keyUpdates:0 locks(micros) w:268411 268ms m30001| Mon Dec 17 15:31:44.134 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.2, size: 64MB, took 1.621 secs ========> Saved total of 24000 documents ========> Saved total of 25000 documents ========> Saved total of 26000 documents m30999| Mon Dec 17 15:31:44.467 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|42||000000000000000000000000min: { a: 954.3487632181495 }max: { a: 999.9956642277539 } dataWritten: 209820 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:44.467 [conn4] request split points lookup for chunk test.foo { : 954.3487632181495 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:44.468 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 954.3487632181495 } -->> { : 999.9956642277539 } m30001| Mon Dec 17 15:31:44.468 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 954.3487632181495 }, max: { a: 999.9956642277539 }, from: "shard0001", splitKeys: [ { a: 973.9556647837162 } ], shardId: "test.foo-a_954.3487632181495", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:44.469 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8130c94e4981dc6c1b0d m30001| Mon Dec 17 15:31:44.470 [conn4] splitChunk accepted at version 1|62||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:44.470 [conn4] about to log metadata event: { _id: 
"domU-12-31-39-01-70-B4-2012-12-17T20:31:44-31", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776304470), what: "split", ns: "test.foo", details: { before: { min: { a: 954.3487632181495 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 954.3487632181495 }, max: { a: 973.9556647837162 }, lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 973.9556647837162 }, max: { a: 999.9956642277539 }, lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:44.471 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30999| Mon Dec 17 15:31:44.471 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 34 version: 1|64||50cf812d5ec0810ee359b569 based on: 1|62||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:44.472 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|42||000000000000000000000000min: { a: 954.3487632181495 }max: { a: 999.9956642277539 } on: { a: 973.9556647837162 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:44.472 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|64, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 34 m30999| Mon Dec 17 15:31:44.472 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } ========> Saved total of 27000 documents ========> Saved total of 28000 documents ========> Saved total of 29000 documents m30999| Mon Dec 17 15:31:44.727 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|48||000000000000000000000000min: { a: 702.2782645653933 }max: { a: 738.9611077960581 } dataWritten: 209858 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:44.728 [conn4] request split points lookup for chunk test.foo { : 702.2782645653933 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:44.728 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 702.2782645653933 } -->> { : 738.9611077960581 } m30001| Mon Dec 17 15:31:44.729 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 702.2782645653933 }, max: { a: 738.9611077960581 }, from: "shard0001", splitKeys: [ { a: 718.4353433549404 } ], shardId: "test.foo-a_702.2782645653933", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:44.730 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8130c94e4981dc6c1b0e m30001| Mon Dec 17 15:31:44.730 [conn4] splitChunk accepted at version 1|64||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:44.731 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:44-32", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776304731), what: "split", ns: "test.foo", details: { before: { min: { a: 702.2782645653933 }, max: { a: 738.9611077960581 }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 702.2782645653933 }, max: { a: 718.4353433549404 }, lastmod: Timestamp 1000|65, lastmodEpoch: 
ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 718.4353433549404 }, max: { a: 738.9611077960581 }, lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:44.731 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30999| Mon Dec 17 15:31:44.732 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 35 version: 1|66||50cf812d5ec0810ee359b569 based on: 1|64||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:44.732 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|48||000000000000000000000000min: { a: 702.2782645653933 }max: { a: 738.9611077960581 } on: { a: 718.4353433549404 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:44.732 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|66, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 35 ========> Finished saving total of 30000 documents m30999| Mon Dec 17 15:31:44.733 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } ---- No errors on insert batch. ---- ---- Setup OK: count matches (30000) -- Starting MapReduce ---- m30001| Mon Dec 17 15:31:45.720 [conn3] CMD: drop test.tmp.mr.foo_0_inc m30001| Mon Dec 17 15:31:45.720 [conn3] build index test.tmp.mr.foo_0_inc { 0: 1 } m30001| Mon Dec 17 15:31:45.721 [conn3] build index done. scanned 0 total records. 0 secs m30001| Mon Dec 17 15:31:45.721 [conn3] CMD: drop test.tmp.mr.foo_0 m30001| Mon Dec 17 15:31:45.721 [conn3] build index test.tmp.mr.foo_0 { _id: 1 } m30001| Mon Dec 17 15:31:45.722 [conn3] build index done. scanned 0 total records. 0 secs m30001| Mon Dec 17 15:31:45.940 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.3, filling with zeroes... 
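The MapReduce that starts here runs in two phases because its output collection is sharded. A sketch of the client-side call: the map2/reduce2 bodies are quoted verbatim in the command logged just below, and the output namespace test.mrShardedOut with shard key { _id: 1 } is confirmed by the mongos lines; the exact out-options spelling is an assumption.

// Sketch of the client call behind this phase (out-options spelling assumed;
// map/reduce bodies appear verbatim in the command log below).
function map2() { emit(this._id, { count: 1, y: this.y }); }
function reduce2(key, values) { return values[0]; }
var res = db.foo.mapReduce(map2, reduce2,
                           { out: { replace: "mrShardedOut", sharded: true } });
printjson(res);
// Phase 1 ("shardedFirstPass: true"): each shard reduces its own chunks into
// a temporary collection (test.tmp.mrs.foo_<ts>_0, built through the
// tmp.mr.foo_0 / tmp.mr.foo_0_inc collections dropped and indexed above).
// Phase 2: mongos enables sharding on test.mrShardedOut with key { _id: 1 },
// pre-splits it ("going to create 65 chunk(s)"), and merges the per-shard
// results into it.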
m30999| Mon Dec 17 15:31:46.119 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:31:46.119 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:31:46.120 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:31:46 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf81325ec0810ee359b56a" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf812c5ec0810ee359b568" } } m30999| Mon Dec 17 15:31:46.150 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81325ec0810ee359b56a m30999| Mon Dec 17 15:31:46.150 [Balancer] *** start balancing round m30999| Mon Dec 17 15:31:46.251 [Balancer] shard0001 has more chunks me:34 best: shard0000:0 m30999| Mon Dec 17 15:31:46.251 [Balancer] collection : test.foo m30999| Mon Dec 17 15:31:46.251 [Balancer] donor : shard0001 chunks on 34 m30999| Mon Dec 17 15:31:46.251 [Balancer] receiver : shard0000 chunks on 0 m30999| Mon Dec 17 15:31:46.251 [Balancer] threshold : 4 m30999| Mon Dec 17 15:31:46.251 [Balancer] ns: test.foo going to move { _id: "test.foo-a_MinKey", lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:31:46.251 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|13||000000000000000000000000min: { a: MinKey }max: { a: 0.3993422724306583 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:31:46.289 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: MinKey }, max: { a: 0.3993422724306583 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_MinKey", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:31:46.330 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8132c94e4981dc6c1b0f m30001| Mon Dec 17 15:31:46.330 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:46-33", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776306330), what: "moveChunk.start", ns: "test.foo", details: { min: { a: MinKey }, max: { a: 0.3993422724306583 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:31:46.379 [conn4] moveChunk request accepted at version 1|66||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:46.379 [conn4] moveChunk number of documents: 13 m30001| Mon Dec 17 15:31:46.409 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Mon Dec 17 15:31:46.249 [conn3] build index config.tags { _id: 1 } m30000| Mon Dec 17 15:31:46.250 
[conn3] build index done. scanned 0 total records. 0 secs m30000| Mon Dec 17 15:31:46.250 [conn3] info: creating collection config.tags on add index m30000| Mon Dec 17 15:31:46.250 [conn3] build index config.tags { ns: 1, min: 1 } m30000| Mon Dec 17 15:31:46.250 [conn3] build index done. scanned 0 total records. 0 secs m30001| Mon Dec 17 15:31:46.409 [initandlisten] connection accepted from 127.0.0.1:42528 #5 (5 connections now open) m30001| Mon Dec 17 15:31:46.419 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:46.429 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:46.449 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Mon Dec 17 15:31:46.410 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/test.ns, filling with zeroes... m30001| Mon Dec 17 15:31:46.469 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:46.505 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:46.573 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:46.705 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:46.965 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:47.481 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:48.005 [conn3] 23200/30000 77% m30001| Mon Dec 17 15:31:48.509 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:48.838 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.3, size: 128MB, took 2.897 secs m30001| Mon Dec 17 15:31:49.468 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.4, filling with zeroes... m30001| Mon Dec 17 15:31:49.539 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Mon Dec 17 15:31:49.831 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/test.ns, size: 16MB, took 3.42 secs m30000| Mon Dec 17 15:31:49.831 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/test.0, filling with zeroes... m30001| Mon Dec 17 15:31:50.037 [conn3] CMD: drop test.tmp.mrs.foo_1355776305_0 m30001| Mon Dec 17 15:31:50.039 [conn3] CMD: drop test.tmp.mr.foo_0 m30001| Mon Dec 17 15:31:50.039 [conn3] request split points lookup for chunk test.tmp.mrs.foo_1355776305_0 { : MinKey } -->> { : MaxKey } m30001| Mon Dec 17 15:31:50.060 [conn3] CMD: drop test.tmp.mr.foo_0 m30001| Mon Dec 17 15:31:50.060 [conn3] CMD: drop test.tmp.mr.foo_0_inc m30001| Mon Dec 17 15:31:50.065 [conn3] command test.$cmd command: { mapreduce: "foo", map: function map2() { emit(this._id, {count: 1, y: this.y}); }, reduce: function reduce2(key, values) { return values[0]; }, out: "tmp.mrs.foo_1355776305_0", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 30301 locks(micros) W:1834 r:4799873 w:1179742 reslen:1818 4384ms m30999| Mon Dec 17 15:31:50.065 [conn1] MR with sharded output, NS=test.mrShardedOut m30999| Mon Dec 17 15:31:50.065 [conn1] enable sharding on: test.mrShardedOut with shard key: { _id: 1 } m30999| Mon Dec 17 15:31:50.066 [conn1] going to create 65 chunk(s) for: test.mrShardedOut using new epoch 50cf81365ec0810ee359b56b m30999| Mon Dec 17 15:31:50.094 [conn1] ChunkManager: time to load chunks for test.mrShardedOut: 1ms sequenceNumber: 36 version: 1|64||50cf81365ec0810ee359b56b based on: (empty) m30000| Mon Dec 17 15:31:50.456 [conn4] update config.collections query: { _id: "test.mrShardedOut" } update: { _id: "test.mrShardedOut", lastmod: new Date(1355776310), dropped: false, key: { _id: 1 }, unique: true, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b') } nscanned:0 idhack:1 nupdated:1 upsert:1 keyUpdates:0 locks(micros) w:330771 330ms m30999| Mon Dec 17 15:31:50.459 [conn1] resetting shard version of test.mrShardedOut on localhost:30000, version is zero m30999| Mon Dec 17 15:31:50.459 [conn1] setShardVersion shard0000 localhost:30000 test.mrShardedOut { setShardVersion: "test.mrShardedOut", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0000", shardHost: "localhost:30000" } 0x91767f8 36 m30001| Mon Dec 17 15:31:50.565 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 
15:31:51.593 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:52.621 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:53.649 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Mon Dec 17 15:31:53.793 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/test.0, size: 16MB, took 3.961 secs m30000| Mon Dec 17 15:31:53.794 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/test.1, filling with zeroes... m30000| Mon Dec 17 15:31:53.796 [migrateThread] build index test.foo { _id: 1 } m30000| Mon Dec 17 15:31:53.796 [migrateThread] build index done. scanned 0 total records. 0 secs m30000| Mon Dec 17 15:31:53.796 [migrateThread] info: creating collection test.foo on add index m30000| Mon Dec 17 15:31:53.797 [migrateThread] build index test.foo { a: 1.0 } m30000| Mon Dec 17 15:31:53.797 [migrateThread] build index done. scanned 0 total records. 0 secs m30000| Mon Dec 17 15:31:53.798 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Mon Dec 17 15:31:53.798 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: MinKey } -> { a: 0.3993422724306583 } m30000| Mon Dec 17 15:31:53.799 [conn6] command admin.$cmd command: { setShardVersion: "test.mrShardedOut", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0000", shardHost: "localhost:30000", $auth: {} } ntoreturn:1 keyUpdates:0 locks(micros) W:5 reslen:86 3339ms m30999| Mon Dec 17 15:31:53.799 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Mon Dec 17 15:31:53.799 [conn1] setShardVersion shard0001 localhost:30001 test.mrShardedOut { setShardVersion: "test.mrShardedOut", configdb: "localhost:30000", version: Timestamp 1000|64, versionEpoch: ObjectId('50cf81365ec0810ee359b56b'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 36 m30999| Mon Dec 17 15:31:53.809 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.mrShardedOut", need_authoritative: true, errmsg: "first time for collection 'test.mrShardedOut'", ok: 0.0 } m30999| Mon Dec 17 15:31:53.810 [conn1] setShardVersion shard0001 localhost:30001 test.mrShardedOut { setShardVersion: "test.mrShardedOut", configdb: "localhost:30000", version: Timestamp 1000|64, versionEpoch: ObjectId('50cf81365ec0810ee359b56b'), serverID: ObjectId('50cf812c5ec0810ee359b567'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 36 m30001| Mon Dec 17 15:31:53.810 [conn3] no current chunk manager found for this shard, will initialize m30999| Mon Dec 17 15:31:53.812 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Mon Dec 17 15:31:53.812 [conn1] created new distributed lock for test.mrShardedOut on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m30999| Mon Dec 17 15:31:53.812 [conn1] trying to acquire new distributed lock for test.mrShardedOut on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:31:53.812 [conn1] inserting initial doc in config.locks for lock test.mrShardedOut m30999| Mon Dec 17 15:31:53.812 [conn1] about to acquire distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:31:53 2012" }, m30999| "why" : "mr-post-process", m30999| "ts" : { "$oid" : "50cf81395ec0810ee359b56c" } } m30999| { "_id" : "test.mrShardedOut", m30999| "state" : 0 } m30999| Mon Dec 17 15:31:53.813 [conn1] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81395ec0810ee359b56c m30001| Mon Dec 17 15:31:53.814 [conn3] CMD: drop test.tmp.mr.foo_1 m30001| Mon Dec 17 15:31:53.814 [conn3] build index test.tmp.mr.foo_1 { _id: 1 } m30001| Mon Dec 17 15:31:53.814 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Mon Dec 17 15:31:53.816 [conn3] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|66||50cf812d5ec0810ee359b569 based on: (empty) m30001| Mon Dec 17 15:31:53.817 [conn3] ChunkManager: time to load chunks for test.mrShardedOut: 1ms sequenceNumber: 3 version: 1|64||50cf81365ec0810ee359b56b based on: (empty) m30000| Mon Dec 17 15:31:53.818 [initandlisten] connection accepted from 127.0.0.1:39862 #10 (10 connections now open) m30001| Mon Dec 17 15:31:53.818 [initandlisten] connection accepted from 127.0.0.1:42535 #6 (6 connections now open) m30001| Mon Dec 17 15:31:53.822 [initandlisten] connection accepted from 127.0.0.1:42536 #7 (7 connections now open) m30001| Mon Dec 17 15:31:54.434 [conn5] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:34 reslen:51 594ms m30001| Mon Dec 17 15:31:54.678 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 13, clonedBytes: 13988, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:31:54.679 [conn4] moveChunk setting version to: 2|0||50cf812d5ec0810ee359b569 m30000| Mon Dec 17 15:31:54.679 [initandlisten] connection accepted from 127.0.0.1:39866 #11 (11 connections now open) m30000| Mon Dec 17 15:31:54.679 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:31:54.681 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:31:54.685 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:31:54.689 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:31:54.689 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: MinKey } -> { a: 0.3993422724306583 } m30000| Mon Dec 17 15:31:54.690 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:54-0", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776314690), what: "moveChunk.to", ns: "test.foo", details: { min: { a: MinKey }, max: { a: 0.3993422724306583 }, step1 of 5: 7388, step2 of 5: 0, step3 of 5: 1, step4 of 5: 0, step5 of 5: 891 } } m30000| Mon Dec 17 15:31:54.690 [initandlisten] connection accepted from 127.0.0.1:39867 #12 (12 connections now open) m30001| Mon Dec 17 15:31:54.693 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: MinKey }, max: { a: 0.3993422724306583 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 13, clonedBytes: 13988, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Mon Dec 17 15:31:54.693 [conn4] moveChunk updating self version to: 2|1||50cf812d5ec0810ee359b569 through { a: 0.3993422724306583 } -> { a: 40.64535931684077 } for collection 'test.foo' m30001| Mon Dec 17 15:31:54.694 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:54-34", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776314694), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: MinKey }, max: { a: 0.3993422724306583 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:31:54.694 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Mon Dec 17 15:31:54.694 [initandlisten] connection accepted from 127.0.0.1:39868 #13 (13 connections now open) m30001| Mon Dec 17 15:31:54.698 [conn4] MigrateFromStatus::done Global lock 
acquired m30001| Mon Dec 17 15:31:54.698 [conn4] forking for cleanup of chunk data m30001| Mon Dec 17 15:31:54.698 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:31:54.698 [conn4] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:31:54.698 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:31:54.698 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:54-35", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776314698), what: "moveChunk.from", ns: "test.foo", details: { min: { a: MinKey }, max: { a: 0.3993422724306583 }, step1 of 6: 37, step2 of 6: 90, step3 of 6: 0, step4 of 6: 8298, step5 of 6: 19, step6 of 6: 0 } } m30001| Mon Dec 17 15:31:54.698 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: MinKey }, max: { a: 0.3993422724306583 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_MinKey", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:37 r:107 w:267 reslen:37 8446ms m30001| Mon Dec 17 15:31:54.698 [cleanupOldData-50cf813ac94e4981dc6c1b10] (start) waiting to cleanup test.foo from { a: MinKey } -> { a: 0.3993422724306583 }, # cursors remaining: 0 m30999| Mon Dec 17 15:31:54.699 [Balancer] moveChunk result: { ok: 1.0 } m30999| Mon Dec 17 15:31:54.699 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 37 version: 2|1||50cf812d5ec0810ee359b569 based on: 1|66||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:54.699 [Balancer] *** end of balancing round m30999| Mon Dec 17 15:31:54.700 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. 
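The exchange above is one complete chunk migration: moveChunk.start, the TO-shard clone (the repeated "moveChunk data transfer progress" polls), the commit that bumps the collection version to 2|0, and the forked background delete on the donor. The same migration can be requested by hand through mongos; sh.moveChunk is a thin wrapper around the moveChunk admin command whose request and response documents appear in the log. A sketch, run against the mongos on 30999:

    // Manually move the lowest chunk of test.foo to shard0000
    // (what the balancer just did automatically).
    sh.moveChunk("test.foo", { a: MinKey }, "shard0000");
    // equivalently, the raw form mirroring the logged request:
    // db.adminCommand({ moveChunk: "test.foo", find: { a: MinKey }, to: "shard0000" })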
m30001| Mon Dec 17 15:31:54.721 [cleanupOldData-50cf813ac94e4981dc6c1b10] waiting to remove documents for test.foo from { a: MinKey } -> { a: 0.3993422724306583 } m30001| Mon Dec 17 15:31:54.721 [cleanupOldData-50cf813ac94e4981dc6c1b10] moveChunk starting delete for: test.foo from { a: MinKey } -> { a: 0.3993422724306583 } m30999| Mon Dec 17 15:31:55.702 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:31:55.702 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:31:55.702 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:31:55 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf813b5ec0810ee359b56d" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf81325ec0810ee359b56a" } } m30999| Mon Dec 17 15:31:55.703 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf813b5ec0810ee359b56d m30999| Mon Dec 17 15:31:55.703 [Balancer] *** start balancing round m30001| Mon Dec 17 15:31:56.203 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.4, size: 256MB, took 6.729 secs m30001| Mon Dec 17 15:31:56.204 [conn4] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:31 reslen:1925 500ms m30999| Mon Dec 17 15:31:56.205 [Balancer] shard0001 has more chunks me:33 best: shard0000:1 m30999| Mon Dec 17 15:31:56.205 [Balancer] collection : test.foo m30999| Mon Dec 17 15:31:56.205 [Balancer] donor : shard0001 chunks on 33 m30999| Mon Dec 17 15:31:56.205 [Balancer] receiver : shard0000 chunks on 1 m30999| Mon Dec 17 15:31:56.205 [Balancer] threshold : 2 m30999| Mon Dec 17 15:31:56.205 [Balancer] ns: test.foo going to move { _id: "test.foo-a_0.3993422724306583", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 0.3993422724306583 }, max: { a: 40.64535931684077 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:31:56.206 [Balancer] shard0001 has more chunks me:65 best: shard0000:0 m30999| Mon Dec 17 15:31:56.206 [Balancer] collection : test.mrShardedOut m30999| Mon Dec 17 15:31:56.206 [Balancer] donor : shard0001 chunks on 65 m30999| Mon Dec 17 15:31:56.206 [Balancer] receiver : shard0000 chunks on 0 m30999| Mon Dec 17 15:31:56.206 [Balancer] threshold : 2 m30999| Mon Dec 17 15:31:56.206 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:31:56.206 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 40.64535931684077 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:31:56.209 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: 
"localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 0.3993422724306583 }, max: { a: 40.64535931684077 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_0.3993422724306583", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:31:56.210 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf813cc94e4981dc6c1b11 m30001| Mon Dec 17 15:31:56.210 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:56-36", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776316210), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 0.3993422724306583 }, max: { a: 40.64535931684077 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:31:56.211 [conn4] moveChunk request accepted at version 2|1||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:56.212 [conn4] can't move chunk of size (approximately) 1371152 because maximum size allowed to move is 1048576 ns: test.foo { a: 0.3993422724306583 } -> { a: 40.64535931684077 } m30001| Mon Dec 17 15:31:56.212 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:31:56.212 [conn4] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:31:56.213 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:31:56.213 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:56-37", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776316213), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 0.3993422724306583 }, max: { a: 40.64535931684077 }, step1 of 6: 2, step2 of 6: 1, note: "aborted" } } m30999| Mon Dec 17 15:31:56.214 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 1371152, errmsg: "chunk too big to move", ok: 0.0 } m30999| Mon Dec 17 15:31:56.214 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 1371152, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { a: 0.3993422724306583 } max: { a: 0.3993422724306583 } m30999| Mon Dec 17 15:31:56.214 [Balancer] forcing a split because migrate failed for size reasons m30001| Mon Dec 17 15:31:56.216 [conn4] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 40.64535931684077 } m30001| Mon Dec 17 15:31:56.217 [conn4] splitVector doing another cycle because of force, keyCount now: 603 m30001| Mon Dec 17 15:31:56.218 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 0.3993422724306583 }, max: { a: 40.64535931684077 }, from: "shard0001", splitKeys: [ { a: 21.16596954874694 } ], shardId: "test.foo-a_0.3993422724306583", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:56.222 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf813cc94e4981dc6c1b12 m30001| Mon Dec 17 15:31:56.223 [conn4] splitChunk accepted at version 2|1||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:56.224 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:56-38", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776316224), what: "split", ns: "test.foo", details: { before: { min: { a: 0.3993422724306583 }, max: { a: 40.64535931684077 }, lastmod: Timestamp 2000|1, lastmodEpoch: 
ObjectId('000000000000000000000000') }, left: { min: { a: 0.3993422724306583 }, max: { a: 21.16596954874694 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:56.224 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30999| Mon Dec 17 15:31:56.225 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 38 version: 2|3||50cf812d5ec0810ee359b569 based on: 2|1||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:56.225 [Balancer] forced split results: { ok: 1.0 } m30999| Mon Dec 17 15:31:56.225 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: ObjectId('50cf812d256383d556ab497c') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:31:56.225 [conn4] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30000| Mon Dec 17 15:31:56.226 [initandlisten] connection accepted from 127.0.0.1:39870 #14 (14 connections now open) m30001| Mon Dec 17 15:31:56.226 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:56-39", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776316226), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, step1 of 6: 0, note: "aborted" } } m30999| Mon Dec 17 15:31:56.226 [Balancer] moveChunk result: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81395ec0810ee359b56c'), when: new Date(1355776313812), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }", ok: 0.0 } m30999| Mon Dec 17 15:31:56.227 [Balancer] balancer move failed: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81395ec0810ee359b56c'), when: new Date(1355776313812), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: MinKey }", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { _id: MinKey } max: { _id: MinKey } m30999| Mon Dec 17 15:31:56.227 [Balancer] *** end of balancing round m30999| Mon Dec 17 15:31:56.227 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. 
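Two balancer failures in a row here, for different reasons: the test.foo chunk exceeded the 1 MB maxChunkSize configured for this test (chunkTooBig), so the balancer forced a split at a: 21.16596954874694 instead; and the test.mrShardedOut move aborted because conn1 still holds the "mr-post-process" distributed lock on that collection. Hedged shell equivalents of what can be done or seen here, runnable against the mongos:

    // Manual equivalent of the forced split (sh.splitAt wraps the split
    // command with a 'middle' key, as mongos issued above).
    sh.splitAt("test.foo", { a: 21.16596954874694 });
    // equivalently: db.adminCommand({ split: "test.foo", middle: { a: 21.16596954874694 } })

    // Inspect the distributed lock that blocked the mrShardedOut migration:
    db.getSiblingDB("config").locks.find({ _id: "test.mrShardedOut" }).forEach(printjson);

In the lock documents, state 0 means unlocked; the held lock (state 2, why: "mr-post-process") is what produced the "collection metadata could not be locked" error above. It is released once the map-reduce post-processing finishes.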
m30001| Mon Dec 17 15:31:56.230 [cleanupOldData-50cf813ac94e4981dc6c1b10] moveChunk deleted 13 documents for test.foo from { a: MinKey } -> { a: 0.3993422724306583 } m30000| Mon Dec 17 15:31:56.928 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/test.1, size: 32MB, took 3.133 secs m30001| Mon Dec 17 15:31:56.993 [conn3] CMD: drop test.mrShardedOut m30001| Mon Dec 17 15:31:56.994 [conn3] CMD: drop test.tmp.mr.foo_1 m30001| Mon Dec 17 15:31:56.994 [conn3] CMD: drop test.tmp.mr.foo_1 m30001| Mon Dec 17 15:31:56.994 [conn3] CMD: drop test.tmp.mr.foo_1 m30001| Mon Dec 17 15:31:56.994 [conn3] command test.$cmd command: { mapreduce.shardedfinish: { mapreduce: "foo", map: function map2() { emit(this._id, {count: 1, y: this.y}); }, reduce: function reduce2(key, values) { return values[0]; }, out: { replace: "mrShardedOut", sharded: true } }, inputDB: "test", shardedOutputCollection: "tmp.mrs.foo_1355776305_0", shards: { localhost:30001: { result: "tmp.mrs.foo_1355776305_0", splitKeys: [ { _id: ObjectId('50cf812d256383d556ab497c') }, { _id: ObjectId('50cf812d256383d556ab4b4a') }, { _id: ObjectId('50cf812d256383d556ab4d18') }, { _id: ObjectId('50cf812d256383d556ab4ee6') }, { _id: ObjectId('50cf812d256383d556ab50b4') }, { _id: ObjectId('50cf812d256383d556ab5282') }, { _id: ObjectId('50cf812d256383d556ab5450') }, { _id: ObjectId('50cf812d256383d556ab561e') }, { _id: ObjectId('50cf812d256383d556ab57ec') }, { _id: ObjectId('50cf812d256383d556ab59ba') }, { _id: ObjectId('50cf812d256383d556ab5b88') }, { _id: ObjectId('50cf812d256383d556ab5d56') }, { _id: ObjectId('50cf812d256383d556ab5f24') }, { _id: ObjectId('50cf812d256383d556ab60f2') }, { _id: ObjectId('50cf812d256383d556ab62c0') }, { _id: ObjectId('50cf812e256383d556ab648e') }, { _id: ObjectId('50cf812e256383d556ab665c') }, { _id: ObjectId('50cf812e256383d556ab682a') }, { _id: ObjectId('50cf812e256383d556ab69f8') }, { _id: ObjectId('50cf812e256383d556ab6bc6') }, { _id: ObjectId('50cf812e256383d556ab6d94') }, { _id: ObjectId('50cf812e256383d556ab6f62') }, { _id: ObjectId('50cf812e256383d556ab7130') }, { _id: ObjectId('50cf812e256383d556ab72fe') }, { _id: ObjectId('50cf812e256383d556ab74cc') }, { _id: ObjectId('50cf812e256383d556ab769a') }, { _id: ObjectId('50cf812e256383d556ab7868') }, { _id: ObjectId('50cf812e256383d556ab7a36') }, { _id: ObjectId('50cf812e256383d556ab7c04') }, { _id: ObjectId('50cf812e256383d556ab7dd2') }, { _id: ObjectId('50cf812e256383d556ab7fa0') }, { _id: ObjectId('50cf812f256383d556ab816e') }, { _id: ObjectId('50cf812f256383d556ab833c') }, { _id: ObjectId('50cf812f256383d556ab850a') }, { _id: ObjectId('50cf812f256383d556ab86d8') }, { _id: ObjectId('50cf812f256383d556ab88a6') }, { _id: ObjectId('50cf812f256383d556ab8a74') }, { _id: ObjectId('50cf812f256383d556ab8c42') }, { _id: ObjectId('50cf812f256383d556ab8e10') }, { _id: ObjectId('50cf812f256383d556ab8fde') }, { _id: ObjectId('50cf812f256383d556ab91ac') }, { _id: ObjectId('50cf812f256383d556ab937a') }, { _id: ObjectId('50cf812f256383d556ab9548') }, { _id: ObjectId('50cf812f256383d556ab9716') }, { _id: ObjectId('50cf812f256383d556ab98e4') }, { _id: ObjectId('50cf812f256383d556ab9ab2') }, { _id: ObjectId('50cf812f256383d556ab9c80') }, { _id: ObjectId('50cf812f256383d556ab9e4e') }, { _id: ObjectId('50cf812f256383d556aba01c') }, { _id: ObjectId('50cf8130256383d556aba1ea') }, { _id: ObjectId('50cf8130256383d556aba3b8') }, { _id: ObjectId('50cf8130256383d556aba586') }, { _id: ObjectId('50cf8130256383d556aba754') }, { _id: ObjectId('50cf8130256383d556aba922') }, { 
_id: ObjectId('50cf8130256383d556abaaf0') }, { _id: ObjectId('50cf8130256383d556abacbe') }, { _id: ObjectId('50cf8130256383d556abae8c') }, { _id: ObjectId('50cf8130256383d556abb05a') }, { _id: ObjectId('50cf8130256383d556abb228') }, { _id: ObjectId('50cf8130256383d556abb3f6') }, { _id: ObjectId('50cf8130256383d556abb5c4') }, { _id: ObjectId('50cf8130256383d556abb792') }, { _id: ObjectId('50cf8130256383d556abb960') }, { _id: ObjectId('50cf8130256383d556abbb2e') } ], timeMillis: 4380, counts: { input: 30000, emit: 30000, reduce: 0, output: 30000 }, ok: 1.0 } }, shardCounts: { localhost:30001: { input: 30000, emit: 30000, reduce: 0, output: 30000 } }, counts: { emit: 30000, input: 30000, output: 30000, reduce: 0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:1446 w:1175188 reslen:2382 3181ms m30999| Mon Dec 17 15:31:56.995 [conn1] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. m30999| Mon Dec 17 15:31:57.007 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: ObjectId('50cf812d256383d556ab497c') } dataWritten: 544425 splitThreshold: 943718 m30001| Mon Dec 17 15:31:57.008 [conn4] request split points lookup for chunk test.mrShardedOut { : MinKey } -->> { : ObjectId('50cf812d256383d556ab497c') } m30999| Mon Dec 17 15:31:57.008 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Mon Dec 17 15:31:57.008 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab497c') }max: { _id: ObjectId('50cf812d256383d556ab4b4a') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.008 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab497c') } -->> { : ObjectId('50cf812d256383d556ab4b4a') } m30999| Mon Dec 17 15:31:57.009 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab4b49') } m30999| Mon Dec 17 15:31:57.012 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab4b4a') }max: { _id: ObjectId('50cf812d256383d556ab4d18') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.012 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab4b4a') } -->> { : ObjectId('50cf812d256383d556ab4d18') } m30999| Mon Dec 17 15:31:57.012 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab4d17') } m30999| Mon Dec 17 15:31:57.012 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|3||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab4d18') }max: { _id: ObjectId('50cf812d256383d556ab4ee6') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.012 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab4d18') } -->> { : ObjectId('50cf812d256383d556ab4ee6') } m30999| Mon Dec 17 15:31:57.013 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab4ee5') } m30999| Mon Dec 17 15:31:57.016 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab4ee6') }max: { _id: 
ObjectId('50cf812d256383d556ab50b4') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.016 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab4ee6') } -->> { : ObjectId('50cf812d256383d556ab50b4') } m30999| Mon Dec 17 15:31:57.016 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab50b3') } m30999| Mon Dec 17 15:31:57.019 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|5||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab50b4') }max: { _id: ObjectId('50cf812d256383d556ab5282') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.020 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab50b4') } -->> { : ObjectId('50cf812d256383d556ab5282') } m30999| Mon Dec 17 15:31:57.020 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab5281') } m30999| Mon Dec 17 15:31:57.020 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5282') }max: { _id: ObjectId('50cf812d256383d556ab5450') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.020 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab5282') } -->> { : ObjectId('50cf812d256383d556ab5450') } m30999| Mon Dec 17 15:31:57.021 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab544f') } m30999| Mon Dec 17 15:31:57.024 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|7||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5450') }max: { _id: ObjectId('50cf812d256383d556ab561e') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.075 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab561d') } m30001| Mon Dec 17 15:31:57.024 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab5450') } -->> { : ObjectId('50cf812d256383d556ab561e') } m30999| Mon Dec 17 15:31:57.076 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab561e') }max: { _id: ObjectId('50cf812d256383d556ab57ec') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.076 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab57eb') } m30001| Mon Dec 17 15:31:57.076 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab561e') } -->> { : ObjectId('50cf812d256383d556ab57ec') } m30999| Mon Dec 17 15:31:57.079 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|9||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab57ec') }max: { _id: ObjectId('50cf812d256383d556ab59ba') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.079 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab57ec') } -->> { : ObjectId('50cf812d256383d556ab59ba') } m30999| Mon Dec 17 15:31:57.079 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab59b9') } m30999| Mon Dec 17 15:31:57.083 [conn1] about to 
initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab59ba') }max: { _id: ObjectId('50cf812d256383d556ab5b88') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.083 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab59ba') } -->> { : ObjectId('50cf812d256383d556ab5b88') } m30999| Mon Dec 17 15:31:57.083 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab5b87') } m30999| Mon Dec 17 15:31:57.083 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|11||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5b88') }max: { _id: ObjectId('50cf812d256383d556ab5d56') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.084 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab5d55') } m30001| Mon Dec 17 15:31:57.083 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab5b88') } -->> { : ObjectId('50cf812d256383d556ab5d56') } m30999| Mon Dec 17 15:31:57.087 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|12||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5d56') }max: { _id: ObjectId('50cf812d256383d556ab5f24') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.087 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab5d56') } -->> { : ObjectId('50cf812d256383d556ab5f24') } m30999| Mon Dec 17 15:31:57.087 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab5f23') } m30999| Mon Dec 17 15:31:57.087 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|13||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5f24') }max: { _id: ObjectId('50cf812d256383d556ab60f2') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.088 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab60f1') } m30001| Mon Dec 17 15:31:57.087 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab5f24') } -->> { : ObjectId('50cf812d256383d556ab60f2') } m30999| Mon Dec 17 15:31:57.091 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|14||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab60f2') }max: { _id: ObjectId('50cf812d256383d556ab62c0') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.091 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab60f2') } -->> { : ObjectId('50cf812d256383d556ab62c0') } m30999| Mon Dec 17 15:31:57.091 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab62bf') } m30999| Mon Dec 17 15:31:57.094 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|15||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab62c0') }max: { _id: ObjectId('50cf812e256383d556ab648e') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.094 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab62c0') } -->> { : ObjectId('50cf812e256383d556ab648e') } 
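The long run of probes above and below is mongos rechecking each pre-created chunk of the output collection after routing roughly half the split threshold to it (dataWritten 545506 of splitThreshold 1048576) and concluding each time that the chunk is not full enough to split. The layout being probed, the 65 chunks created for test.mrShardedOut at 15:31:50.066 and all still on shard0001, can be inspected from the config database; a sketch, with field names as they appear in the log:

    var cfg = db.getSiblingDB("config");
    // List the pre-split chunks of the output collection in key order:
    cfg.chunks.find({ ns: "test.mrShardedOut" }).sort({ min: 1 }).forEach(printjson);
    // Chunk counts per shard, matching the balancer's donor/receiver numbers:
    cfg.chunks.group({
        key: { shard: 1 },
        cond: { ns: "test.mrShardedOut" },
        reduce: function (doc, agg) { agg.count += 1; },
        initial: { count: 0 }
    });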
m30999| Mon Dec 17 15:31:57.094 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab648d') } m30999| Mon Dec 17 15:31:57.095 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|16||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab648e') }max: { _id: ObjectId('50cf812e256383d556ab665c') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.095 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab665b') } m30001| Mon Dec 17 15:31:57.095 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab648e') } -->> { : ObjectId('50cf812e256383d556ab665c') } m30999| Mon Dec 17 15:31:57.098 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|17||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab665c') }max: { _id: ObjectId('50cf812e256383d556ab682a') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.098 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab665c') } -->> { : ObjectId('50cf812e256383d556ab682a') } m30999| Mon Dec 17 15:31:57.098 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab6829') } m30001| Mon Dec 17 15:31:57.099 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab682a') } -->> { : ObjectId('50cf812e256383d556ab69f8') } m30001| Mon Dec 17 15:31:57.099 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab69f8') } -->> { : ObjectId('50cf812e256383d556ab6bc6') } m30001| Mon Dec 17 15:31:57.100 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab6bc6') } -->> { : ObjectId('50cf812e256383d556ab6d94') } m30001| Mon Dec 17 15:31:57.101 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab6d94') } -->> { : ObjectId('50cf812e256383d556ab6f62') } m30001| Mon Dec 17 15:31:57.101 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab6f62') } -->> { : ObjectId('50cf812e256383d556ab7130') } m30001| Mon Dec 17 15:31:57.102 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab7130') } -->> { : ObjectId('50cf812e256383d556ab72fe') } m30001| Mon Dec 17 15:31:57.103 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab72fe') } -->> { : ObjectId('50cf812e256383d556ab74cc') } m30001| Mon Dec 17 15:31:57.103 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab74cc') } -->> { : ObjectId('50cf812e256383d556ab769a') } m30001| Mon Dec 17 15:31:57.104 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab769a') } -->> { : ObjectId('50cf812e256383d556ab7868') } m30001| Mon Dec 17 15:31:57.105 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab7868') } -->> { : ObjectId('50cf812e256383d556ab7a36') } m30001| Mon Dec 17 15:31:57.105 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab7a36') } -->> { : ObjectId('50cf812e256383d556ab7c04') } m30001| Mon Dec 17 15:31:57.106 [conn4] request split points lookup for chunk test.mrShardedOut { : 
ObjectId('50cf812e256383d556ab7c04') } -->> { : ObjectId('50cf812e256383d556ab7dd2') } m30001| Mon Dec 17 15:31:57.107 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab7dd2') } -->> { : ObjectId('50cf812e256383d556ab7fa0') } m30001| Mon Dec 17 15:31:57.107 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab7fa0') } -->> { : ObjectId('50cf812f256383d556ab816e') } m30001| Mon Dec 17 15:31:57.108 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab816e') } -->> { : ObjectId('50cf812f256383d556ab833c') } m30001| Mon Dec 17 15:31:57.109 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab833c') } -->> { : ObjectId('50cf812f256383d556ab850a') } m30001| Mon Dec 17 15:31:57.109 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab850a') } -->> { : ObjectId('50cf812f256383d556ab86d8') } m30999| Mon Dec 17 15:31:57.099 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab682a') }max: { _id: ObjectId('50cf812e256383d556ab69f8') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.099 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab69f7') } m30999| Mon Dec 17 15:31:57.099 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|19||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab69f8') }max: { _id: ObjectId('50cf812e256383d556ab6bc6') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.100 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab6bc5') } m30999| Mon Dec 17 15:31:57.100 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|20||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab6bc6') }max: { _id: ObjectId('50cf812e256383d556ab6d94') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.100 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab6d93') } m30999| Mon Dec 17 15:31:57.101 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab6d94') }max: { _id: ObjectId('50cf812e256383d556ab6f62') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.101 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab6f61') } m30999| Mon Dec 17 15:31:57.101 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab6f62') }max: { _id: ObjectId('50cf812e256383d556ab7130') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.102 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab712f') } m30999| Mon Dec 17 15:31:57.102 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|23||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab7130') }max: { _id: ObjectId('50cf812e256383d556ab72fe') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.102 [conn1] chunk not 
full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab72fd') } m30999| Mon Dec 17 15:31:57.102 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab72fe') }max: { _id: ObjectId('50cf812e256383d556ab74cc') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.103 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab74cb') } m30999| Mon Dec 17 15:31:57.103 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|25||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab74cc') }max: { _id: ObjectId('50cf812e256383d556ab769a') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.104 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7699') } m30999| Mon Dec 17 15:31:57.104 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab769a') }max: { _id: ObjectId('50cf812e256383d556ab7868') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.104 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7867') } m30999| Mon Dec 17 15:31:57.104 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|27||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab7868') }max: { _id: ObjectId('50cf812e256383d556ab7a36') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.105 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7a35') } m30999| Mon Dec 17 15:31:57.105 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab7a36') }max: { _id: ObjectId('50cf812e256383d556ab7c04') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.106 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7c03') } m30999| Mon Dec 17 15:31:57.106 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|29||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab7c04') }max: { _id: ObjectId('50cf812e256383d556ab7dd2') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.106 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7dd1') } m30999| Mon Dec 17 15:31:57.106 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab7dd2') }max: { _id: ObjectId('50cf812e256383d556ab7fa0') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.107 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7f9f') } m30999| Mon Dec 17 15:31:57.107 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab7fa0') }max: { _id: ObjectId('50cf812f256383d556ab816e') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.108 [conn1] chunk not full enough to trigger auto-split { _id: 
ObjectId('50cf812f256383d556ab816d') } m30999| Mon Dec 17 15:31:57.108 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab816e') }max: { _id: ObjectId('50cf812f256383d556ab833c') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.108 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab833b') } m30999| Mon Dec 17 15:31:57.108 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|33||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab833c') }max: { _id: ObjectId('50cf812f256383d556ab850a') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.109 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab8509') } m30999| Mon Dec 17 15:31:57.109 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|34||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab850a') }max: { _id: ObjectId('50cf812f256383d556ab86d8') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.110 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab86d7') } m30999| Mon Dec 17 15:31:57.110 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|35||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab86d8') }max: { _id: ObjectId('50cf812f256383d556ab88a6') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.110 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab86d8') } -->> { : ObjectId('50cf812f256383d556ab88a6') } m30001| Mon Dec 17 15:31:57.111 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab88a6') } -->> { : ObjectId('50cf812f256383d556ab8a74') } m30999| Mon Dec 17 15:31:57.110 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab88a5') } m30999| Mon Dec 17 15:31:57.110 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|36||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab88a6') }max: { _id: ObjectId('50cf812f256383d556ab8a74') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.111 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab8a73') } m30001| Mon Dec 17 15:31:57.111 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab8a74') } -->> { : ObjectId('50cf812f256383d556ab8c42') } m30001| Mon Dec 17 15:31:57.112 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab8c42') } -->> { : ObjectId('50cf812f256383d556ab8e10') } m30001| Mon Dec 17 15:31:57.113 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab8e10') } -->> { : ObjectId('50cf812f256383d556ab8fde') } m30001| Mon Dec 17 15:31:57.113 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab8fde') } -->> { : ObjectId('50cf812f256383d556ab91ac') } m30001| Mon Dec 17 15:31:57.114 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab91ac') } -->> { : ObjectId('50cf812f256383d556ab937a') } m30001| Mon Dec 17 15:31:57.115 
[conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab937a') } -->> { : ObjectId('50cf812f256383d556ab9548') } m30999| Mon Dec 17 15:31:57.111 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|37||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab8a74') }max: { _id: ObjectId('50cf812f256383d556ab8c42') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.112 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab8c41') } m30999| Mon Dec 17 15:31:57.112 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab8c42') }max: { _id: ObjectId('50cf812f256383d556ab8e10') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.112 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab8e0f') } m30999| Mon Dec 17 15:31:57.113 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|39||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab8e10') }max: { _id: ObjectId('50cf812f256383d556ab8fde') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.113 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab8fdd') } m30999| Mon Dec 17 15:31:57.113 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab8fde') }max: { _id: ObjectId('50cf812f256383d556ab91ac') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.114 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab91ab') } m30999| Mon Dec 17 15:31:57.114 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|41||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab91ac') }max: { _id: ObjectId('50cf812f256383d556ab937a') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.114 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9379') } m30999| Mon Dec 17 15:31:57.115 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|42||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab937a') }max: { _id: ObjectId('50cf812f256383d556ab9548') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.115 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9547') } m30999| Mon Dec 17 15:31:57.115 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|43||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab9548') }max: { _id: ObjectId('50cf812f256383d556ab9716') } dataWritten: 545506 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:57.115 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab9548') } -->> { : ObjectId('50cf812f256383d556ab9716') } m30001| Mon Dec 17 15:31:57.116 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab9716') } -->> { : ObjectId('50cf812f256383d556ab98e4') } m30001| Mon Dec 17 15:31:57.117 [conn4] request split points lookup for chunk test.mrShardedOut { : 
ObjectId('50cf812f256383d556ab98e4') } -->> { : ObjectId('50cf812f256383d556ab9ab2') } m30001| Mon Dec 17 15:31:57.117 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab9ab2') } -->> { : ObjectId('50cf812f256383d556ab9c80') } m30001| Mon Dec 17 15:31:57.118 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab9c80') } -->> { : ObjectId('50cf812f256383d556ab9e4e') } m30001| Mon Dec 17 15:31:57.119 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab9e4e') } -->> { : ObjectId('50cf812f256383d556aba01c') } m30001| Mon Dec 17 15:31:57.119 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556aba01c') } -->> { : ObjectId('50cf8130256383d556aba1ea') } m30001| Mon Dec 17 15:31:57.120 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556aba1ea') } -->> { : ObjectId('50cf8130256383d556aba3b8') } m30999| Mon Dec 17 15:31:57.116 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9715') } m30999| Mon Dec 17 15:31:57.116 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|44||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab9716') }max: { _id: ObjectId('50cf812f256383d556ab98e4') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.116 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab98e3') } m30999| Mon Dec 17 15:31:57.117 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|45||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab98e4') }max: { _id: ObjectId('50cf812f256383d556ab9ab2') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.117 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9ab1') } m30999| Mon Dec 17 15:31:57.117 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|46||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab9ab2') }max: { _id: ObjectId('50cf812f256383d556ab9c80') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.118 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9c7f') } m30999| Mon Dec 17 15:31:57.118 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|47||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab9c80') }max: { _id: ObjectId('50cf812f256383d556ab9e4e') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.118 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9e4d') } m30999| Mon Dec 17 15:31:57.119 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|48||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab9e4e') }max: { _id: ObjectId('50cf812f256383d556aba01c') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.119 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556aba01b') } m30999| Mon Dec 17 15:31:57.119 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|49||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556aba01c') }max: { _id: 
ObjectId('50cf8130256383d556aba1ea') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.120 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556aba1e9') } m30999| Mon Dec 17 15:31:57.120 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|50||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556aba1ea') }max: { _id: ObjectId('50cf8130256383d556aba3b8') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.120 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556aba3b7') } m30001| Mon Dec 17 15:31:57.121 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556aba3b8') } -->> { : ObjectId('50cf8130256383d556aba586') } m30001| Mon Dec 17 15:31:57.121 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556aba586') } -->> { : ObjectId('50cf8130256383d556aba754') } m30001| Mon Dec 17 15:31:57.122 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556aba754') } -->> { : ObjectId('50cf8130256383d556aba922') } m30001| Mon Dec 17 15:31:57.123 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556aba922') } -->> { : ObjectId('50cf8130256383d556abaaf0') } m30001| Mon Dec 17 15:31:57.123 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abaaf0') } -->> { : ObjectId('50cf8130256383d556abacbe') } m30001| Mon Dec 17 15:31:57.124 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abacbe') } -->> { : ObjectId('50cf8130256383d556abae8c') } m30001| Mon Dec 17 15:31:57.125 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abae8c') } -->> { : ObjectId('50cf8130256383d556abb05a') } m30001| Mon Dec 17 15:31:57.125 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb05a') } -->> { : ObjectId('50cf8130256383d556abb228') } m30001| Mon Dec 17 15:31:57.126 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb228') } -->> { : ObjectId('50cf8130256383d556abb3f6') } m30001| Mon Dec 17 15:31:57.127 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb3f6') } -->> { : ObjectId('50cf8130256383d556abb5c4') } m30001| Mon Dec 17 15:31:57.127 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb5c4') } -->> { : ObjectId('50cf8130256383d556abb792') } m30001| Mon Dec 17 15:31:57.128 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb792') } -->> { : ObjectId('50cf8130256383d556abb960') } m30001| Mon Dec 17 15:31:57.129 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb960') } -->> { : ObjectId('50cf8130256383d556abbb2e') } m30001| Mon Dec 17 15:31:57.129 [conn4] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abbb2e') } -->> { : MaxKey } m30001| Mon Dec 17 15:31:57.130 [conn4] CMD: drop test.tmp.mrs.foo_1355776305_0 ---- MapReduce results: ---- { "result" : "mrShardedOut", "counts" : { "input" : NumberLong(30000), "emit" : NumberLong(30000), "reduce" : NumberLong(0), "output" : NumberLong(30000) }, "timeMillis" : 11450, "timing" : { "shardProcessing" 
: 4385, "postProcessing" : 7065 }, "shardCounts" : { "localhost:30001" : { "input" : 30000, "emit" : 30000, "reduce" : 0, "output" : 30000 } }, "postProcessCounts" : { "localhost:30001" : { "input" : NumberLong(30000), "reduce" : NumberLong(0), "output" : NumberLong(30000) } }, "ok" : 1, } ---- Checking that all MapReduce output documents are in output collection ---- m30999| Mon Dec 17 15:31:57.121 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|51||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556aba3b8') }max: { _id: ObjectId('50cf8130256383d556aba586') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.121 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556aba585') } m30999| Mon Dec 17 15:31:57.121 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|52||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556aba586') }max: { _id: ObjectId('50cf8130256383d556aba754') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.122 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556aba753') } m30999| Mon Dec 17 15:31:57.122 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|53||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556aba754') }max: { _id: ObjectId('50cf8130256383d556aba922') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.123 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556aba921') } m30999| Mon Dec 17 15:31:57.123 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|54||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556aba922') }max: { _id: ObjectId('50cf8130256383d556abaaf0') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.123 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abaaef') } m30999| Mon Dec 17 15:31:57.123 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|55||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abaaf0') }max: { _id: ObjectId('50cf8130256383d556abacbe') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.124 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abacbd') } m30999| Mon Dec 17 15:31:57.124 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|56||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abacbe') }max: { _id: ObjectId('50cf8130256383d556abae8c') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.125 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abae8b') } m30999| Mon Dec 17 15:31:57.125 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|57||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abae8c') }max: { _id: ObjectId('50cf8130256383d556abb05a') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.125 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb059') } m30999| Mon Dec 17 15:31:57.125 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 
1|58||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb05a') }max: { _id: ObjectId('50cf8130256383d556abb228') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.126 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb227') } m30999| Mon Dec 17 15:31:57.126 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|59||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb228') }max: { _id: ObjectId('50cf8130256383d556abb3f6') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.127 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb3f5') } m30999| Mon Dec 17 15:31:57.127 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|60||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb3f6') }max: { _id: ObjectId('50cf8130256383d556abb5c4') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.127 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb5c3') } m30999| Mon Dec 17 15:31:57.127 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|61||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb5c4') }max: { _id: ObjectId('50cf8130256383d556abb792') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.128 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb791') } m30999| Mon Dec 17 15:31:57.128 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|62||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb792') }max: { _id: ObjectId('50cf8130256383d556abb960') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.129 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb95f') } m30999| Mon Dec 17 15:31:57.129 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|63||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb960') }max: { _id: ObjectId('50cf8130256383d556abbb2e') } dataWritten: 545506 splitThreshold: 1048576 m30999| Mon Dec 17 15:31:57.129 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abbb2d') } m30999| Mon Dec 17 15:31:57.129 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|64||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abbb2e') }max: { _id: MaxKey } dataWritten: 514157 splitThreshold: 943718 m30999| Mon Dec 17 15:31:57.130 [conn1] chunk not full enough to trigger auto-split no split entry Number of chunks: 65 m30000| Mon Dec 17 15:31:57.992 [conn6] timeoutMs not support for v8 yet code: $reduce = function ( doc , out ){ out.nChunks++; } m30000| in gc m30000| in gc --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3 } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "test", "partitioned" : true, "primary" : "shard0001" } test.foo shard key: { "a" : 1 } chunks: shard0000 1 shard0001 34 { "a" : { "$MinKey" : true } } -->> { "a" : 0.39934227243065834 } on : shard0000 { "t" : 2000, "i" : 0 } { "a" : 0.39934227243065834 
} -->> { "a" : 21.165969548746943 } on : shard0001 { "t" : 2000, "i" : 2 } { "a" : 21.165969548746943 } -->> { "a" : 40.64535931684077 } on : shard0001 { "t" : 2000, "i" : 3 } { "a" : 40.64535931684077 } -->> { "a" : 62.87552835419774 } on : shard0001 { "t" : 1000, "i" : 61 } { "a" : 62.87552835419774 } -->> { "a" : 89.16067937389016 } on : shard0001 { "t" : 1000, "i" : 62 } { "a" : 89.16067937389016 } -->> { "a" : 119.03282697312534 } on : shard0001 { "t" : 1000, "i" : 59 } { "a" : 119.03282697312534 } -->> { "a" : 152.16144034639 } on : shard0001 { "t" : 1000, "i" : 60 } { "a" : 152.16144034639 } -->> { "a" : 178.15603269264102 } on : shard0001 { "t" : 1000, "i" : 57 } { "a" : 178.15603269264102 } -->> { "a" : 211.6570973303169 } on : shard0001 { "t" : 1000, "i" : 58 } { "a" : 211.6570973303169 } -->> { "a" : 244.10175322555006 } on : shard0001 { "t" : 1000, "i" : 37 } { "a" : 244.10175322555006 } -->> { "a" : 285.7821767684072 } on : shard0001 { "t" : 1000, "i" : 38 } { "a" : 285.7821767684072 } -->> { "a" : 323.89811193570495 } on : shard0001 { "t" : 1000, "i" : 33 } { "a" : 323.89811193570495 } -->> { "a" : 364.68965956009924 } on : shard0001 { "t" : 1000, "i" : 34 } { "a" : 364.68965956009924 } -->> { "a" : 395.6566429696977 } on : shard0001 { "t" : 1000, "i" : 39 } { "a" : 395.6566429696977 } -->> { "a" : 439.6139404270798 } on : shard0001 { "t" : 1000, "i" : 40 } { "a" : 439.6139404270798 } -->> { "a" : 480.0211163237691 } on : shard0001 { "t" : 1000, "i" : 31 } { "a" : 480.0211163237691 } -->> { "a" : 508.14515142701566 } on : shard0001 { "t" : 1000, "i" : 55 } { "a" : 508.14515142701566 } -->> { "a" : 538.5234889108688 } on : shard0001 { "t" : 1000, "i" : 56 } { "a" : 538.5234889108688 } -->> { "a" : 571.1331805214286 } on : shard0001 { "t" : 1000, "i" : 43 } { "a" : 571.1331805214286 } -->> { "a" : 609.4723071437329 } on : shard0001 { "t" : 1000, "i" : 44 } { "a" : 609.4723071437329 } -->> { "a" : 637.662521796301 } on : shard0001 { "t" : 1000, "i" : 45 } { "a" : 637.662521796301 } -->> { "a" : 672.8275574278086 } on : shard0001 { "t" : 1000, "i" : 46 } { "a" : 672.8275574278086 } -->> { "a" : 702.2782645653933 } on : shard0001 { "t" : 1000, "i" : 47 } { "a" : 702.2782645653933 } -->> { "a" : 718.4353433549404 } on : shard0001 { "t" : 1000, "i" : 65 } { "a" : 718.4353433549404 } -->> { "a" : 738.9611077960581 } on : shard0001 { "t" : 1000, "i" : 66 } { "a" : 738.9611077960581 } -->> { "a" : 764.6060811821371 } on : shard0001 { "t" : 1000, "i" : 53 } { "a" : 764.6060811821371 } -->> { "a" : 800.5099997390062 } on : shard0001 { "t" : 1000, "i" : 54 } { "a" : 800.5099997390062 } -->> { "a" : 826.7396320588887 } on : shard0001 { "t" : 1000, "i" : 49 } { "a" : 826.7396320588887 } -->> { "a" : 859.3603172339499 } on : shard0001 { "t" : 1000, "i" : 50 } { "a" : 859.3603172339499 } -->> { "a" : 885.969014139846 } on : shard0001 { "t" : 1000, "i" : 51 } { "a" : 885.969014139846 } -->> { "a" : 922.822616994381 } on : shard0001 { "t" : 1000, "i" : 52 } { "a" : 922.822616994381 } -->> { "a" : 954.3487632181495 } on : shard0001 { "t" : 1000, "i" : 41 } { "a" : 954.3487632181495 } -->> { "a" : 973.9556647837162 } on : shard0001 { "t" : 1000, "i" : 63 } { "a" : 973.9556647837162 } -->> { "a" : 999.9956642277539 } on : shard0001 { "t" : 1000, "i" : 64 } { "a" : 999.9956642277539 } -->> { "a" : { "$MaxKey" : true } } on : shard0001 { "t" : 1000, "i" : 4 } test.mrShardedOut shard key: { "_id" : 1 } chunks: shard0001 65 { "_id" : { "$MinKey" : true } } -->> { "_id" : 
ObjectId("50cf812d256383d556ab497c") } on : shard0001 { "t" : 1000, "i" : 0 } { "_id" : ObjectId("50cf812d256383d556ab497c") } -->> { "_id" : ObjectId("50cf812d256383d556ab4b4a") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("50cf812d256383d556ab4b4a") } -->> { "_id" : ObjectId("50cf812d256383d556ab4d18") } on : shard0001 { "t" : 1000, "i" : 2 } { "_id" : ObjectId("50cf812d256383d556ab4d18") } -->> { "_id" : ObjectId("50cf812d256383d556ab4ee6") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("50cf812d256383d556ab4ee6") } -->> { "_id" : ObjectId("50cf812d256383d556ab50b4") } on : shard0001 { "t" : 1000, "i" : 4 } { "_id" : ObjectId("50cf812d256383d556ab50b4") } -->> { "_id" : ObjectId("50cf812d256383d556ab5282") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("50cf812d256383d556ab5282") } -->> { "_id" : ObjectId("50cf812d256383d556ab5450") } on : shard0001 { "t" : 1000, "i" : 6 } { "_id" : ObjectId("50cf812d256383d556ab5450") } -->> { "_id" : ObjectId("50cf812d256383d556ab561e") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("50cf812d256383d556ab561e") } -->> { "_id" : ObjectId("50cf812d256383d556ab57ec") } on : shard0001 { "t" : 1000, "i" : 8 } { "_id" : ObjectId("50cf812d256383d556ab57ec") } -->> { "_id" : ObjectId("50cf812d256383d556ab59ba") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("50cf812d256383d556ab59ba") } -->> { "_id" : ObjectId("50cf812d256383d556ab5b88") } on : shard0001 { "t" : 1000, "i" : 10 } { "_id" : ObjectId("50cf812d256383d556ab5b88") } -->> { "_id" : ObjectId("50cf812d256383d556ab5d56") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("50cf812d256383d556ab5d56") } -->> { "_id" : ObjectId("50cf812d256383d556ab5f24") } on : shard0001 { "t" : 1000, "i" : 12 } { "_id" : ObjectId("50cf812d256383d556ab5f24") } -->> { "_id" : ObjectId("50cf812d256383d556ab60f2") } on : shard0001 { "t" : 1000, "i" : 13 } { "_id" : ObjectId("50cf812d256383d556ab60f2") } -->> { "_id" : ObjectId("50cf812d256383d556ab62c0") } on : shard0001 { "t" : 1000, "i" : 14 } { "_id" : ObjectId("50cf812d256383d556ab62c0") } -->> { "_id" : ObjectId("50cf812e256383d556ab648e") } on : shard0001 { "t" : 1000, "i" : 15 } { "_id" : ObjectId("50cf812e256383d556ab648e") } -->> { "_id" : ObjectId("50cf812e256383d556ab665c") } on : shard0001 { "t" : 1000, "i" : 16 } { "_id" : ObjectId("50cf812e256383d556ab665c") } -->> { "_id" : ObjectId("50cf812e256383d556ab682a") } on : shard0001 { "t" : 1000, "i" : 17 } { "_id" : ObjectId("50cf812e256383d556ab682a") } -->> { "_id" : ObjectId("50cf812e256383d556ab69f8") } on : shard0001 { "t" : 1000, "i" : 18 } { "_id" : ObjectId("50cf812e256383d556ab69f8") } -->> { "_id" : ObjectId("50cf812e256383d556ab6bc6") } on : shard0001 { "t" : 1000, "i" : 19 } { "_id" : ObjectId("50cf812e256383d556ab6bc6") } -->> { "_id" : ObjectId("50cf812e256383d556ab6d94") } on : shard0001 { "t" : 1000, "i" : 20 } { "_id" : ObjectId("50cf812e256383d556ab6d94") } -->> { "_id" : ObjectId("50cf812e256383d556ab6f62") } on : shard0001 { "t" : 1000, "i" : 21 } { "_id" : ObjectId("50cf812e256383d556ab6f62") } -->> { "_id" : ObjectId("50cf812e256383d556ab7130") } on : shard0001 { "t" : 1000, "i" : 22 } { "_id" : ObjectId("50cf812e256383d556ab7130") } -->> { "_id" : ObjectId("50cf812e256383d556ab72fe") } on : shard0001 { "t" : 1000, "i" : 23 } { "_id" : ObjectId("50cf812e256383d556ab72fe") } -->> { "_id" : ObjectId("50cf812e256383d556ab74cc") } on : shard0001 { "t" : 1000, "i" : 24 } { "_id" : ObjectId("50cf812e256383d556ab74cc") } -->> 
{ "_id" : ObjectId("50cf812e256383d556ab769a") } on : shard0001 { "t" : 1000, "i" : 25 } { "_id" : ObjectId("50cf812e256383d556ab769a") } -->> { "_id" : ObjectId("50cf812e256383d556ab7868") } on : shard0001 { "t" : 1000, "i" : 26 } { "_id" : ObjectId("50cf812e256383d556ab7868") } -->> { "_id" : ObjectId("50cf812e256383d556ab7a36") } on : shard0001 { "t" : 1000, "i" : 27 } { "_id" : ObjectId("50cf812e256383d556ab7a36") } -->> { "_id" : ObjectId("50cf812e256383d556ab7c04") } on : shard0001 { "t" : 1000, "i" : 28 } { "_id" : ObjectId("50cf812e256383d556ab7c04") } -->> { "_id" : ObjectId("50cf812e256383d556ab7dd2") } on : shard0001 { "t" : 1000, "i" : 29 } { "_id" : ObjectId("50cf812e256383d556ab7dd2") } -->> { "_id" : ObjectId("50cf812e256383d556ab7fa0") } on : shard0001 { "t" : 1000, "i" : 30 } { "_id" : ObjectId("50cf812e256383d556ab7fa0") } -->> { "_id" : ObjectId("50cf812f256383d556ab816e") } on : shard0001 { "t" : 1000, "i" : 31 } { "_id" : ObjectId("50cf812f256383d556ab816e") } -->> { "_id" : ObjectId("50cf812f256383d556ab833c") } on : shard0001 { "t" : 1000, "i" : 32 } { "_id" : ObjectId("50cf812f256383d556ab833c") } -->> { "_id" : ObjectId("50cf812f256383d556ab850a") } on : shard0001 { "t" : 1000, "i" : 33 } { "_id" : ObjectId("50cf812f256383d556ab850a") } -->> { "_id" : ObjectId("50cf812f256383d556ab86d8") } on : shard0001 { "t" : 1000, "i" : 34 } { "_id" : ObjectId("50cf812f256383d556ab86d8") } -->> { "_id" : ObjectId("50cf812f256383d556ab88a6") } on : shard0001 { "t" : 1000, "i" : 35 } { "_id" : ObjectId("50cf812f256383d556ab88a6") } -->> { "_id" : ObjectId("50cf812f256383d556ab8a74") } on : shard0001 { "t" : 1000, "i" : 36 } { "_id" : ObjectId("50cf812f256383d556ab8a74") } -->> { "_id" : ObjectId("50cf812f256383d556ab8c42") } on : shard0001 { "t" : 1000, "i" : 37 } { "_id" : ObjectId("50cf812f256383d556ab8c42") } -->> { "_id" : ObjectId("50cf812f256383d556ab8e10") } on : shard0001 { "t" : 1000, "i" : 38 } { "_id" : ObjectId("50cf812f256383d556ab8e10") } -->> { "_id" : ObjectId("50cf812f256383d556ab8fde") } on : shard0001 { "t" : 1000, "i" : 39 } { "_id" : ObjectId("50cf812f256383d556ab8fde") } -->> { "_id" : ObjectId("50cf812f256383d556ab91ac") } on : shard0001 { "t" : 1000, "i" : 40 } { "_id" : ObjectId("50cf812f256383d556ab91ac") } -->> { "_id" : ObjectId("50cf812f256383d556ab937a") } on : shard0001 { "t" : 1000, "i" : 41 } { "_id" : ObjectId("50cf812f256383d556ab937a") } -->> { "_id" : ObjectId("50cf812f256383d556ab9548") } on : shard0001 { "t" : 1000, "i" : 42 } { "_id" : ObjectId("50cf812f256383d556ab9548") } -->> { "_id" : ObjectId("50cf812f256383d556ab9716") } on : shard0001 { "t" : 1000, "i" : 43 } { "_id" : ObjectId("50cf812f256383d556ab9716") } -->> { "_id" : ObjectId("50cf812f256383d556ab98e4") } on : shard0001 { "t" : 1000, "i" : 44 } { "_id" : ObjectId("50cf812f256383d556ab98e4") } -->> { "_id" : ObjectId("50cf812f256383d556ab9ab2") } on : shard0001 { "t" : 1000, "i" : 45 } { "_id" : ObjectId("50cf812f256383d556ab9ab2") } -->> { "_id" : ObjectId("50cf812f256383d556ab9c80") } on : shard0001 { "t" : 1000, "i" : 46 } { "_id" : ObjectId("50cf812f256383d556ab9c80") } -->> { "_id" : ObjectId("50cf812f256383d556ab9e4e") } on : shard0001 { "t" : 1000, "i" : 47 } { "_id" : ObjectId("50cf812f256383d556ab9e4e") } -->> { "_id" : ObjectId("50cf812f256383d556aba01c") } on : shard0001 { "t" : 1000, "i" : 48 } { "_id" : ObjectId("50cf812f256383d556aba01c") } -->> { "_id" : ObjectId("50cf8130256383d556aba1ea") } on : shard0001 { "t" : 1000, "i" : 49 } { "_id" : 
ObjectId("50cf8130256383d556aba1ea") } -->> { "_id" : ObjectId("50cf8130256383d556aba3b8") } on : shard0001 { "t" : 1000, "i" : 50 } { "_id" : ObjectId("50cf8130256383d556aba3b8") } -->> { "_id" : ObjectId("50cf8130256383d556aba586") } on : shard0001 { "t" : 1000, "i" : 51 } { "_id" : ObjectId("50cf8130256383d556aba586") } -->> { "_id" : ObjectId("50cf8130256383d556aba754") } on : shard0001 { "t" : 1000, "i" : 52 } { "_id" : ObjectId("50cf8130256383d556aba754") } -->> { "_id" : ObjectId("50cf8130256383d556aba922") } on : shard0001 { "t" : 1000, "i" : 53 } { "_id" : ObjectId("50cf8130256383d556aba922") } -->> { "_id" : ObjectId("50cf8130256383d556abaaf0") } on : shard0001 { "t" : 1000, "i" : 54 } { "_id" : ObjectId("50cf8130256383d556abaaf0") } -->> { "_id" : ObjectId("50cf8130256383d556abacbe") } on : shard0001 { "t" : 1000, "i" : 55 } { "_id" : ObjectId("50cf8130256383d556abacbe") } -->> { "_id" : ObjectId("50cf8130256383d556abae8c") } on : shard0001 { "t" : 1000, "i" : 56 } { "_id" : ObjectId("50cf8130256383d556abae8c") } -->> { "_id" : ObjectId("50cf8130256383d556abb05a") } on : shard0001 { "t" : 1000, "i" : 57 } { "_id" : ObjectId("50cf8130256383d556abb05a") } -->> { "_id" : ObjectId("50cf8130256383d556abb228") } on : shard0001 { "t" : 1000, "i" : 58 } { "_id" : ObjectId("50cf8130256383d556abb228") } -->> { "_id" : ObjectId("50cf8130256383d556abb3f6") } on : shard0001 { "t" : 1000, "i" : 59 } { "_id" : ObjectId("50cf8130256383d556abb3f6") } -->> { "_id" : ObjectId("50cf8130256383d556abb5c4") } on : shard0001 { "t" : 1000, "i" : 60 } { "_id" : ObjectId("50cf8130256383d556abb5c4") } -->> { "_id" : ObjectId("50cf8130256383d556abb792") } on : shard0001 { "t" : 1000, "i" : 61 } { "_id" : ObjectId("50cf8130256383d556abb792") } -->> { "_id" : ObjectId("50cf8130256383d556abb960") } on : shard0001 { "t" : 1000, "i" : 62 } { "_id" : ObjectId("50cf8130256383d556abb960") } -->> { "_id" : ObjectId("50cf8130256383d556abbb2e") } on : shard0001 { "t" : 1000, "i" : 63 } { "_id" : ObjectId("50cf8130256383d556abbb2e") } -->> { "_id" : { "$MaxKey" : true } } on : shard0001 { "t" : 1000, "i" : 64 } ---- Checking chunk distribution ---- Number of chunks for shard shard0001: 65 ---- Iteration 1: saving new batch of 30000 documents ---- ========> Saved total of 30000 documents m30999| Mon Dec 17 15:31:58.035 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|3, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 38 m30999| Mon Dec 17 15:31:58.035 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } ========> Saved total of 31000 documents ========> Saved total of 32000 documents ========> Saved total of 33000 documents m30999| Mon Dec 17 15:31:58.436 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { a: 244.1017532255501 }max: { a: 285.7821767684072 } dataWritten: 210302 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:58.436 [conn4] request split points lookup for chunk test.foo { : 244.1017532255501 } -->> { : 285.7821767684072 } m30001| Mon Dec 17 15:31:58.437 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 244.1017532255501 } -->> { : 285.7821767684072 } ========> Saved total of 34000 documents m30001| Mon Dec 17 
15:31:58.445 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 244.1017532255501 }, max: { a: 285.7821767684072 }, from: "shard0001", splitKeys: [ { a: 257.4645006097853 } ], shardId: "test.foo-a_244.1017532255501", configdb: "localhost:30000" } ========> Saved total of 35000 documents m30001| Mon Dec 17 15:31:58.474 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf813ec94e4981dc6c1b13 m30001| Mon Dec 17 15:31:58.475 [conn4] splitChunk accepted at version 2|3||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:58.475 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:58-40", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776318475), what: "split", ns: "test.foo", details: { before: { min: { a: 244.1017532255501 }, max: { a: 285.7821767684072 }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 244.1017532255501 }, max: { a: 257.4645006097853 }, lastmod: Timestamp 2000|4, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 257.4645006097853 }, max: { a: 285.7821767684072 }, lastmod: Timestamp 2000|5, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:58.476 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30999| Mon Dec 17 15:31:58.477 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 39 version: 2|5||50cf812d5ec0810ee359b569 based on: 2|3||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:58.477 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { a: 244.1017532255501 }max: { a: 285.7821767684072 } on: { a: 257.4645006097853 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:58.477 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|5, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 39 m30999| Mon Dec 17 15:31:58.477 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|3, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } ========> Saved total of 36000 documents ========> Saved total of 37000 documents ========> Saved total of 38000 documents m30999| Mon Dec 17 15:31:58.734 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0000", shardHost: "localhost:30000" } 0x91767f8 39 ========> Saved total of 39000 documents m30999| Mon Dec 17 15:31:58.786 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, errmsg: "first time for collection 'test.foo'", ok: 0.0 } m30999| Mon Dec 17 15:31:58.786 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x91767f8 39 m30999| Mon Dec 17 15:31:58.787 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Mon Dec 17 15:31:58.830 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|52||000000000000000000000000min: { a: 885.969014139846 }max: { a: 922.822616994381 } dataWritten: 210524 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:58.830 [conn4] request split points lookup for chunk test.foo { : 885.969014139846 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:58.831 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 885.969014139846 } -->> { : 922.822616994381 } m30001| Mon Dec 17 15:31:58.831 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 885.969014139846 }, max: { a: 922.822616994381 }, from: "shard0001", splitKeys: [ { a: 899.337342241779 } ], shardId: "test.foo-a_885.969014139846", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:58.835 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf813ec94e4981dc6c1b14 m30001| Mon Dec 17 15:31:58.836 [conn4] splitChunk accepted at version 2|5||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:58.837 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:58-41", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776318837), what: "split", ns: "test.foo", details: { before: { min: { a: 885.969014139846 }, max: { a: 922.822616994381 }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 885.969014139846 }, max: { a: 899.337342241779 }, lastmod: Timestamp 2000|6, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 899.337342241779 }, max: { a: 922.822616994381 }, lastmod: Timestamp 2000|7, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:58.837 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
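[Editor's sketch] The MapReduce summary printed above (input 30000, emit 30000, reduce 0, output 30000) comes from a job writing to a sharded output collection, which is what seeded the 65 test.mrShardedOut chunks whose autosplit checks fill this log. A minimal mongo-shell sketch of an equivalent invocation follows; the map and reduce bodies are illustrative assumptions, not the test's actual functions:

    // Assumed: each input document is emitted once under a unique key, which is
    // consistent with "reduce" : 0 in the summary above (no key ever repeats).
    var mapFn = function () { emit(this._id, 1); };
    var reduceFn = function (key, values) { return Array.sum(values); };
    var res = db.foo.mapReduce(mapFn, reduceFn,
                               { out: { replace: "mrShardedOut", sharded: true } });
    printjson(res.counts);
    // "Checking that all MapReduce output documents are in output collection":
    assert.eq(res.counts.output, db.mrShardedOut.count());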
========> Saved total of 40000 documents m30999| Mon Dec 17 15:31:58.838 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 40 version: 2|7||50cf812d5ec0810ee359b569 based on: 2|5||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:58.838 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|52||000000000000000000000000min: { a: 885.969014139846 }max: { a: 922.822616994381 } on: { a: 899.337342241779 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:58.838 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|7, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 40 m30999| Mon Dec 17 15:31:58.838 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|3, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:58.883 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0000", shardHost: "localhost:30000" } 0x91767f8 40 m30999| Mon Dec 17 15:31:58.897 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } ========> Saved total of 41000 documents ========> Saved total of 42000 documents ========> Saved total of 43000 documents m30000| Mon Dec 17 15:31:58.786 [conn6] no current chunk manager found for this shard, will initialize m30999| Mon Dec 17 15:31:59.119 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { a: 395.6566429696977 }max: { a: 439.6139404270798 } dataWritten: 210524 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:59.119 [conn4] request split points lookup for chunk test.foo { : 395.6566429696977 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:59.120 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 395.6566429696977 } -->> { : 439.6139404270798 } m30001| Mon Dec 17 15:31:59.122 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 395.6566429696977 }, max: { a: 439.6139404270798 }, from: "shard0001", splitKeys: [ { a: 407.1405665017664 } ], shardId: "test.foo-a_395.6566429696977", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:59.125 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf813fc94e4981dc6c1b15 m30001| Mon Dec 17 15:31:59.126 [conn4] splitChunk accepted at version 2|7||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:59.126 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:59-42", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776319126), what: "split", ns: "test.foo", details: { before: { min: { a: 395.6566429696977 }, max: { a: 439.6139404270798 }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 395.6566429696977 }, max: { a: 407.1405665017664 }, lastmod: Timestamp 2000|8, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 407.1405665017664 }, max: { a: 439.6139404270798 }, lastmod: Timestamp 2000|9, lastmodEpoch: 
ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:59.127 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30999| Mon Dec 17 15:31:59.128 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 41 version: 2|9||50cf812d5ec0810ee359b569 based on: 2|7||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:59.131 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { a: 395.6566429696977 }max: { a: 439.6139404270798 } on: { a: 407.1405665017664 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:59.131 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|9, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 41 m30999| Mon Dec 17 15:31:59.131 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|3, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:31:59.178 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0000", shardHost: "localhost:30000" } 0x91767f8 41 ========> Saved total of 44000 documents m30999| Mon Dec 17 15:31:59.199 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } ========> Saved total of 45000 documents m30999| Mon Dec 17 15:31:59.430 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 480.0211163237691 } dataWritten: 210427 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:59.430 [conn4] request split points lookup for chunk test.foo { : 439.6139404270798 } -->> { : 480.0211163237691 } m30001| Mon Dec 17 15:31:59.431 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 439.6139404270798 } -->> { : 480.0211163237691 } m30001| Mon Dec 17 15:31:59.431 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 439.6139404270798 }, max: { a: 480.0211163237691 }, from: "shard0001", splitKeys: [ { a: 450.486421585083 } ], shardId: "test.foo-a_439.6139404270798", configdb: "localhost:30000" } m30001| Mon Dec 17 15:31:59.435 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf813fc94e4981dc6c1b16 m30001| Mon Dec 17 15:31:59.436 [conn4] splitChunk accepted at version 2|9||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:59.436 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:59-43", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776319436), what: "split", ns: "test.foo", details: { before: { min: { a: 439.6139404270798 }, max: { a: 480.0211163237691 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 439.6139404270798 }, max: { a: 450.486421585083 }, lastmod: Timestamp 2000|10, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 450.486421585083 }, max: { a: 480.0211163237691 }, lastmod: Timestamp 2000|11, lastmodEpoch: 
ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:59.437 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30999| Mon Dec 17 15:31:59.438 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 42 version: 2|11||50cf812d5ec0810ee359b569 based on: 2|9||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:59.441 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { a: 439.6139404270798 }max: { a: 480.0211163237691 } on: { a: 450.486421585083 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:59.441 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|11, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 42 m30999| Mon Dec 17 15:31:59.441 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|3, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } ========> Saved total of 46000 documents ========> Saved total of 47000 documents ========> Saved total of 48000 documents ========> Saved total of 49000 documents ========> Saved total of 50000 documents m30999| Mon Dec 17 15:31:59.786 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|34||000000000000000000000000min: { a: 323.8981119357049 }max: { a: 364.6896595600992 } dataWritten: 210427 splitThreshold: 1048576 m30001| Mon Dec 17 15:31:59.787 [conn4] request split points lookup for chunk test.foo { : 323.8981119357049 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:59.788 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 323.8981119357049 } -->> { : 364.6896595600992 } m30001| Mon Dec 17 15:31:59.788 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 323.8981119357049 }, max: { a: 364.6896595600992 }, from: "shard0001", splitKeys: [ { a: 333.1166377756745 } ], shardId: "test.foo-a_323.8981119357049", configdb: "localhost:30000" } ========> Saved total of 51000 documents m30001| Mon Dec 17 15:31:59.789 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf813fc94e4981dc6c1b17 m30001| Mon Dec 17 15:31:59.790 [conn4] splitChunk accepted at version 2|11||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:31:59.790 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:31:59-44", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776319790), what: "split", ns: "test.foo", details: { before: { min: { a: 323.8981119357049 }, max: { a: 364.6896595600992 }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 323.8981119357049 }, max: { a: 333.1166377756745 }, lastmod: Timestamp 2000|12, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 333.1166377756745 }, max: { a: 364.6896595600992 }, lastmod: Timestamp 2000|13, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:31:59.791 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
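[Editor's sketch] Each "request split points lookup" / "received splitChunk request" pair above is the autosplit path: mongos tracks dataWritten per chunk, asks the shard for candidate split points once roughly the splitThreshold has been written, then issues splitChunk under the collection's distributed lock. The same split can be driven by hand; a sketch reusing the split point 333.1166377756745 reported in the log:

    // From a mongos shell: sh.splitAt() wraps the "split" admin command with an
    // explicit "middle" key, producing the same metadata event as an autosplit.
    sh.splitAt("test.foo", { a: 333.1166377756745 });

    // splitVector is the command behind "request split points lookup"; run against
    // the shard it reports where cuts would fall without performing them (bounds
    // taken from the chunk in the log entry above; exact invocation assumed):
    db.adminCommand({ splitVector: "test.foo", keyPattern: { a: 1.0 },
                      min: { a: 323.8981119357049 }, max: { a: 364.6896595600992 },
                      maxChunkSizeBytes: 1048576 });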
m30999| Mon Dec 17 15:31:59.792 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 43 version: 2|13||50cf812d5ec0810ee359b569 based on: 2|11||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:31:59.792 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|34||000000000000000000000000min: { a: 323.8981119357049 }max: { a: 364.6896595600992 } on: { a: 333.1166377756745 } (splitThreshold 1048576) m30999| Mon Dec 17 15:31:59.792 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|13, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 43 m30999| Mon Dec 17 15:31:59.792 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|3, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } ========> Saved total of 52000 documents ========> Saved total of 53000 documents m30999| Mon Dec 17 15:32:00.094 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|54||000000000000000000000000min: { a: 764.6060811821371 }max: { a: 800.5099997390062 } dataWritten: 210427 splitThreshold: 1048576 m30001| Mon Dec 17 15:32:00.094 [conn4] request split points lookup for chunk test.foo { : 764.6060811821371 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:32:00.095 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 764.6060811821371 } -->> { : 800.5099997390062 } m30001| Mon Dec 17 15:32:00.095 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 764.6060811821371 }, max: { a: 800.5099997390062 }, from: "shard0001", splitKeys: [ { a: 773.1654124800116 } ], shardId: "test.foo-a_764.6060811821371", configdb: "localhost:30000" } m30999| Mon Dec 17 15:32:00.099 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 44 version: 2|15||50cf812d5ec0810ee359b569 based on: 2|13||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:00.099 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|54||000000000000000000000000min: { a: 764.6060811821371 }max: { a: 800.5099997390062 } on: { a: 773.1654124800116 } (splitThreshold 1048576) m30999| Mon Dec 17 15:32:00.099 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|15, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 44 m30999| Mon Dec 17 15:32:00.100 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|3, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30001| Mon Dec 17 15:32:00.096 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8140c94e4981dc6c1b18 m30001| Mon Dec 17 15:32:00.097 [conn4] splitChunk accepted at version 2|13||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:00.097 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:00-45", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776320097), what: "split", ns: "test.foo", details: { before: { min: { a: 764.6060811821371 }, max: { a: 800.5099997390062 }, lastmod: Timestamp 1000|54, lastmodEpoch: 
ObjectId('000000000000000000000000') }, left: { min: { a: 764.6060811821371 }, max: { a: 773.1654124800116 }, lastmod: Timestamp 2000|14, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 773.1654124800116 }, max: { a: 800.5099997390062 }, lastmod: Timestamp 2000|15, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:32:00.098 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30999| Mon Dec 17 15:32:00.173 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0000", shardHost: "localhost:30000" } 0x91767f8 44 m30999| Mon Dec 17 15:32:00.173 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } ========> Saved total of 54000 documents ========> Saved total of 55000 documents ========> Saved total of 56000 documents ========> Saved total of 57000 documents m30999| Mon Dec 17 15:32:00.903 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|44||000000000000000000000000min: { a: 571.1331805214286 }max: { a: 609.4723071437329 } dataWritten: 210253 splitThreshold: 1048576 m30001| Mon Dec 17 15:32:00.903 [conn4] request split points lookup for chunk test.foo { : 571.1331805214286 } -->> { : 609.4723071437329 } m30001| Mon Dec 17 15:32:00.904 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 571.1331805214286 } -->> { : 609.4723071437329 } m30001| Mon Dec 17 15:32:00.908 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 571.1331805214286 }, max: { a: 609.4723071437329 }, from: "shard0001", splitKeys: [ { a: 579.135412350297 } ], shardId: "test.foo-a_571.1331805214286", configdb: "localhost:30000" } m30999| Mon Dec 17 15:32:00.911 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 45 version: 2|17||50cf812d5ec0810ee359b569 based on: 2|15||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:00.909 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8140c94e4981dc6c1b19 m30001| Mon Dec 17 15:32:00.909 [conn4] splitChunk accepted at version 2|15||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:00.910 [conn4] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:00-46", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42513", time: new Date(1355776320910), what: "split", ns: "test.foo", details: { before: { min: { a: 571.1331805214286 }, max: { a: 609.4723071437329 }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 571.1331805214286 }, max: { a: 579.135412350297 }, lastmod: Timestamp 2000|16, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 579.135412350297 }, max: { a: 609.4723071437329 }, lastmod: Timestamp 2000|17, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:32:00.910 [conn4] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
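[Editor's sketch] The earlier "Checking chunk distribution" step, and the $reduce snippet in the sharding-status output, amount to a group-by-shard over config.chunks. A shell sketch that reproduces the "Number of chunks for shard shard0001: 65" count, assuming a connection to the same mongos:

    // config.chunks holds one document per chunk, with "ns" and "shard" fields.
    var counts = {};
    db.getSiblingDB("config").chunks.find({ ns: "test.mrShardedOut" })
        .forEach(function (c) {
            counts[c.shard] = (counts[c.shard] || 0) + 1;   // tally one per chunk
        });
    printjson(counts);   // expected at this point: { "shard0001" : 65 }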
m30999| Mon Dec 17 15:32:00.912 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|44||000000000000000000000000min: { a: 571.1331805214286 }max: { a: 609.4723071437329 } on: { a: 579.135412350297 } (splitThreshold 1048576) m30999| Mon Dec 17 15:32:00.912 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|17, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 45 m30999| Mon Dec 17 15:32:00.912 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|3, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } ========> Saved total of 58000 documents ========> Saved total of 59000 documents ========> Finished saving total of 60000 documents ---- No errors on insert batch. ---- m30999| Mon Dec 17 15:32:01.304 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0000", shardHost: "localhost:30000" } 0x91767f8 45 m30999| Mon Dec 17 15:32:01.304 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 } m30999| Mon Dec 17 15:32:02.230 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:32:02.238 [Balancer] creating new connection to:localhost:30001 m30999| Mon Dec 17 15:32:02.238 BackgroundJob starting: ConnectBG m30999| Mon Dec 17 15:32:02.239 [Balancer] connected connection! m30001| Mon Dec 17 15:32:02.239 [initandlisten] connection accepted from 127.0.0.1:42550 #8 (8 connections now open) m30999| Mon Dec 17 15:32:02.257 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:32:02.258 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:32:02 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf81425ec0810ee359b56e" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf813b5ec0810ee359b56d" } } m30001| Mon Dec 17 15:32:02.264 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 0.3993422724306583 }, max: { a: 21.16596954874694 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_0.3993422724306583", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:02.265 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8142c94e4981dc6c1b1a m30001| Mon Dec 17 15:32:02.265 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:02-47", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776322265), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 0.3993422724306583 }, max: { a: 21.16596954874694 }, 
from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:02.266 [conn8] moveChunk request accepted at version 2|17||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:02.267 [conn8] can't move chunk of size (approximately) 1375696 because maximum size allowed to move is 1048576 ns: test.foo { a: 0.3993422724306583 } -> { a: 21.16596954874694 } m30001| Mon Dec 17 15:32:02.267 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:02.267 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:02.268 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:32:02.268 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:02-48", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776322268), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 0.3993422724306583 }, max: { a: 21.16596954874694 }, step1 of 6: 0, step2 of 6: 1, note: "aborted" } } m30001| Mon Dec 17 15:32:02.268 [conn8] request split points lookup for chunk test.foo { : 0.3993422724306583 } -->> { : 21.16596954874694 } m30001| Mon Dec 17 15:32:02.269 [conn8] splitVector doing another cycle because of force, keyCount now: 605 m30001| Mon Dec 17 15:32:02.270 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 0.3993422724306583 }, max: { a: 21.16596954874694 }, from: "shard0001", splitKeys: [ { a: 10.46284288167953 } ], shardId: "test.foo-a_0.3993422724306583", configdb: "localhost:30000" } m30001| Mon Dec 17 15:32:02.271 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8142c94e4981dc6c1b1b m30001| Mon Dec 17 15:32:02.272 [conn8] splitChunk accepted at version 2|17||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:02.273 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:02-49", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776322272), what: "split", ns: "test.foo", details: { before: { min: { a: 0.3993422724306583 }, max: { a: 21.16596954874694 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, lastmod: Timestamp 2000|19, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:32:02.273 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
m30001| Mon Dec 17 15:32:02.274 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:02.275 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8142c94e4981dc6c1b1c
m30001| Mon Dec 17 15:32:02.275 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:02-50", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776322275), what: "moveChunk.start", ns: "test.mrShardedOut", details: { min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:02.276 [conn8] moveChunk request accepted at version 1|64||50cf81365ec0810ee359b56b
m30001| Mon Dec 17 15:32:02.276 [conn8] moveChunk number of documents: 461
m30999| Mon Dec 17 15:32:02.262 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81425ec0810ee359b56e
m30999| Mon Dec 17 15:32:02.262 [Balancer] *** start balancing round
m30999| Mon Dec 17 15:32:02.263 [Balancer] shard0001 has more chunks me:41 best: shard0000:1
m30999| Mon Dec 17 15:32:02.263 [Balancer] collection : test.foo
m30999| Mon Dec 17 15:32:02.263 [Balancer] donor : shard0001 chunks on 41
m30999| Mon Dec 17 15:32:02.263 [Balancer] receiver : shard0000 chunks on 1
m30999| Mon Dec 17 15:32:02.263 [Balancer] threshold : 4
m30999| Mon Dec 17 15:32:02.264 [Balancer] ns: test.foo going to move { _id: "test.foo-a_0.3993422724306583", lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 0.3993422724306583 }, max: { a: 21.16596954874694 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:02.264 [Balancer] shard0001 has more chunks me:65 best: shard0000:0
m30999| Mon Dec 17 15:32:02.264 [Balancer] collection : test.mrShardedOut
m30999| Mon Dec 17 15:32:02.264 [Balancer] donor : shard0001 chunks on 65
m30999| Mon Dec 17 15:32:02.264 [Balancer] receiver : shard0000 chunks on 0
m30999| Mon Dec 17 15:32:02.264 [Balancer] threshold : 4
m30999| Mon Dec 17 15:32:02.264 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:02.264 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 2|2||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 21.16596954874694 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30999| Mon Dec 17 15:32:02.268 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 1375696, errmsg: "chunk too big to move", ok: 0.0 }
m30999| Mon Dec 17 15:32:02.268 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 1375696, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { a: 0.3993422724306583 } max: { a: 0.3993422724306583 }
m30999| Mon Dec 17 15:32:02.268 [Balancer] forcing a split because migrate failed for size reasons
m30999| Mon Dec 17 15:32:02.274 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 46 version: 2|19||50cf812d5ec0810ee359b569 based on: 2|17||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:32:02.274 [Balancer] forced split results: { ok: 1.0 }
m30999| Mon Dec 17 15:32:02.274 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: ObjectId('50cf812d256383d556ab497c') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:02.316 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 380, clonedBytes: 410780, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Mon Dec 17 15:32:02.277 [migrateThread] build index test.mrShardedOut { _id: 1 }
m30000| Mon Dec 17 15:32:02.278 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:32:02.278 [migrateThread] info: creating collection test.mrShardedOut on add index
m30000| Mon Dec 17 15:32:02.320 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:02.320 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: MinKey } -> { _id: ObjectId('50cf812d256383d556ab497c') }
m30001| Mon Dec 17 15:32:02.322 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 461, clonedBytes: 498341, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:02.322 [conn8] moveChunk setting version to: 2|0||50cf81365ec0810ee359b56b
m30000| Mon Dec 17 15:32:02.322 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:02.352 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:02.352 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: MinKey } -> { _id: ObjectId('50cf812d256383d556ab497c') }
m30000| Mon Dec 17 15:32:02.352 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:02-1", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776322352), what: "moveChunk.to", ns: "test.mrShardedOut", details: { min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 41, step4 of 5: 0, step5 of 5: 32 } }
m30001| Mon Dec 17 15:32:02.354 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 461, clonedBytes: 498341, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:02.354 [conn8] moveChunk updating self version to: 2|1||50cf81365ec0810ee359b56b through { _id: ObjectId('50cf812d256383d556ab497c') } -> { _id: ObjectId('50cf812d256383d556ab4b4a') } for collection 'test.mrShardedOut'
m30001| Mon Dec 17 15:32:02.355 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:02-51", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776322355), what: "moveChunk.commit", ns: "test.mrShardedOut", details: { min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:02.355 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:02.355 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:02.355 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:02.355 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:02.355 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:02.355 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:32:02.356 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:02-52", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776322356), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: MinKey }, max: { _id: ObjectId('50cf812d256383d556ab497c') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 45, step5 of 6: 32, step6 of 6: 0 } }
m30001| Mon Dec 17 15:32:02.358 [cleanupOldData-50cf8142c94e4981dc6c1b1d] (start) waiting to cleanup test.mrShardedOut from { _id: MinKey } -> { _id: ObjectId('50cf812d256383d556ab497c') }, # cursors remaining: 0
m30999| Mon Dec 17 15:32:02.356 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:32:02.357 [Balancer] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 47 version: 2|1||50cf81365ec0810ee359b56b based on: 1|64||50cf81365ec0810ee359b56b
m30999| Mon Dec 17 15:32:02.357 [Balancer] *** end of balancing round
m30999| Mon Dec 17 15:32:02.357 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked.
m30001| Mon Dec 17 15:32:02.390 [cleanupOldData-50cf8142c94e4981dc6c1b1d] waiting to remove documents for test.mrShardedOut from { _id: MinKey } -> { _id: ObjectId('50cf812d256383d556ab497c') }
m30001| Mon Dec 17 15:32:02.391 [cleanupOldData-50cf8142c94e4981dc6c1b1d] moveChunk starting delete for: test.mrShardedOut from { _id: MinKey } -> { _id: ObjectId('50cf812d256383d556ab497c') }
m30001| Mon Dec 17 15:32:02.426 [cleanupOldData-50cf8142c94e4981dc6c1b1d] moveChunk deleted 461 documents for test.mrShardedOut from { _id: MinKey } -> { _id: ObjectId('50cf812d256383d556ab497c') }
---- Setup OK: count matches (60000) -- Starting MapReduce ----
m30999| Mon Dec 17 15:32:02.953 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0000", shardHost: "localhost:30000" } 0x91767f8 46
m30000| Mon Dec 17 15:32:02.990 [conn6] CMD: drop test.tmp.mr.foo_0_inc
m30000| Mon Dec 17 15:32:02.991 [conn6] build index test.tmp.mr.foo_0_inc { 0: 1 }
m30000| Mon Dec 17 15:32:02.991 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:32:02.991 [conn6] CMD: drop test.tmp.mr.foo_0
m30000| Mon Dec 17 15:32:02.991 [conn6] build index test.tmp.mr.foo_0 { _id: 1 }
m30000| Mon Dec 17 15:32:02.992 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Mon Dec 17 15:32:02.996 [conn6] CMD: drop test.tmp.mrs.foo_1355776322_1
m30000| Mon Dec 17 15:32:02.998 [conn6] CMD: drop test.tmp.mr.foo_0
m30000| Mon Dec 17 15:32:02.998 [conn6] CMD: drop test.tmp.mr.foo_0
m30000| Mon Dec 17 15:32:02.998 [conn6] CMD: drop test.tmp.mr.foo_0_inc
m30001| Mon Dec 17 15:32:02.954 [conn3] CMD: drop test.tmp.mr.foo_2_inc
m30001| Mon Dec 17 15:32:02.954 [conn3] build index test.tmp.mr.foo_2_inc { 0: 1 }
m30001| Mon Dec 17 15:32:02.955 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Mon Dec 17 15:32:02.955 [conn3] CMD: drop test.tmp.mr.foo_2
m30001| Mon Dec 17 15:32:02.955 [conn3] build index test.tmp.mr.foo_2 { _id: 1 }
m30001| Mon Dec 17 15:32:02.955 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Mon Dec 17 15:32:02.953 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
m30999| Mon Dec 17 15:32:02.953 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|19, versionEpoch: ObjectId('50cf812d5ec0810ee359b569'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 46
m30999| Mon Dec 17 15:32:02.954 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|3, oldVersionEpoch: ObjectId('50cf812d5ec0810ee359b569'), ok: 1.0 }
m30001| Mon Dec 17 15:32:03.437 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_0.3993422724306583", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30999| Mon Dec 17 15:32:03.406 [Balancer] Refreshing MaxChunkSize: 1
m30999| Mon Dec 17 15:32:03.406 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 )
m30999| Mon Dec 17 15:32:03.406 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383",
m30999| "when" : { "$date" : "Mon Dec 17 15:32:03 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "50cf81435ec0810ee359b56f" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "50cf81425ec0810ee359b56e" } }
m30999| Mon Dec 17 15:32:03.434 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81435ec0810ee359b56f
m30999| Mon Dec 17 15:32:03.434 [Balancer] *** start balancing round
m30999| Mon Dec 17 15:32:03.436 [Balancer] shard0001 has more chunks me:42 best: shard0000:1
m30999| Mon Dec 17 15:32:03.436 [Balancer] collection : test.foo
m30999| Mon Dec 17 15:32:03.436 [Balancer] donor : shard0001 chunks on 42
m30999| Mon Dec 17 15:32:03.436 [Balancer] receiver : shard0000 chunks on 1
m30999| Mon Dec 17 15:32:03.436 [Balancer] threshold : 2
m30999| Mon Dec 17 15:32:03.436 [Balancer] ns: test.foo going to move { _id: "test.foo-a_0.3993422724306583", lastmod: Timestamp 2000|18, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:03.436 [Balancer] shard0001 has more chunks me:64 best: shard0000:1
m30999| Mon Dec 17 15:32:03.436 [Balancer] collection : test.mrShardedOut
m30999| Mon Dec 17 15:32:03.436 [Balancer] donor : shard0001 chunks on 64
m30999| Mon Dec 17 15:32:03.436 [Balancer] receiver : shard0000 chunks on 1
m30999| Mon Dec 17 15:32:03.436 [Balancer] threshold : 2
m30999| Mon Dec 17 15:32:03.436 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab497c')", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab497c') }, max: { _id: ObjectId('50cf812d256383d556ab4b4a') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:03.436 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 2|18||000000000000000000000000min: { a: 0.3993422724306583 }max: { a: 10.46284288167953 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:03.456 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8143c94e4981dc6c1b1e
m30001| Mon Dec 17 15:32:03.456 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:03-53", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776323456), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:03.456 [conn8] moveChunk request accepted at version 2|19||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:32:03.459 [conn8] moveChunk number of documents: 605
m30001| Mon Dec 17 15:32:03.462 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:03.466 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:03.474 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:03.494 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 215, clonedBytes: 231340, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Mon Dec 17 15:32:03.513 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:03.513 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 0.3993422724306583 } -> { a: 10.46284288167953 }
m30001| Mon Dec 17 15:32:03.514 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 605, clonedBytes: 650980, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:03.526 [conn8] moveChunk setting version to: 3|0||50cf812d5ec0810ee359b569
m30000| Mon Dec 17 15:32:03.526 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:03.526 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 0.3993422724306583 } -> { a: 10.46284288167953 }
m30000| Mon Dec 17 15:32:03.526 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:03-2", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776323526), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 34, step4 of 5: 0, step5 of 5: 13 } }
m30001| Mon Dec 17 15:32:03.530 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 605, clonedBytes: 650980, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:03.530 [conn8] moveChunk updating self version to: 3|1||50cf812d5ec0810ee359b569 through { a: 10.46284288167953 } -> { a: 21.16596954874694 } for collection 'test.foo'
m30001| Mon Dec 17 15:32:03.531 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:03-54", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776323531), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:03.531 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:03.533 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:03.533 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:03.533 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:03.533 [cleanupOldData-50cf8143c94e4981dc6c1b1f] (start) waiting to cleanup test.foo from { a: 0.3993422724306583 } -> { a: 10.46284288167953 }, # cursors remaining: 2
m30001| Mon Dec 17 15:32:03.539 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:03.539 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
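The tmp.mr.foo_*_inc, tmp.mr.foo_* and tmp.mrs.foo_* collections being built and dropped above are the per-shard scratch collections of the map-reduce launched at "Starting MapReduce". A hedged reconstruction of that call; the map and reduce bodies here are placeholders, only the sharded output target test.mrShardedOut comes from the log:

    // Sketch only: the emit/sum pair is assumed, not the test's actual functions.
    db.foo.mapReduce(
        function () { emit(this._id, 1); },                    // map (assumed)
        function (key, values) { return Array.sum(values); },  // reduce (assumed)
        { out: { replace: "mrShardedOut", sharded: true } }    // sharded output, as logged
    );
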
m30001| Mon Dec 17 15:32:03.539 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:03-55", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776323539), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, step1 of 6: 0, step2 of 6: 19, step3 of 6: 2, step4 of 6: 55, step5 of 6: 18, step6 of 6: 0 } }
m30001| Mon Dec 17 15:32:03.539 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 0.3993422724306583 }, max: { a: 10.46284288167953 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_0.3993422724306583", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 4 locks(micros) W:49 r:1437 w:47 reslen:37 102ms
m30999| Mon Dec 17 15:32:03.539 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:32:03.540 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 48 version: 3|1||50cf812d5ec0810ee359b569 based on: 2|19||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:32:03.541 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab497c') }max: { _id: ObjectId('50cf812d256383d556ab4b4a') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:03.541 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab497c') }, max: { _id: ObjectId('50cf812d256383d556ab4b4a') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab497c')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:03.545 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8143c94e4981dc6c1b20
m30001| Mon Dec 17 15:32:03.545 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:03-56", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776323545), what: "moveChunk.start", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab497c') }, max: { _id: ObjectId('50cf812d256383d556ab4b4a') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:03.546 [conn8] moveChunk request accepted at version 2|1||50cf81365ec0810ee359b56b
m30001| Mon Dec 17 15:32:03.546 [conn8] moveChunk number of documents: 462
m30001| Mon Dec 17 15:32:03.553 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab497c') }, max: { _id: ObjectId('50cf812d256383d556ab4b4a') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:03.554 [cleanupOldData-50cf8143c94e4981dc6c1b1f] (looping 1) waiting to cleanup test.foo from { a: 0.3993422724306583 } -> { a: 10.46284288167953 } # cursors:2
m30001| Mon Dec 17 15:32:03.554 [cleanupOldData-50cf8143c94e4981dc6c1b1f] cursors: 69090575698978 69090894570165
m30001| Mon Dec 17 15:32:03.558 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab497c') }, max: { _id: ObjectId('50cf812d256383d556ab4b4a') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 18, clonedBytes: 19458, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:03.574 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab497c') }, max: { _id: ObjectId('50cf812d256383d556ab4b4a') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 431, clonedBytes: 465911, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Mon Dec 17 15:32:03.579 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:03.579 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab497c') } -> { _id: ObjectId('50cf812d256383d556ab4b4a') }
m30001| Mon Dec 17 15:32:03.582 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab497c') }, max: { _id: ObjectId('50cf812d256383d556ab4b4a') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:03.586 [conn8] moveChunk setting version to: 3|0||50cf81365ec0810ee359b56b
m30000| Mon Dec 17 15:32:03.586 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:03.590 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab497c') } -> { _id: ObjectId('50cf812d256383d556ab4b4a') }
m30000| Mon Dec 17 15:32:03.590 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:03-3", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776323590), what: "moveChunk.to", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab497c') }, max: { _id: ObjectId('50cf812d256383d556ab4b4a') }, step1 of 5: 3, step2 of 5: 0, step3 of 5: 25, step4 of 5: 0, step5 of 5: 11 } }
m30001| Mon Dec 17 15:32:03.594 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab497c') }, max: { _id: ObjectId('50cf812d256383d556ab4b4a') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:03.594 [conn8] moveChunk updating self version to: 3|1||50cf81365ec0810ee359b56b through { _id: ObjectId('50cf812d256383d556ab4b4a') } -> { _id: ObjectId('50cf812d256383d556ab4d18') } for collection 'test.mrShardedOut'
m30001| Mon Dec 17 15:32:03.595 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:03-57", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776323594), what: "moveChunk.commit", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab497c') }, max: { _id: ObjectId('50cf812d256383d556ab4b4a') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:03.595 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:03.595 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:03.595 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:03.595 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:03.595 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:03.595 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:32:03.595 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:03-58", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776323595), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab497c') }, max: { _id: ObjectId('50cf812d256383d556ab4b4a') }, step1 of 6: 0, step2 of 6: 4, step3 of 6: 0, step4 of 6: 35, step5 of 6: 12, step6 of 6: 0 } }
m30999| Mon Dec 17 15:32:03.595 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:32:03.596 [Balancer] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 49 version: 3|1||50cf81365ec0810ee359b56b based on: 2|1||50cf81365ec0810ee359b56b
m30999| Mon Dec 17 15:32:03.596 [Balancer] *** end of balancing round
m30001| Mon Dec 17 15:32:03.595 [cleanupOldData-50cf8143c94e4981dc6c1b21] (start) waiting to cleanup test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab497c') } -> { _id: ObjectId('50cf812d256383d556ab4b4a') }, # cursors remaining: 0
m30999| Mon Dec 17 15:32:03.598 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked.
m30001| Mon Dec 17 15:32:03.618 [cleanupOldData-50cf8143c94e4981dc6c1b21] waiting to remove documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab497c') } -> { _id: ObjectId('50cf812d256383d556ab4b4a') }
m30001| Mon Dec 17 15:32:03.618 [cleanupOldData-50cf8143c94e4981dc6c1b21] moveChunk starting delete for: test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab497c') } -> { _id: ObjectId('50cf812d256383d556ab4b4a') }
m30999| Mon Dec 17 15:32:04.622 [Balancer] Refreshing MaxChunkSize: 1
m30999| Mon Dec 17 15:32:04.622 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 )
m30999| Mon Dec 17 15:32:04.622 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383",
m30999| "when" : { "$date" : "Mon Dec 17 15:32:04 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "50cf81445ec0810ee359b570" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "50cf81435ec0810ee359b56f" } }
m30999| Mon Dec 17 15:32:04.623 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81445ec0810ee359b570
m30999| Mon Dec 17 15:32:04.623 [Balancer] *** start balancing round
m30999| Mon Dec 17 15:32:04.658 [Balancer] shard0001 has more chunks me:41 best: shard0000:2
m30999| Mon Dec 17 15:32:04.658 [Balancer] collection : test.foo
m30999| Mon Dec 17 15:32:04.658 [Balancer] donor : shard0001 chunks on 41
m30999| Mon Dec 17 15:32:04.658 [Balancer] receiver : shard0000 chunks on 2
m30999| Mon Dec 17 15:32:04.658 [Balancer] threshold : 2
m30999| Mon Dec 17 15:32:04.658 [Balancer] ns: test.foo going to move { _id: "test.foo-a_10.46284288167953", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:04.659 [Balancer] shard0001 has more chunks me:63 best: shard0000:2
m30999| Mon Dec 17 15:32:04.659 [Balancer] collection : test.mrShardedOut
m30999| Mon Dec 17 15:32:04.659 [Balancer] donor : shard0001 chunks on 63
m30999| Mon Dec 17 15:32:04.659 [Balancer] receiver : shard0000 chunks on 2
m30999| Mon Dec 17 15:32:04.659 [Balancer] threshold : 2
m30999| Mon Dec 17 15:32:04.659 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab4b4a')", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:04.659 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 3|1||000000000000000000000000min: { a: 10.46284288167953 }max: { a: 21.16596954874694 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:04.659 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_10.46284288167953", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:04.660 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8144c94e4981dc6c1b22
m30001| Mon Dec 17 15:32:04.660 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:04-59", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776324660), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:04.661 [conn8] moveChunk request accepted at version 3|1||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:32:04.665 [conn8] moveChunk number of documents: 606
m30001| Mon Dec 17 15:32:04.673 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:04.686 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 124, clonedBytes: 133424, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:04.704 [cleanupOldData-50cf8143c94e4981dc6c1b21] moveChunk deleted 462 documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab497c') } -> { _id: ObjectId('50cf812d256383d556ab4b4a') }
m30001| Mon Dec 17 15:32:04.704 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 495, clonedBytes: 532620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Mon Dec 17 15:32:04.713 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:04.713 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 10.46284288167953 } -> { a: 21.16596954874694 }
m30001| Mon Dec 17 15:32:04.743 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 606, clonedBytes: 652056, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:04.746 [conn8] moveChunk setting version to: 4|0||50cf812d5ec0810ee359b569
m30000| Mon Dec 17 15:32:04.746 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:04.754 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 10.46284288167953 } -> { a: 21.16596954874694 }
m30000| Mon Dec 17 15:32:04.754 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:04.754 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:04-4", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776324754), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 43, step4 of 5: 0, step5 of 5: 41 } }
m30001| Mon Dec 17 15:32:04.764 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 606, clonedBytes: 652056, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:04.764 [conn8] moveChunk updating self version to: 4|1||50cf812d5ec0810ee359b569 through { a: 21.16596954874694 } -> { a: 40.64535931684077 } for collection 'test.foo'
m30001| Mon Dec 17 15:32:04.765 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:04-60", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776324765), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:04.765 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:04.765 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:04.765 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:04.765 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:04.765 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:04.765 [cleanupOldData-50cf8144c94e4981dc6c1b23] (start) waiting to cleanup test.foo from { a: 10.46284288167953 } -> { a: 21.16596954874694 }, # cursors remaining: 2
m30001| Mon Dec 17 15:32:04.766 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
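Each balancer round above reduces to one moveChunk command per imbalanced collection, which mongos forwards to the donor shard as the "received moveChunk request" entries show. A minimal sketch of issuing the same migration by hand through this test's mongos; the bounds and shard names are taken from the log:

    // Ask mongos to move the test.foo chunk containing this key to shard0000.
    db.getSiblingDB("admin").runCommand({
        moveChunk: "test.foo",
        find: { a: 10.46284288167953 },   // any key inside the chunk's range
        to: "shard0000",
        _secondaryThrottle: false         // matches secondaryThrottle: false above
    });
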
m30001| Mon Dec 17 15:32:04.766 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:04-61", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776324766), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 3, step4 of 6: 78, step5 of 6: 22, step6 of 6: 0 } }
m30001| Mon Dec 17 15:32:04.766 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 10.46284288167953 }, max: { a: 21.16596954874694 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_10.46284288167953", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:752 w:39 reslen:37 106ms
m30001| Mon Dec 17 15:32:04.767 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab4b4a')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:04.768 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8144c94e4981dc6c1b24
m30001| Mon Dec 17 15:32:04.768 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:04-62", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776324768), what: "moveChunk.start", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:04.769 [conn8] moveChunk request accepted at version 3|1||50cf81365ec0810ee359b56b
m30999| Mon Dec 17 15:32:04.766 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:32:04.767 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 50 version: 4|1||50cf812d5ec0810ee359b569 based on: 3|1||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:32:04.767 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 3|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab4b4a') }max: { _id: ObjectId('50cf812d256383d556ab4d18') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:04.769 [conn8] moveChunk number of documents: 462
m30001| Mon Dec 17 15:32:04.774 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:04.784 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:04.794 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 143, clonedBytes: 154583, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:04.794 [cleanupOldData-50cf8144c94e4981dc6c1b23] (looping 1) waiting to cleanup test.foo from { a: 10.46284288167953 } -> { a: 21.16596954874694 } # cursors:2
m30001| Mon Dec 17 15:32:04.794 [cleanupOldData-50cf8144c94e4981dc6c1b23] cursors: 69090575698978 69090894570165
m30001| Mon Dec 17 15:32:04.814 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 374, clonedBytes: 404294, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Mon Dec 17 15:32:04.824 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:04.824 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab4b4a') } -> { _id: ObjectId('50cf812d256383d556ab4d18') }
m30001| Mon Dec 17 15:32:04.834 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:04.834 [conn8] moveChunk setting version to: 4|0||50cf81365ec0810ee359b56b
m30000| Mon Dec 17 15:32:04.834 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:04.838 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab4b4a') } -> { _id: ObjectId('50cf812d256383d556ab4d18') }
m30000| Mon Dec 17 15:32:04.838 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:04-5", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776324838), what: "moveChunk.to", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 54, step4 of 5: 0, step5 of 5: 13 } }
m30001| Mon Dec 17 15:32:04.844 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:04.844 [conn8] moveChunk updating self version to: 4|1||50cf81365ec0810ee359b56b through { _id: ObjectId('50cf812d256383d556ab4d18') } -> { _id: ObjectId('50cf812d256383d556ab4ee6') } for collection 'test.mrShardedOut'
m30001| Mon Dec 17 15:32:04.845 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:04-63", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776324845), what: "moveChunk.commit", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:04.845 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:04.847 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:04.847 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:04.847 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:04.847 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:04.847 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:32:04.847 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:04-64", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776324847), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4b4a') }, max: { _id: ObjectId('50cf812d256383d556ab4d18') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 64, step5 of 6: 12, step6 of 6: 0 } }
m30001| Mon Dec 17 15:32:04.849 [cleanupOldData-50cf8144c94e4981dc6c1b25] (start) waiting to cleanup test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4b4a') } -> { _id: ObjectId('50cf812d256383d556ab4d18') }, # cursors remaining: 0
m30999| Mon Dec 17 15:32:04.848 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:32:04.849 [Balancer] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 51 version: 4|1||50cf81365ec0810ee359b56b based on: 3|1||50cf81365ec0810ee359b56b
m30999| Mon Dec 17 15:32:04.849 [Balancer] *** end of balancing round
m30999| Mon Dec 17 15:32:04.849 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked.
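The "shard0001 has more chunks me:N best: shard0000:M" lines are the balancer's per-collection chunk census, recomputed every round. The same counts can be read from the config database; a small sketch, assuming a shell connected to this test's mongos:

    // Tally chunks per shard for one namespace, mirroring the balancer's
    // donor/receiver numbers printed in the rounds above.
    var counts = {};
    db.getSiblingDB("config").chunks.find({ ns: "test.mrShardedOut" }).forEach(
        function (c) { counts[c.shard] = (counts[c.shard] || 0) + 1; }
    );
    printjson(counts);  // e.g. { "shard0000" : 3, "shard0001" : 62 }
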
m30001| Mon Dec 17 15:32:04.874 [cleanupOldData-50cf8144c94e4981dc6c1b25] waiting to remove documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4b4a') } -> { _id: ObjectId('50cf812d256383d556ab4d18') }
m30001| Mon Dec 17 15:32:04.874 [cleanupOldData-50cf8144c94e4981dc6c1b25] moveChunk starting delete for: test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4b4a') } -> { _id: ObjectId('50cf812d256383d556ab4d18') }
m30001| Mon Dec 17 15:32:04.995 [cleanupOldData-50cf8144c94e4981dc6c1b25] moveChunk deleted 462 documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4b4a') } -> { _id: ObjectId('50cf812d256383d556ab4d18') }
m30001| Mon Dec 17 15:32:05.005 [conn3] 22100/59980 36%
m30999| Mon Dec 17 15:32:05.850 [Balancer] Refreshing MaxChunkSize: 1
m30999| Mon Dec 17 15:32:05.851 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 )
m30999| Mon Dec 17 15:32:05.851 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383",
m30999| "when" : { "$date" : "Mon Dec 17 15:32:05 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "50cf81455ec0810ee359b571" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "50cf81445ec0810ee359b570" } }
m30999| Mon Dec 17 15:32:05.852 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81455ec0810ee359b571
m30999| Mon Dec 17 15:32:05.852 [Balancer] *** start balancing round
m30999| Mon Dec 17 15:32:05.855 [Balancer] shard0001 has more chunks me:40 best: shard0000:3
m30999| Mon Dec 17 15:32:05.855 [Balancer] collection : test.foo
m30999| Mon Dec 17 15:32:05.855 [Balancer] donor : shard0001 chunks on 40
m30999| Mon Dec 17 15:32:05.855 [Balancer] receiver : shard0000 chunks on 3
m30999| Mon Dec 17 15:32:05.855 [Balancer] threshold : 2
m30999| Mon Dec 17 15:32:05.855 [Balancer] ns: test.foo going to move { _id: "test.foo-a_21.16596954874694", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:05.855 [Balancer] shard0001 has more chunks me:62 best: shard0000:3
m30999| Mon Dec 17 15:32:05.855 [Balancer] collection : test.mrShardedOut
m30999| Mon Dec 17 15:32:05.855 [Balancer] donor : shard0001 chunks on 62
m30999| Mon Dec 17 15:32:05.855 [Balancer] receiver : shard0000 chunks on 3
m30999| Mon Dec 17 15:32:05.855 [Balancer] threshold : 2
m30999| Mon Dec 17 15:32:05.855 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab4d18')", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab4d18') }, max: { _id: ObjectId('50cf812d256383d556ab4ee6') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:05.855 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { a: 21.16596954874694 }max: { a: 40.64535931684077 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:05.856 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_21.16596954874694", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:05.856 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8145c94e4981dc6c1b26
m30001| Mon Dec 17 15:32:05.856 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:05-65", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776325856), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:05.857 [conn8] moveChunk request accepted at version 4|1||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:32:05.859 [conn8] moveChunk number of documents: 1141
m30001| Mon Dec 17 15:32:05.869 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:05.884 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 211, clonedBytes: 227036, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:05.907 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 397, clonedBytes: 427172, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:05.924 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 733, clonedBytes: 788708, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:05.944 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 1089, clonedBytes: 1171764, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Mon Dec 17 15:32:05.947 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:05.947 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 21.16596954874694 } -> { a: 40.64535931684077 }
m30001| Mon Dec 17 15:32:05.978 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 1141, clonedBytes: 1227716, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:05.982 [conn8] moveChunk setting version to: 5|0||50cf812d5ec0810ee359b569
m30000| Mon Dec 17 15:32:05.982 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:05.983 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 21.16596954874694 } -> { a: 40.64535931684077 }
m30000| Mon Dec 17 15:32:05.983 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:05-6", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776325983), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 86, step4 of 5: 0, step5 of 5: 35 } }
m30001| Mon Dec 17 15:32:05.986 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 1141, clonedBytes: 1227716, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:05.986 [conn8] moveChunk updating self version to: 5|1||50cf812d5ec0810ee359b569 through { a: 40.64535931684077 } -> { a: 62.87552835419774 } for collection 'test.foo'
m30001| Mon Dec 17 15:32:05.987 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:05-66", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776325987), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:05.987 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:05.995 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:05.995 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:05.995 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:05.995 [cleanupOldData-50cf8145c94e4981dc6c1b27] (start) waiting to cleanup test.foo from { a: 21.16596954874694 } -> { a: 40.64535931684077 }, # cursors remaining: 2
m30999| Mon Dec 17 15:32:06.001 [Balancer] moveChunk result: { ok: 1.0 }
m30001| Mon Dec 17 15:32:06.001 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:06.001 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
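The 1 MB limit behind "Refreshing MaxChunkSize: 1" and every maxChunkSizeBytes: 1048576 above is this test's deliberately tiny chunk size, stored in config.settings. A sketch of inspecting it, along with the balancer switch that drives these rounds, assuming the standard sh.* shell helpers:

    // Chunk size (in MB) that autosplit and the balancer enforce.
    printjson(db.getSiblingDB("config").settings.findOne({ _id: "chunksize" }));
    // Balancer state; sh.stopBalancer()/sh.startBalancer() flip it.
    print(sh.getBalancerState());
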
m30001| Mon Dec 17 15:32:06.001 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:06-67", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776326001), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 118, step5 of 6: 16, step6 of 6: 0 } }
m30001| Mon Dec 17 15:32:06.001 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 21.16596954874694 }, max: { a: 40.64535931684077 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_21.16596954874694", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 8 locks(micros) W:38 r:2833 w:38 reslen:37 145ms
m30001| Mon Dec 17 15:32:06.003 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab4d18') }, max: { _id: ObjectId('50cf812d256383d556ab4ee6') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab4d18')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:06.003 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8146c94e4981dc6c1b28
m30001| Mon Dec 17 15:32:06.004 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:06-68", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776326003), what: "moveChunk.start", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4d18') }, max: { _id: ObjectId('50cf812d256383d556ab4ee6') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:06.004 [conn8] moveChunk request accepted at version 4|1||50cf81365ec0810ee359b56b
m30001| Mon Dec 17 15:32:06.005 [conn8] moveChunk number of documents: 462
m30999| Mon Dec 17 15:32:06.002 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 52 version: 5|1||50cf812d5ec0810ee359b569 based on: 4|1||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:32:06.002 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab4d18') }max: { _id: ObjectId('50cf812d256383d556ab4ee6') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:06.014 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4d18') }, max: { _id: ObjectId('50cf812d256383d556ab4ee6') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 95, clonedBytes: 102695, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:06.024 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4d18') }, max: { _id: ObjectId('50cf812d256383d556ab4ee6') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 364, clonedBytes: 393484, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:06.028 [cleanupOldData-50cf8145c94e4981dc6c1b27] (looping 1) waiting to cleanup test.foo from { a: 21.16596954874694 } -> { a: 40.64535931684077 } # cursors:2
m30001| Mon Dec 17 15:32:06.028 [cleanupOldData-50cf8145c94e4981dc6c1b27] cursors: 69090575698978 69090894570165
m30001| Mon Dec 17 15:32:06.030 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4d18') }, max: { _id: ObjectId('50cf812d256383d556ab4ee6') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:06.030 [conn8] moveChunk setting version to: 5|0||50cf81365ec0810ee359b56b
m30000| Mon Dec 17 15:32:06.028 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:06.028 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab4d18') } -> { _id: ObjectId('50cf812d256383d556ab4ee6') }
m30000| Mon Dec 17 15:32:06.030 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:06.034 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:06.038 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:06.042 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:06.042 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab4d18') } -> { _id: ObjectId('50cf812d256383d556ab4ee6') }
m30000| Mon Dec 17 15:32:06.042 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:06-7", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776326042), what: "moveChunk.to", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4d18') }, max: { _id: ObjectId('50cf812d256383d556ab4ee6') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 22, step4 of 5: 0, step5 of 5: 14 } }
m30001| Mon Dec 17 15:32:06.046 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4d18') }, max: { _id: ObjectId('50cf812d256383d556ab4ee6') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:06.046 [conn8] moveChunk updating self version to: 5|1||50cf81365ec0810ee359b56b through { _id: ObjectId('50cf812d256383d556ab4ee6') } -> { _id: ObjectId('50cf812d256383d556ab50b4') } for collection 'test.mrShardedOut'
m30001| Mon Dec 17 15:32:06.047 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:06-69", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776326047), what: "moveChunk.commit", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4d18') }, max: { _id: ObjectId('50cf812d256383d556ab4ee6') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:06.047 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:06.054 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:06.054 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:06.054 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:06.054 [cleanupOldData-50cf8146c94e4981dc6c1b29] (start) waiting to cleanup test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4d18') } -> { _id: ObjectId('50cf812d256383d556ab4ee6') }, # cursors remaining: 0
m30999| Mon Dec 17 15:32:06.060 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:32:06.061 [Balancer] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 53 version: 5|1||50cf81365ec0810ee359b56b based on: 4|1||50cf81365ec0810ee359b56b
m30001| Mon Dec 17 15:32:06.060 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:06.060 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:32:06.060 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:06-70", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776326060), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4d18') }, max: { _id: ObjectId('50cf812d256383d556ab4ee6') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 25, step5 of 6: 23, step6 of 6: 0 } }
m30999| Mon Dec 17 15:32:06.062 [Balancer] *** end of balancing round
m30999| Mon Dec 17 15:32:06.062 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked.
m30001| Mon Dec 17 15:32:06.078 [cleanupOldData-50cf8146c94e4981dc6c1b29] waiting to remove documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4d18') } -> { _id: ObjectId('50cf812d256383d556ab4ee6') }
m30001| Mon Dec 17 15:32:06.078 [cleanupOldData-50cf8146c94e4981dc6c1b29] moveChunk starting delete for: test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4d18') } -> { _id: ObjectId('50cf812d256383d556ab4ee6') }
m30001| Mon Dec 17 15:32:06.443 [cleanupOldData-50cf8146c94e4981dc6c1b29] moveChunk deleted 462 documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4d18') } -> { _id: ObjectId('50cf812d256383d556ab4ee6') }
m30999| Mon Dec 17 15:32:07.062 [Balancer] Refreshing MaxChunkSize: 1
m30999| Mon Dec 17 15:32:07.063 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 )
m30999| Mon Dec 17 15:32:07.063 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383",
m30999| "when" : { "$date" : "Mon Dec 17 15:32:07 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "50cf81475ec0810ee359b572" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "50cf81455ec0810ee359b571" } }
m30999| Mon Dec 17 15:32:07.064 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81475ec0810ee359b572
m30999| Mon Dec 17 15:32:07.064 [Balancer] *** start balancing round
m30999| Mon Dec 17 15:32:07.067 [Balancer] shard0001 has more chunks me:39 best: shard0000:4
m30999| Mon Dec 17 15:32:07.067 [Balancer] collection : test.foo
m30999| Mon Dec 17 15:32:07.067 [Balancer] donor : shard0001 chunks on 39
m30999| Mon Dec 17 15:32:07.067 [Balancer] receiver : shard0000 chunks on 4
m30001| Mon Dec 17 15:32:07.068 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard:
"shard0001", toShard: "shard0000", min: { a: 40.64535931684077 }, max: { a: 62.87552835419774 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_40.64535931684077", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:07.069 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8147c94e4981dc6c1b2a m30001| Mon Dec 17 15:32:07.069 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:07-71", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776327069), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 40.64535931684077 }, max: { a: 62.87552835419774 }, from: "shard0001", to: "shard0000" } } m30999| Mon Dec 17 15:32:07.067 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:07.067 [Balancer] ns: test.foo going to move { _id: "test.foo-a_40.64535931684077", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 40.64535931684077 }, max: { a: 62.87552835419774 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:07.068 [Balancer] shard0001 has more chunks me:61 best: shard0000:4 m30999| Mon Dec 17 15:32:07.068 [Balancer] collection : test.mrShardedOut m30999| Mon Dec 17 15:32:07.068 [Balancer] donor : shard0001 chunks on 61 m30999| Mon Dec 17 15:32:07.068 [Balancer] receiver : shard0000 chunks on 4 m30999| Mon Dec 17 15:32:07.068 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:07.068 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab4ee6')", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab4ee6') }, max: { _id: ObjectId('50cf812d256383d556ab50b4') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:07.068 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { a: 40.64535931684077 }max: { a: 62.87552835419774 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30999| Mon Dec 17 15:32:07.079 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 1584720, errmsg: "chunk too big to move", ok: 0.0 } m30999| Mon Dec 17 15:32:07.079 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 1584720, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { a: 40.64535931684077 } max: { a: 40.64535931684077 } m30999| Mon Dec 17 15:32:07.079 [Balancer] forcing a split because migrate failed for size reasons m30001| Mon Dec 17 15:32:07.070 [conn8] moveChunk request accepted at version 5|1||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:07.073 [conn8] can't move chunk of size (approximately) 1584720 because maximum size allowed to move is 1048576 ns: test.foo { a: 40.64535931684077 } -> { a: 62.87552835419774 } m30001| Mon Dec 17 15:32:07.073 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:07.078 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:07.079 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
m30001| Mon Dec 17 15:32:07.079 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:07-72", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776327079), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 40.64535931684077 }, max: { a: 62.87552835419774 }, step1 of 6: 0, step2 of 6: 1, note: "aborted" } } m30001| Mon Dec 17 15:32:07.079 [conn8] request split points lookup for chunk test.foo { : 40.64535931684077 } -->> { : 62.87552835419774 } m30001| Mon Dec 17 15:32:07.080 [conn8] splitVector doing another cycle because of force, keyCount now: 697 m30001| Mon Dec 17 15:32:07.083 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 40.64535931684077 }, max: { a: 62.87552835419774 }, from: "shard0001", splitKeys: [ { a: 51.38014652766287 } ], shardId: "test.foo-a_40.64535931684077", configdb: "localhost:30000" } m30999| Mon Dec 17 15:32:07.091 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 54 version: 5|3||50cf812d5ec0810ee359b569 based on: 5|1||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:07.091 [Balancer] forced split results: { ok: 1.0 } m30999| Mon Dec 17 15:32:07.091 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab4ee6') }max: { _id: ObjectId('50cf812d256383d556ab50b4') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:07.088 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8147c94e4981dc6c1b2b m30001| Mon Dec 17 15:32:07.089 [conn8] splitChunk accepted at version 5|1||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:07.089 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:07-73", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776327089), what: "split", ns: "test.foo", details: { before: { min: { a: 40.64535931684077 }, max: { a: 62.87552835419774 }, lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, lastmod: Timestamp 5000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:32:07.090 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
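The "request split points lookup" and "splitVector doing another cycle because of force, keyCount now: 697" lines come from the internal splitVector command, which walks the shard-key index to choose split points; with force set it returns the midpoint even though the chunk is under the split threshold. A sketch of the underlying call, run directly against the donor mongod on port 30001 (splitVector is internal and undocumented, so its exact shape is an assumption and may vary by version):

  db.getSiblingDB("admin").runCommand({
      splitVector: "test.foo",
      keyPattern: { a: 1.0 },
      min: { a: 40.64535931684077 },
      max: { a: 62.87552835419774 },
      maxChunkSizeBytes: 1048576,
      force: true   // mirrors the forced split in the log
  });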
m30001| Mon Dec 17 15:32:07.091 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab4ee6') }, max: { _id: ObjectId('50cf812d256383d556ab50b4') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab4ee6')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:07.092 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8147c94e4981dc6c1b2c m30001| Mon Dec 17 15:32:07.092 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:07-74", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776327092), what: "moveChunk.start", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4ee6') }, max: { _id: ObjectId('50cf812d256383d556ab50b4') }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:07.093 [conn8] moveChunk request accepted at version 5|1||50cf81365ec0810ee359b56b m30001| Mon Dec 17 15:32:07.093 [conn8] moveChunk number of documents: 462 m30001| Mon Dec 17 15:32:07.103 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4ee6') }, max: { _id: ObjectId('50cf812d256383d556ab50b4') }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:07.106 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4ee6') }, max: { _id: ObjectId('50cf812d256383d556ab50b4') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:07.123 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4ee6') }, max: { _id: ObjectId('50cf812d256383d556ab50b4') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 313, clonedBytes: 338353, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Mon Dec 17 15:32:07.129 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Mon Dec 17 15:32:07.129 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab4ee6') } -> { _id: ObjectId('50cf812d256383d556ab50b4') } m30001| Mon Dec 17 15:32:07.134 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4ee6') }, max: { _id: ObjectId('50cf812d256383d556ab50b4') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:07.147 [conn8] moveChunk setting version to: 6|0||50cf81365ec0810ee359b56b m30000| Mon Dec 17 15:32:07.147 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:07.147 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab4ee6') } -> { _id: ObjectId('50cf812d256383d556ab50b4') } m30000| Mon Dec 17 15:32:07.147 [migrateThread] about to log 
metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:07-8", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776327147), what: "moveChunk.to", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4ee6') }, max: { _id: ObjectId('50cf812d256383d556ab50b4') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 25, step4 of 5: 0, step5 of 5: 18 } } m30999| Mon Dec 17 15:32:07.151 [Balancer] moveChunk result: { ok: 1.0 } m30001| Mon Dec 17 15:32:07.150 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab4ee6') }, max: { _id: ObjectId('50cf812d256383d556ab50b4') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Mon Dec 17 15:32:07.150 [conn8] moveChunk updating self version to: 6|1||50cf81365ec0810ee359b56b through { _id: ObjectId('50cf812d256383d556ab50b4') } -> { _id: ObjectId('50cf812d256383d556ab5282') } for collection 'test.mrShardedOut' m30999| Mon Dec 17 15:32:07.152 [Balancer] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 55 version: 6|1||50cf81365ec0810ee359b56b based on: 5|1||50cf81365ec0810ee359b56b m30999| Mon Dec 17 15:32:07.153 [Balancer] *** end of balancing round m30999| Mon Dec 17 15:32:07.153 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. m30001| Mon Dec 17 15:32:07.151 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:07-75", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776327151), what: "moveChunk.commit", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4ee6') }, max: { _id: ObjectId('50cf812d256383d556ab50b4') }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:07.151 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:07.151 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:07.151 [conn8] forking for cleanup of chunk data m30001| Mon Dec 17 15:32:07.151 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:07.151 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:07.151 [cleanupOldData-50cf8147c94e4981dc6c1b2d] (start) waiting to cleanup test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4ee6') } -> { _id: ObjectId('50cf812d256383d556ab50b4') }, # cursors remaining: 0 m30001| Mon Dec 17 15:32:07.151 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
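Each successful move bumps the collection's chunk version, printed as major|minor||epoch: the migrated chunk takes the new major version on the recipient ("setting version to: 6|0||..."), the donor raises one of its remaining chunks to 6|1||... so it still holds the collection's highest version, and mongos reloads its cache ("sequenceNumber: 55 version: 6|1||..."). The same versions can be read straight off the config server; a sketch:

  var cfg = db.getSiblingDB("config");
  // lastmod is the major|minor pair, lastmodEpoch the collection epoch
  cfg.chunks.find({ ns: "test.mrShardedOut" },
                  { shard: 1, lastmod: 1, lastmodEpoch: 1 })
            .sort({ min: 1 })
            .forEach(printjson);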
m30001| Mon Dec 17 15:32:07.151 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:07-76", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776327151), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab4ee6') }, max: { _id: ObjectId('50cf812d256383d556ab50b4') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 40, step5 of 6: 16, step6 of 6: 0 } } m30001| Mon Dec 17 15:32:07.179 [cleanupOldData-50cf8147c94e4981dc6c1b2d] waiting to remove documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4ee6') } -> { _id: ObjectId('50cf812d256383d556ab50b4') } m30001| Mon Dec 17 15:32:07.179 [cleanupOldData-50cf8147c94e4981dc6c1b2d] moveChunk starting delete for: test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4ee6') } -> { _id: ObjectId('50cf812d256383d556ab50b4') } m30001| Mon Dec 17 15:32:07.203 [cleanupOldData-50cf8147c94e4981dc6c1b2d] moveChunk deleted 462 documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab4ee6') } -> { _id: ObjectId('50cf812d256383d556ab50b4') } m30001| Mon Dec 17 15:32:08.000 [conn3] 54900/59980 91% m30999| Mon Dec 17 15:32:08.155 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:32:08.155 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:32:08.155 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:32:08 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf81485ec0810ee359b573" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf81475ec0810ee359b572" } } m30999| Mon Dec 17 15:32:08.158 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81485ec0810ee359b573 m30999| Mon Dec 17 15:32:08.158 [Balancer] *** start balancing round m30999| Mon Dec 17 15:32:08.159 [Balancer] shard0001 has more chunks me:40 best: shard0000:4 m30999| Mon Dec 17 15:32:08.159 [Balancer] collection : test.foo m30999| Mon Dec 17 15:32:08.159 [Balancer] donor : shard0001 chunks on 40 m30999| Mon Dec 17 15:32:08.159 [Balancer] receiver : shard0000 chunks on 4 m30999| Mon Dec 17 15:32:08.159 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:08.159 [Balancer] ns: test.foo going to move { _id: "test.foo-a_40.64535931684077", lastmod: Timestamp 5000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:08.160 [Balancer] shard0001 has more chunks me:60 best: shard0000:5 m30999| Mon Dec 17 15:32:08.160 [Balancer] collection : test.mrShardedOut m30999| Mon Dec 17 15:32:08.160 [Balancer] donor : shard0001 chunks on 60 m30999| Mon Dec 17 15:32:08.160 [Balancer] receiver : shard0000 chunks on 5 m30999| Mon Dec 17 15:32:08.160 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:08.160 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab50b4')", lastmod: Timestamp 
6000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:08.160 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 5|2||000000000000000000000000min: { a: 40.64535931684077 }max: { a: 51.38014652766287 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:08.160 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_40.64535931684077", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:08.161 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8148c94e4981dc6c1b2e m30001| Mon Dec 17 15:32:08.161 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:08-77", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776328161), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:08.162 [conn8] moveChunk request accepted at version 5|3||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:08.163 [conn8] moveChunk number of documents: 697 m30001| Mon Dec 17 15:32:08.166 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:08.172 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 21, clonedBytes: 22596, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:08.182 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 218, clonedBytes: 234568, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:08.243 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 481, clonedBytes: 517556, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Mon Dec 17 15:32:08.255 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Mon Dec 17 15:32:08.255 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 40.64535931684077 } -> { a: 51.38014652766287 } m30001| Mon Dec 17 15:32:08.262 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 697, clonedBytes: 749972, catchup: 0, steady: 0 }, ok: 1.0 } my 
mem used: 0 m30001| Mon Dec 17 15:32:08.268 [conn8] moveChunk setting version to: 6|0||50cf812d5ec0810ee359b569 m30000| Mon Dec 17 15:32:08.268 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:08.268 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 40.64535931684077 } -> { a: 51.38014652766287 } m30000| Mon Dec 17 15:32:08.268 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:08-9", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776328268), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 91, step4 of 5: 0, step5 of 5: 13 } } m30999| Mon Dec 17 15:32:08.272 [Balancer] moveChunk result: { ok: 1.0 } m30999| Mon Dec 17 15:32:08.273 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 56 version: 6|1||50cf812d5ec0810ee359b569 based on: 5|3||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:08.273 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 6|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab50b4') }max: { _id: ObjectId('50cf812d256383d556ab5282') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:08.270 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 697, clonedBytes: 749972, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Mon Dec 17 15:32:08.270 [conn8] moveChunk updating self version to: 6|1||50cf812d5ec0810ee359b569 through { a: 51.38014652766287 } -> { a: 62.87552835419774 } for collection 'test.foo' m30001| Mon Dec 17 15:32:08.271 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:08-78", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776328271), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:08.271 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:08.271 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:08.271 [conn8] forking for cleanup of chunk data m30001| Mon Dec 17 15:32:08.271 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:08.271 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:08.271 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
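The balancer lines follow a fixed per-collection recipe: take the shard with the most chunks as donor, the shard with the fewest as receiver, and move one chunk per round for as long as the spread exceeds the threshold (2 here). With shard0001 holding 40 test.foo chunks to shard0000's 4, that guarantees one migration per collection every round, which is exactly the cadence visible in this log. The counts the balancer prints can be recomputed from config.chunks; a sketch:

  var cfg = db.getSiblingDB("config");
  ["shard0000", "shard0001"].forEach(function (s) {
      print(s + " test.foo chunks: " +
            cfg.chunks.count({ ns: "test.foo", shard: s }));
  });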
m30001| Mon Dec 17 15:32:08.271 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:08-79", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776328271), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 99, step5 of 6: 8, step6 of 6: 0 } } m30001| Mon Dec 17 15:32:08.271 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 40.64535931684077 }, max: { a: 51.38014652766287 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_40.64535931684077", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:959 w:42 reslen:37 111ms m30001| Mon Dec 17 15:32:08.273 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab50b4')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:08.274 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8148c94e4981dc6c1b2f m30001| Mon Dec 17 15:32:08.274 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:08-80", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776328274), what: "moveChunk.start", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:08.275 [conn8] moveChunk request accepted at version 6|1||50cf81365ec0810ee359b56b m30001| Mon Dec 17 15:32:08.275 [conn8] moveChunk number of documents: 462 m30001| Mon Dec 17 15:32:08.275 [cleanupOldData-50cf8148c94e4981dc6c1b30] (start) waiting to cleanup test.foo from { a: 40.64535931684077 } -> { a: 51.38014652766287 }, # cursors remaining: 2 m30001| Mon Dec 17 15:32:08.278 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:08.282 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 54, clonedBytes: 58374, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:08.302 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 54, clonedBytes: 58374, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:08.313 [conn8] moveChunk data transfer progress: { 
active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 302, clonedBytes: 326462, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:08.320 [cleanupOldData-50cf8148c94e4981dc6c1b30] (looping 1) waiting to cleanup test.foo from { a: 40.64535931684077 } -> { a: 51.38014652766287 } # cursors:2 m30001| Mon Dec 17 15:32:08.320 [cleanupOldData-50cf8148c94e4981dc6c1b30] cursors: 69090575698978 69090894570165 m30000| Mon Dec 17 15:32:08.304 [FileAllocator] allocating new datafile /data/db/mrShardedOutput0/test.2, filling with zeroes... m30000| Mon Dec 17 15:32:08.328 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Mon Dec 17 15:32:08.328 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab50b4') } -> { _id: ObjectId('50cf812d256383d556ab5282') } m30001| Mon Dec 17 15:32:08.330 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:08.346 [conn8] moveChunk setting version to: 7|0||50cf81365ec0810ee359b56b m30000| Mon Dec 17 15:32:08.346 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:08.352 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:08.352 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab50b4') } -> { _id: ObjectId('50cf812d256383d556ab5282') } m30000| Mon Dec 17 15:32:08.352 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:08-10", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776328352), what: "moveChunk.to", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 52, step4 of 5: 0, step5 of 5: 24 } } m30001| Mon Dec 17 15:32:08.362 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Mon Dec 17 15:32:08.362 [conn8] moveChunk updating self version to: 7|1||50cf81365ec0810ee359b56b through { _id: ObjectId('50cf812d256383d556ab5282') } -> { _id: ObjectId('50cf812d256383d556ab5450') } for collection 'test.mrShardedOut' m30001| Mon Dec 17 15:32:08.363 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:08-81", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776328363), what: "moveChunk.commit", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:08.363 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Mon 
Dec 17 15:32:08.367 [Balancer] moveChunk result: { ok: 1.0 } m30999| Mon Dec 17 15:32:08.369 [Balancer] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 57 version: 7|1||50cf81365ec0810ee359b56b based on: 6|1||50cf81365ec0810ee359b56b m30999| Mon Dec 17 15:32:08.369 [Balancer] *** end of balancing round m30001| Mon Dec 17 15:32:08.366 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:08.366 [conn8] forking for cleanup of chunk data m30001| Mon Dec 17 15:32:08.367 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:08.367 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:08.367 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:32:08.367 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:08-82", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776328367), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab50b4') }, max: { _id: ObjectId('50cf812d256383d556ab5282') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 54, step5 of 6: 36, step6 of 6: 0 } } m30999| Mon Dec 17 15:32:08.369 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. m30001| Mon Dec 17 15:32:08.369 [cleanupOldData-50cf8148c94e4981dc6c1b31] (start) waiting to cleanup test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab50b4') } -> { _id: ObjectId('50cf812d256383d556ab5282') }, # cursors remaining: 0 m30001| Mon Dec 17 15:32:08.392 [cleanupOldData-50cf8148c94e4981dc6c1b31] waiting to remove documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab50b4') } -> { _id: ObjectId('50cf812d256383d556ab5282') } m30001| Mon Dec 17 15:32:08.392 [cleanupOldData-50cf8148c94e4981dc6c1b31] moveChunk starting delete for: test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab50b4') } -> { _id: ObjectId('50cf812d256383d556ab5282') } m30001| Mon Dec 17 15:32:08.431 [cleanupOldData-50cf8148c94e4981dc6c1b31] moveChunk deleted 462 documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab50b4') } -> { _id: ObjectId('50cf812d256383d556ab5282') } m30001| Mon Dec 17 15:32:08.772 [cleanupOldData-50cf8143c94e4981dc6c1b1f] (looping 201) waiting to cleanup test.foo from { a: 0.3993422724306583 } -> { a: 10.46284288167953 } # cursors:1 m30001| Mon Dec 17 15:32:08.772 [cleanupOldData-50cf8143c94e4981dc6c1b1f] cursors: 69090894570165 m30001| Mon Dec 17 15:32:09.034 [FileAllocator] allocating new datafile /data/db/mrShardedOutput1/test.5, filling with zeroes... 
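Because every moveChunk request here carries waitForDelete: false, the donor forks range deletion off to cleanupOldData threads rather than blocking the migration. A delete cannot begin while open cursors might still read the range, hence "(looping 201) waiting to cleanup test.foo ... # cursors:1": that cursor id belongs to a long-running read (presumably the concurrent mapReduce), so the test.foo range deletes sit queued behind it, while the test.mrShardedOut ranges, with no cursors on them, are deleted within milliseconds of the move. When deterministic cleanup matters, the admin command accepts a wait flag; a hedged sketch (the user-facing spelling _waitForDelete matches the internal waitForDelete field shown above, but treat the option name as an assumption for this 2.3.2-pre build):

  // ask the donor to finish deleting the old range before returning (assumed option)
  db.adminCommand({
      moveChunk: "test.foo",
      find: { a: 45 },        // any key inside the chunk to be moved
      to: "shard0000",
      _waitForDelete: true
  });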
m30999| Mon Dec 17 15:32:09.372 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:32:09.372 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:32:09.373 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:32:09 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf81495ec0810ee359b574" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf81485ec0810ee359b573" } } m30999| Mon Dec 17 15:32:09.377 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81495ec0810ee359b574 m30999| Mon Dec 17 15:32:09.377 [Balancer] *** start balancing round m30999| Mon Dec 17 15:32:09.378 [Balancer] shard0001 has more chunks me:39 best: shard0000:5 m30999| Mon Dec 17 15:32:09.378 [Balancer] collection : test.foo m30999| Mon Dec 17 15:32:09.378 [Balancer] donor : shard0001 chunks on 39 m30999| Mon Dec 17 15:32:09.381 [Balancer] receiver : shard0000 chunks on 5 m30999| Mon Dec 17 15:32:09.381 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:09.381 [Balancer] ns: test.foo going to move { _id: "test.foo-a_51.38014652766287", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:09.382 [Balancer] shard0001 has more chunks me:59 best: shard0000:6 m30999| Mon Dec 17 15:32:09.382 [Balancer] collection : test.mrShardedOut m30999| Mon Dec 17 15:32:09.382 [Balancer] donor : shard0001 chunks on 59 m30999| Mon Dec 17 15:32:09.382 [Balancer] receiver : shard0000 chunks on 6 m30999| Mon Dec 17 15:32:09.382 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:09.385 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab5282')", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:09.385 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 6|1||000000000000000000000000min: { a: 51.38014652766287 }max: { a: 62.87552835419774 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:09.386 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_51.38014652766287", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:09.386 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8149c94e4981dc6c1b32 m30001| Mon Dec 17 15:32:09.386 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:09-83", server: 
"domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776329386), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:09.387 [conn8] moveChunk request accepted at version 6|1||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:09.388 [conn8] moveChunk number of documents: 698 m30001| Mon Dec 17 15:32:09.392 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:09.402 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:09.435 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:09.452 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 290, clonedBytes: 312040, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Mon Dec 17 15:32:09.473 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Mon Dec 17 15:32:09.473 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 51.38014652766287 } -> { a: 62.87552835419774 } m30001| Mon Dec 17 15:32:09.471 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 674, clonedBytes: 725224, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:09.506 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 698, clonedBytes: 751048, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30999| Mon Dec 17 15:32:10.067 [LockPinger] cluster localhost:30000 pinged successfully at Mon Dec 17 15:32:10 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1355776300:1804289383', sleeping for 30000ms m30001| Mon Dec 17 15:32:10.086 [cleanupOldData-50cf8144c94e4981dc6c1b23] (looping 201) waiting to cleanup test.foo from { a: 10.46284288167953 } -> { a: 21.16596954874694 } # cursors:1 m30001| Mon Dec 17 15:32:10.086 [cleanupOldData-50cf8144c94e4981dc6c1b23] cursors: 69090894570165 m30001| Mon Dec 17 15:32:10.460 [conn8] moveChunk setting version to: 7|0||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:10.460 [conn5] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:18 reslen:51 973ms m30000| Mon Dec 17 15:32:10.460 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.462 
[conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.466 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.470 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.474 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.478 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.482 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.486 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.490 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.494 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.498 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.502 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.506 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.510 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.514 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.518 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.522 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.526 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.536 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.538 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.542 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.546 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.550 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.554 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.558 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.562 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.566 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.570 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.574 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.578 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.582 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.586 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.590 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.594 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.598 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.602 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.606 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.610 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.614 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.618 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.622 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.626 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.630 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.634 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.638 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.642 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.646 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.650 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.654 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.658 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.662 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.666 [conn11] Waiting for commit to finish m30000| Mon Dec 
17 15:32:10.670 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.674 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.678 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.682 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.686 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.690 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.694 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.698 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.702 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.706 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.710 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.714 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.718 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.722 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.726 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.730 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.740 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.750 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.760 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.770 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.780 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.790 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.798 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.802 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.806 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.810 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.814 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.818 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.822 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.826 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.830 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.834 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.838 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.842 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.846 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.850 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.854 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.858 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.862 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.866 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.870 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.874 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.878 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.882 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.886 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:10.890 [conn11] Waiting for commit to finish m30001| Mon Dec 17 15:32:10.893 [conn5] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:22 reslen:51 422ms m30000| Mon Dec 17 15:32:10.893 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 51.38014652766287 } -> { a: 62.87552835419774 } m30000| Mon Dec 17 15:32:10.893 
[migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:10-11", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776330893), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, step1 of 5: 6, step2 of 5: 0, step3 of 5: 77, step4 of 5: 0, step5 of 5: 1419 } } m30000| Mon Dec 17 15:32:10.902 [conn11] command admin.$cmd command: { _recvChunkCommit: 1 } ntoreturn:1 keyUpdates:0 reslen:263 442ms m30001| Mon Dec 17 15:32:10.903 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 698, clonedBytes: 751048, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Mon Dec 17 15:32:10.903 [conn8] moveChunk updating self version to: 7|1||50cf812d5ec0810ee359b569 through { a: 62.87552835419774 } -> { a: 89.16067937389016 } for collection 'test.foo' m30001| Mon Dec 17 15:32:10.903 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:10-84", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776330903), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:10.903 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Mon Dec 17 15:32:10.907 [FileAllocator] done allocating datafile /data/db/mrShardedOutput0/test.2, size: 64MB, took 2.602 secs m30001| Mon Dec 17 15:32:11.254 [cleanupOldData-50cf8145c94e4981dc6c1b27] (looping 201) waiting to cleanup test.foo from { a: 21.16596954874694 } -> { a: 40.64535931684077 } # cursors:1 m30001| Mon Dec 17 15:32:11.254 [cleanupOldData-50cf8145c94e4981dc6c1b27] cursors: 69090894570165 m30999| Mon Dec 17 15:32:11.378 [Balancer] moveChunk result: { ok: 1.0 } m30001| Mon Dec 17 15:32:11.377 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:11.377 [conn8] forking for cleanup of chunk data m30001| Mon Dec 17 15:32:11.377 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:11.377 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:11.377 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
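The step timings in these events show where this unusually slow round (1992ms, reported just below) went: the recipient's moveChunk.to logs step5 of 5: 1419 and the donor's moveChunk.from just below logs step5 of 6: 1870, i.e. the catch-up-and-commit phase that produced the long run of "Waiting for commit to finish" polls above, with the 973ms and 422ms _transferMods calls handing over writes that arrived during the clone. Those per-step numbers stay queryable afterwards; a sketch (note the field names contain spaces):

  var cfg = db.getSiblingDB("config");
  cfg.changelog.find({ what: "moveChunk.from", ns: "test.foo" },
                     { "details.step4 of 6": 1,   // data clone
                       "details.step5 of 6": 1,   // catch-up + commit
                       "details.note": 1 })       // "aborted" on failed moves
               .sort({ time: -1 })
               .forEach(printjson);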
m30001| Mon Dec 17 15:32:11.377 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:11-85", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776331377), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 117, step5 of 6: 1870, step6 of 6: 0 } } m30001| Mon Dec 17 15:32:11.378 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 51.38014652766287 }, max: { a: 62.87552835419774 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_51.38014652766287", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 5 locks(micros) W:68 r:1828 w:67 reslen:37 1992ms m30001| Mon Dec 17 15:32:11.379 [cleanupOldData-50cf814bc94e4981dc6c1b33] (start) waiting to cleanup test.foo from { a: 51.38014652766287 } -> { a: 62.87552835419774 }, # cursors remaining: 1 m30001| Mon Dec 17 15:32:11.379 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab5282')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:11.380 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf814bc94e4981dc6c1b34 m30001| Mon Dec 17 15:32:11.380 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:11-86", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776331380), what: "moveChunk.start", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:11.381 [conn8] moveChunk request accepted at version 7|1||50cf81365ec0810ee359b56b m30001| Mon Dec 17 15:32:11.382 [conn8] moveChunk number of documents: 462 m30999| Mon Dec 17 15:32:11.379 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 58 version: 7|1||50cf812d5ec0810ee359b569 based on: 6|1||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:11.379 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 7|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5282') }max: { _id: ObjectId('50cf812d256383d556ab5450') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:11.386 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:11.390 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, shardKeyPattern: { _id: 1 }, state: "clone", 
counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:11.391 [conn3] 18600/59980 31% m30001| Mon Dec 17 15:32:11.400 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 8, clonedBytes: 8648, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:11.410 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 266, clonedBytes: 287546, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:11.418 [cleanupOldData-50cf814bc94e4981dc6c1b33] (looping 1) waiting to cleanup test.foo from { a: 51.38014652766287 } -> { a: 62.87552835419774 } # cursors:1 m30001| Mon Dec 17 15:32:11.418 [cleanupOldData-50cf814bc94e4981dc6c1b33] cursors: 69090894570165 m30001| Mon Dec 17 15:32:11.430 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:11.431 [conn8] moveChunk setting version to: 8|0||50cf81365ec0810ee359b56b m30000| Mon Dec 17 15:32:11.419 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Mon Dec 17 15:32:11.419 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab5282') } -> { _id: ObjectId('50cf812d256383d556ab5450') } m30000| Mon Dec 17 15:32:11.431 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:11.433 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab5282') } -> { _id: ObjectId('50cf812d256383d556ab5450') } m30000| Mon Dec 17 15:32:11.433 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:11-12", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776331433), what: "moveChunk.to", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 35, step4 of 5: 0, step5 of 5: 14 } } m30001| Mon Dec 17 15:32:11.440 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Mon Dec 17 15:32:11.440 [conn8] moveChunk updating self version to: 8|1||50cf81365ec0810ee359b56b through { _id: ObjectId('50cf812d256383d556ab5450') } -> { _id: ObjectId('50cf812d256383d556ab561e') } for collection 'test.mrShardedOut' m30001| Mon Dec 17 15:32:11.441 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:11-87", server: "domU-12-31-39-01-70-B4", clientAddr: 
"127.0.0.1:42550", time: new Date(1355776331441), what: "moveChunk.commit", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:11.441 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Mon Dec 17 15:32:11.442 [Balancer] moveChunk result: { ok: 1.0 } m30001| Mon Dec 17 15:32:11.441 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:11.441 [conn8] forking for cleanup of chunk data m30001| Mon Dec 17 15:32:11.441 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:11.441 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:11.442 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:32:11.442 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:11-88", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776331442), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab5282') }, max: { _id: ObjectId('50cf812d256383d556ab5450') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 48, step5 of 6: 10, step6 of 6: 0 } } m30001| Mon Dec 17 15:32:11.443 [cleanupOldData-50cf814bc94e4981dc6c1b35] (start) waiting to cleanup test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab5282') } -> { _id: ObjectId('50cf812d256383d556ab5450') }, # cursors remaining: 0 m30999| Mon Dec 17 15:32:11.443 [Balancer] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 59 version: 8|1||50cf81365ec0810ee359b56b based on: 7|1||50cf81365ec0810ee359b56b m30999| Mon Dec 17 15:32:11.443 [Balancer] *** end of balancing round m30999| Mon Dec 17 15:32:11.444 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. 
m30001| Mon Dec 17 15:32:11.470 [cleanupOldData-50cf814bc94e4981dc6c1b35] waiting to remove documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab5282') } -> { _id: ObjectId('50cf812d256383d556ab5450') }
m30001| Mon Dec 17 15:32:11.470 [cleanupOldData-50cf814bc94e4981dc6c1b35] moveChunk starting delete for: test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab5282') } -> { _id: ObjectId('50cf812d256383d556ab5450') }
m30001| Mon Dec 17 15:32:11.595 [cleanupOldData-50cf814bc94e4981dc6c1b35] moveChunk deleted 462 documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab5282') } -> { _id: ObjectId('50cf812d256383d556ab5450') }
m30999| Mon Dec 17 15:32:12.451 [Balancer] Refreshing MaxChunkSize: 1
m30999| Mon Dec 17 15:32:12.451 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 )
m30999| Mon Dec 17 15:32:12.451 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383",
m30999| "when" : { "$date" : "Mon Dec 17 15:32:12 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "50cf814c5ec0810ee359b575" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "50cf81495ec0810ee359b574" } }
m30001| Mon Dec 17 15:32:12.455 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 62.87552835419774 }, max: { a: 89.16067937389016 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_62.87552835419774", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:12.455 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf814cc94e4981dc6c1b36
m30001| Mon Dec 17 15:32:12.455 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:12-89", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776332455), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 62.87552835419774 }, max: { a: 89.16067937389016 }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:12.457 [conn8] moveChunk request accepted at version 7|1||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:32:12.458 [conn8] can't move chunk of size (approximately) 1735808 because maximum size allowed to move is 1048576 ns: test.foo { a: 62.87552835419774 } -> { a: 89.16067937389016 }
m30001| Mon Dec 17 15:32:12.458 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Mon Dec 17 15:32:12.452 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf814c5ec0810ee359b575
m30999| Mon Dec 17 15:32:12.452 [Balancer] *** start balancing round
m30999| Mon Dec 17 15:32:12.453 [Balancer] shard0001 has more chunks me:38 best: shard0000:6
m30999| Mon Dec 17 15:32:12.453 [Balancer] collection : test.foo
m30999| Mon Dec 17 15:32:12.453 [Balancer] donor : shard0001 chunks on 38
m30999| Mon Dec 17 15:32:12.454 [Balancer] receiver : shard0000 chunks on 6
m30999| Mon Dec 17 15:32:12.454 [Balancer] threshold : 2
m30999| Mon Dec 17 15:32:12.454 [Balancer] ns: test.foo going to move { _id: "test.foo-a_62.87552835419774", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 62.87552835419774 }, max: { a: 89.16067937389016 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:12.454 [Balancer] shard0001 has more chunks me:58 best: shard0000:7
m30999| Mon Dec 17 15:32:12.454 [Balancer] collection : test.mrShardedOut
m30999| Mon Dec 17 15:32:12.454 [Balancer] donor : shard0001 chunks on 58
m30999| Mon Dec 17 15:32:12.454 [Balancer] receiver : shard0000 chunks on 7
m30999| Mon Dec 17 15:32:12.454 [Balancer] threshold : 2
m30999| Mon Dec 17 15:32:12.454 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab5450')", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab5450') }, max: { _id: ObjectId('50cf812d256383d556ab561e') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:12.454 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 7|1||000000000000000000000000min: { a: 62.87552835419774 }max: { a: 89.16067937389016 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30999| Mon Dec 17 15:32:12.459 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 1735808, errmsg: "chunk too big to move", ok: 0.0 }
m30999| Mon Dec 17 15:32:12.459 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 1735808, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { a: 62.87552835419774 } max: { a: 62.87552835419774 }
m30999| Mon Dec 17 15:32:12.459 [Balancer] forcing a split because migrate failed for size reasons
m30001| Mon Dec 17 15:32:12.459 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:12.459 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:32:12.459 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:12-90", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776332459), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 62.87552835419774 }, max: { a: 89.16067937389016 }, step1 of 6: 0, step2 of 6: 1, note: "aborted" } }
m30001| Mon Dec 17 15:32:12.459 [conn8] request split points lookup for chunk test.foo { : 62.87552835419774 } -->> { : 89.16067937389016 }
m30001| Mon Dec 17 15:32:12.461 [conn8] splitVector doing another cycle because of force, keyCount now: 764
m30001| Mon Dec 17 15:32:12.462 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 62.87552835419774 }, max: { a: 89.16067937389016 }, from: "shard0001", splitKeys: [ { a: 75.93300496228039 } ], shardId: "test.foo-a_62.87552835419774", configdb: "localhost:30000" }
m30999| Mon Dec 17 15:32:12.466 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 60 version: 7|3||50cf812d5ec0810ee359b569 based on: 7|1||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:32:12.466 [Balancer] forced split results: { ok: 1.0 }
m30999| Mon Dec 17 15:32:12.466 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 8|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5450') }max: { _id: ObjectId('50cf812d256383d556ab561e') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:12.463 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf814cc94e4981dc6c1b37
m30001| Mon Dec 17 15:32:12.464 [conn8] splitChunk accepted at version 7|1||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:32:12.464 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:12-91", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776332464), what: "split", ns: "test.foo", details: { before: { min: { a: 62.87552835419774 }, max: { a: 89.16067937389016 }, lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, lastmod: Timestamp 7000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, lastmod: Timestamp 7000|3, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } }
m30001| Mon Dec 17 15:32:12.465 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
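
Above, the balancer's attempt to move { a: 62.87... } -> { a: 89.16... } is rejected with chunkTooBig (an estimated 1735808 bytes against the test's 1048576-byte limit), so it forces a split at the median key splitVector returned. A sketch of issuing the same split by hand through the mongos; the split point is the one chosen above, but any key interior to the chunk would do:

// Force-split the oversized chunk at the key splitVector picked.
db.adminCommand({ split: "test.foo", middle: { a: 75.93300496228039 } });
// The sh.splitAt() shell helper wraps the same command:
// sh.splitAt("test.foo", { a: 75.93300496228039 });
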
m30001| Mon Dec 17 15:32:12.466 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab5450') }, max: { _id: ObjectId('50cf812d256383d556ab561e') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab5450')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:12.467 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf814cc94e4981dc6c1b38
m30001| Mon Dec 17 15:32:12.467 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:12-92", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776332467), what: "moveChunk.start", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab5450') }, max: { _id: ObjectId('50cf812d256383d556ab561e') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:12.467 [conn8] moveChunk request accepted at version 8|1||50cf81365ec0810ee359b56b
m30001| Mon Dec 17 15:32:12.468 [conn8] moveChunk number of documents: 462
m30001| Mon Dec 17 15:32:12.470 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab5450') }, max: { _id: ObjectId('50cf812d256383d556ab561e') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:12.480 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab5450') }, max: { _id: ObjectId('50cf812d256383d556ab561e') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 131, clonedBytes: 141611, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:12.491 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab5450') }, max: { _id: ObjectId('50cf812d256383d556ab561e') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 381, clonedBytes: 411861, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Mon Dec 17 15:32:12.495 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:12.495 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab5450') } -> { _id: ObjectId('50cf812d256383d556ab561e') }
m30001| Mon Dec 17 15:32:12.510 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab5450') }, max: { _id: ObjectId('50cf812d256383d556ab561e') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:12.510 [conn8] moveChunk setting version to: 9|0||50cf81365ec0810ee359b56b
m30000| Mon Dec 17 15:32:12.511 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:12.511 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab5450') } -> { _id: ObjectId('50cf812d256383d556ab561e') }
m30000| Mon Dec 17 15:32:12.511 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:12-13", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776332511), what: "moveChunk.to", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab5450') }, max: { _id: ObjectId('50cf812d256383d556ab561e') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 25, step4 of 5: 0, step5 of 5: 15 } }
m30999| Mon Dec 17 15:32:12.521 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:32:12.522 [Balancer] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 61 version: 9|1||50cf81365ec0810ee359b56b based on: 8|1||50cf81365ec0810ee359b56b
m30999| Mon Dec 17 15:32:12.523 [Balancer] *** end of balancing round
m30001| Mon Dec 17 15:32:12.520 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab5450') }, max: { _id: ObjectId('50cf812d256383d556ab561e') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:12.520 [conn8] moveChunk updating self version to: 9|1||50cf81365ec0810ee359b56b through { _id: ObjectId('50cf812d256383d556ab561e') } -> { _id: ObjectId('50cf812d256383d556ab57ec') } for collection 'test.mrShardedOut'
m30001| Mon Dec 17 15:32:12.521 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:12-93", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776332521), what: "moveChunk.commit", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab5450') }, max: { _id: ObjectId('50cf812d256383d556ab561e') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:12.521 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:12.521 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:12.521 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:12.521 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:12.521 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:12.521 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:32:12.521 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:12-94", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776332521), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab5450') }, max: { _id: ObjectId('50cf812d256383d556ab561e') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 42, step5 of 6: 10, step6 of 6: 0 } }
m30999| Mon Dec 17 15:32:12.523 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked.
m30001| Mon Dec 17 15:32:12.523 [cleanupOldData-50cf814cc94e4981dc6c1b39] (start) waiting to cleanup test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab5450') } -> { _id: ObjectId('50cf812d256383d556ab561e') }, # cursors remaining: 0
m30001| Mon Dec 17 15:32:12.550 [cleanupOldData-50cf814cc94e4981dc6c1b39] waiting to remove documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab5450') } -> { _id: ObjectId('50cf812d256383d556ab561e') }
m30001| Mon Dec 17 15:32:12.550 [cleanupOldData-50cf814cc94e4981dc6c1b39] moveChunk starting delete for: test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab5450') } -> { _id: ObjectId('50cf812d256383d556ab561e') }
m30001| Mon Dec 17 15:32:13.301 [cleanupOldData-50cf814cc94e4981dc6c1b39] moveChunk deleted 462 documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab5450') } -> { _id: ObjectId('50cf812d256383d556ab561e') }
m30001| Mon Dec 17 15:32:13.398 [cleanupOldData-50cf8148c94e4981dc6c1b30] (looping 201) waiting to cleanup test.foo from { a: 40.64535931684077 } -> { a: 51.38014652766287 } # cursors:1
m30001| Mon Dec 17 15:32:13.398 [cleanupOldData-50cf8148c94e4981dc6c1b30] cursors: 69090894570165
m30999| Mon Dec 17 15:32:13.527 [Balancer] Refreshing MaxChunkSize: 1
m30999| Mon Dec 17 15:32:13.527 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 )
m30999| Mon Dec 17 15:32:13.528 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383",
m30999| "when" : { "$date" : "Mon Dec 17 15:32:13 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "50cf814d5ec0810ee359b576" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "50cf814c5ec0810ee359b575" } }
m30999| Mon Dec 17 15:32:13.528 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf814d5ec0810ee359b576
m30999| Mon Dec 17 15:32:13.528 [Balancer] *** start balancing round
m30001| Mon Dec 17 15:32:13.651 [conn8] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:52 reslen:1969 121ms
m30999| Mon Dec 17 15:32:13.652 [Balancer] shard0001 has more chunks me:39 best: shard0000:6
m30999| Mon Dec 17 15:32:13.652 [Balancer] collection : test.foo
m30999| Mon Dec 17 15:32:13.652 [Balancer] donor : shard0001 chunks on 39
m30999| Mon Dec 17 15:32:13.652 [Balancer] receiver : shard0000 chunks on 6
m30999| Mon Dec 17 15:32:13.652 [Balancer] threshold : 2
m30999| Mon Dec 17 15:32:13.652 [Balancer] ns: test.foo going to move { _id: "test.foo-a_62.87552835419774", lastmod: Timestamp 7000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:13.652 [Balancer] shard0001 has more chunks me:57 best: shard0000:8
m30999| Mon Dec 17 15:32:13.652 [Balancer] collection : test.mrShardedOut
m30999| Mon Dec 17 15:32:13.652 [Balancer] donor : shard0001 chunks on 57
m30999| Mon Dec 17 15:32:13.652 [Balancer] receiver : shard0000 chunks on 8
m30999| Mon Dec 17 15:32:13.652 [Balancer] threshold : 2
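
The two JSON documents printed by the Balancer above are its attempt on the 'balancer' entry in config.locks: the first is the state-1 document it wants to write while acquiring, the second the state-0 (free) document currently stored. A sketch for watching that lock from a shell connected to the mongos; config.locks is the standard collection, queried here purely for inspection:

// state 0 means free; the balancer writes state 1 while acquiring,
// as the documents above show.
var conf = db.getSiblingDB("config");
printjson(conf.locks.findOne({ _id: "balancer" }));
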
m30999| Mon Dec 17 15:32:13.652 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab561e')", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:13.652 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 7|2||000000000000000000000000min: { a: 62.87552835419774 }max: { a: 75.93300496228039 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:13.653 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_62.87552835419774", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:13.655 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf814dc94e4981dc6c1b3a
m30001| Mon Dec 17 15:32:13.655 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:13-95", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776333655), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:13.655 [conn8] moveChunk request accepted at version 7|3||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:32:13.656 [conn8] moveChunk number of documents: 764
m30001| Mon Dec 17 15:32:13.687 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:13.690 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:13.698 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:13.712 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 194, clonedBytes: 208744, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:13.732 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 588, clonedBytes: 632688, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Mon Dec 17 15:32:13.744 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:13.744 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 62.87552835419774 } -> { a: 75.93300496228039 }
m30001| Mon Dec 17 15:32:13.767 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 764, clonedBytes: 822064, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:13.770 [conn8] moveChunk setting version to: 8|0||50cf812d5ec0810ee359b569
m30000| Mon Dec 17 15:32:13.770 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:13.778 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 62.87552835419774 } -> { a: 75.93300496228039 }
m30000| Mon Dec 17 15:32:13.779 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:13-14", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776333779), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 60, step4 of 5: 0, step5 of 5: 34 } }
m30001| Mon Dec 17 15:32:13.779 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 764, clonedBytes: 822064, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:13.779 [conn8] moveChunk updating self version to: 8|1||50cf812d5ec0810ee359b569 through { a: 75.93300496228039 } -> { a: 89.16067937389016 } for collection 'test.foo'
m30001| Mon Dec 17 15:32:13.780 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:13-96", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776333780), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:13.780 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:13.780 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:13.780 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:13.780 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:13.780 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:13.780 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
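
Each migration above walks the same donor-side sequence: the donor polls the recipient (the "data transfer progress" lines) through clone and catchup until the recipient reports steady, then enters the critical section, bumps the version, and waits for _recvChunkCommit. A sketch of spotting an in-flight moveChunk from a shell on the donor shard, assuming 2.x-era currentOp output where the command document appears under op.query:

// List in-progress moveChunk operations on the donor.
db.currentOp().inprog.filter(function (op) {
    return op.query && op.query.moveChunk;
}).forEach(printjson);
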
m30001| Mon Dec 17 15:32:13.780 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:13-97", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776333780), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 27, step4 of 6: 84, step5 of 6: 13, step6 of 6: 0 } }
m30001| Mon Dec 17 15:32:13.780 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 62.87552835419774 }, max: { a: 75.93300496228039 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_62.87552835419774", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:33 r:1070 w:71 reslen:37 127ms
m30001| Mon Dec 17 15:32:13.782 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab561e')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:13.783 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf814dc94e4981dc6c1b3b
m30001| Mon Dec 17 15:32:13.783 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:13-98", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776333783), what: "moveChunk.start", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:13.784 [cleanupOldData-50cf814dc94e4981dc6c1b3c] (start) waiting to cleanup test.foo from { a: 62.87552835419774 } -> { a: 75.93300496228039 }, # cursors remaining: 1
m30001| Mon Dec 17 15:32:13.784 [conn8] moveChunk request accepted at version 9|1||50cf81365ec0810ee359b56b
m30999| Mon Dec 17 15:32:13.780 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:32:13.781 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 62 version: 8|1||50cf812d5ec0810ee359b569 based on: 7|3||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:32:13.782 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 9|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab561e') }max: { _id: ObjectId('50cf812d256383d556ab57ec') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:13.784 [conn8] moveChunk number of documents: 462
m30001| Mon Dec 17 15:32:13.789 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:13.791 [cleanupOldData-50cf8143c94e4981dc6c1b1f] (looping 401) waiting to cleanup test.foo from { a: 0.3993422724306583 } -> { a: 10.46284288167953 } # cursors:1
m30001| Mon Dec 17 15:32:13.791 [cleanupOldData-50cf8143c94e4981dc6c1b1f] cursors: 69090894570165
m30001| Mon Dec 17 15:32:13.798 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:13.807 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 15, clonedBytes: 16215, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:13.819 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, shardKeyPattern: { _id: 1 }, state: "clone", counts: { cloned: 347, clonedBytes: 375107, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:13.824 [cleanupOldData-50cf814dc94e4981dc6c1b3c] (looping 1) waiting to cleanup test.foo from { a: 62.87552835419774 } -> { a: 75.93300496228039 } # cursors:1
m30001| Mon Dec 17 15:32:13.824 [cleanupOldData-50cf814dc94e4981dc6c1b3c] cursors: 69090894570165
m30000| Mon Dec 17 15:32:13.827 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:13.827 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab561e') } -> { _id: ObjectId('50cf812d256383d556ab57ec') }
m30001| Mon Dec 17 15:32:13.839 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:13.839 [conn8] moveChunk setting version to: 10|0||50cf81365ec0810ee359b56b
m30000| Mon Dec 17 15:32:13.839 [conn11] Waiting for commit to finish
m30001| Mon Dec 17 15:32:13.859 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:13.859 [conn8] moveChunk updating self version to: 10|1||50cf81365ec0810ee359b56b through { _id: ObjectId('50cf812d256383d556ab57ec') } -> { _id: ObjectId('50cf812d256383d556ab59ba') } for collection 'test.mrShardedOut'
m30001| Mon Dec 17 15:32:13.860 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:13-99", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776333860), what: "moveChunk.commit", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:13.860 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
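
Every moveChunk request above carries waitForDelete: false, so the donor forks the range delete, and the cleanupOldData threads then loop waiting for open cursors (the "# cursors:1" lines, all naming cursor 69090894570165) to drain before removing the moved documents. A sketch of a manual, synchronous migration through the mongos; the shard-key value and the shape of the wait-for-delete option (_waitForDelete here) are illustrative assumptions, since the option's spelling varies by version:

// Move one chunk and block until the donor has deleted the old range,
// rather than forking a cleanup that can stall behind open cursors.
db.adminCommand({
    moveChunk: "test.foo",
    find: { a: 63 },          // hypothetical key inside the chunk to move
    to: "shard0000",
    _waitForDelete: true      // option name varies across server versions
});
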
m30001| Mon Dec 17 15:32:13.860 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:13.860 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:13.860 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:13.860 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:13.860 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:32:13.860 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:13-100", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776333860), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 54, step5 of 6: 20, step6 of 6: 0 } }
m30999| Mon Dec 17 15:32:13.861 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:32:13.862 [Balancer] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 63 version: 10|1||50cf81365ec0810ee359b56b based on: 9|1||50cf81365ec0810ee359b56b
m30999| Mon Dec 17 15:32:13.862 [Balancer] *** end of balancing round
m30001| Mon Dec 17 15:32:13.889 [cleanupOldData-50cf814dc94e4981dc6c1b3d] (start) waiting to cleanup test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab561e') } -> { _id: ObjectId('50cf812d256383d556ab57ec') }, # cursors remaining: 0
m30999| Mon Dec 17 15:32:13.889 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked.
m30000| Mon Dec 17 15:32:13.849 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:13.851 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab561e') } -> { _id: ObjectId('50cf812d256383d556ab57ec') }
m30000| Mon Dec 17 15:32:13.851 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:13-15", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776333851), what: "moveChunk.to", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab561e') }, max: { _id: ObjectId('50cf812d256383d556ab57ec') }, step1 of 5: 3, step2 of 5: 0, step3 of 5: 23, step4 of 5: 3, step5 of 5: 24 } }
m30001| Mon Dec 17 15:32:13.910 [cleanupOldData-50cf814dc94e4981dc6c1b3d] waiting to remove documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab561e') } -> { _id: ObjectId('50cf812d256383d556ab57ec') }
m30001| Mon Dec 17 15:32:13.910 [cleanupOldData-50cf814dc94e4981dc6c1b3d] moveChunk starting delete for: test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab561e') } -> { _id: ObjectId('50cf812d256383d556ab57ec') }
m30001| Mon Dec 17 15:32:14.611 [conn3] 46200/59980 77%
m30001| Mon Dec 17 15:32:14.871 [cleanupOldData-50cf814dc94e4981dc6c1b3d] moveChunk deleted 462 documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab561e') } -> { _id: ObjectId('50cf812d256383d556ab57ec') }
m30999| Mon Dec 17 15:32:14.891 [Balancer] Refreshing MaxChunkSize: 1
m30999| Mon Dec 17 15:32:14.891 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 )
m30999| Mon Dec 17 15:32:14.891 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383:
m30999| { "state" : 1,
m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886",
m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383",
m30999| "when" : { "$date" : "Mon Dec 17 15:32:14 2012" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "50cf814e5ec0810ee359b577" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "50cf814d5ec0810ee359b576" } }
m30999| Mon Dec 17 15:32:14.892 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf814e5ec0810ee359b577
m30999| Mon Dec 17 15:32:14.892 [Balancer] *** start balancing round
m30999| Mon Dec 17 15:32:14.894 [Balancer] shard0001 has more chunks me:38 best: shard0000:7
m30999| Mon Dec 17 15:32:14.894 [Balancer] collection : test.foo
m30999| Mon Dec 17 15:32:14.894 [Balancer] donor : shard0001 chunks on 38
m30999| Mon Dec 17 15:32:14.894 [Balancer] receiver : shard0000 chunks on 7
m30999| Mon Dec 17 15:32:14.894 [Balancer] threshold : 2
m30999| Mon Dec 17 15:32:14.894 [Balancer] ns: test.foo going to move { _id: "test.foo-a_75.93300496228039", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:14.894 [Balancer] shard0001 has more chunks me:56 best: shard0000:9
m30999| Mon Dec 17 15:32:14.894 [Balancer] collection : test.mrShardedOut
m30999| Mon Dec 17 15:32:14.894 [Balancer] donor : shard0001 chunks on 56
m30999| Mon Dec 17 15:32:14.894 [Balancer] receiver : shard0000 chunks on 9
m30999| Mon Dec 17 15:32:14.894 [Balancer] threshold : 2
m30999| Mon Dec 17 15:32:14.894 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab57ec')", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab57ec') }, max: { _id: ObjectId('50cf812d256383d556ab59ba') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Mon Dec 17 15:32:14.894 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 8|1||000000000000000000000000min: { a: 75.93300496228039 }max: { a: 89.16067937389016 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:14.895 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_75.93300496228039", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:14.895 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf814ec94e4981dc6c1b3e
m30001| Mon Dec 17 15:32:14.896 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:14-101", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776334895), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:14.896 [conn8] moveChunk request accepted at version 8|1||50cf812d5ec0810ee359b569
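
The donor/receiver lines above come straight from per-shard chunk counts: each round the balancer picks the most-loaded shard as donor and the least-loaded as receiver whenever the spread exceeds the threshold (2 here). A sketch reproducing those counts from config.chunks via the mongos:

// Recompute the balancer's per-shard chunk counts for one collection.
var conf = db.getSiblingDB("config");
["shard0000", "shard0001"].forEach(function (s) {
    print(s + ": " + conf.chunks.count({ ns: "test.foo", shard: s }));
});
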
m30001| Mon Dec 17 15:32:14.898 [conn8] moveChunk number of documents: 764
m30001| Mon Dec 17 15:32:14.907 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:14.911 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:14.918 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:14.931 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:14.952 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 336, clonedBytes: 361536, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:14.987 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 503, clonedBytes: 541228, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:15.022 [cleanupOldData-50cf8144c94e4981dc6c1b23] (looping 401) waiting to cleanup test.foo from { a: 10.46284288167953 } -> { a: 21.16596954874694 } # cursors:1
m30001| Mon Dec 17 15:32:15.023 [cleanupOldData-50cf8144c94e4981dc6c1b23] cursors: 69090894570165
m30001| Mon Dec 17 15:32:15.055 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 503, clonedBytes: 541228, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:15.187 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 503, clonedBytes: 541228, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:15.447 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 503, clonedBytes: 541228, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Mon Dec 17 15:32:15.768 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:15.768 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 75.93300496228039 } -> { a: 89.16067937389016 }
m30001| Mon Dec 17 15:32:15.963 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 764, clonedBytes: 822064, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:16.037 [conn8] moveChunk setting version to: 9|0||50cf812d5ec0810ee359b569
m30001| Mon Dec 17 15:32:16.038 [conn5] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:34 reslen:51 226ms
m30000| Mon Dec 17 15:32:16.038 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:16.042 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:16.051 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 75.93300496228039 } -> { a: 89.16067937389016 }
m30000| Mon Dec 17 15:32:16.051 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:16-16", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776336051), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 869, step4 of 5: 0, step5 of 5: 282 } }
m30001| Mon Dec 17 15:32:16.053 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 764, clonedBytes: 822064, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:16.053 [conn8] moveChunk updating self version to: 9|1||50cf812d5ec0810ee359b569 through { a: 89.16067937389016 } -> { a: 119.0328269731253 } for collection 'test.foo'
m30001| Mon Dec 17 15:32:16.054 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:16-102", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776336054), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:16.054 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:16.054 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:16.054 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:16.054 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:16.054 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:16.054 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
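
The "moveChunk setting version to: 9|0" and "updating self version to: 9|1" lines above show the version bookkeeping: each committed migration bumps the collection's major version, with the recipient taking major|0 and the donor stamping one of its remaining chunks with major|1. A sketch for reading those versions back from config.chunks, where they live in each chunk's lastmod field:

// Highest (major|minor) chunk version currently recorded for test.foo.
var conf = db.getSiblingDB("config");
conf.chunks.find({ ns: "test.foo" }, { lastmod: 1, shard: 1, min: 1 })
           .sort({ lastmod: -1 }).limit(1)
           .forEach(printjson);
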
m30001| Mon Dec 17 15:32:16.054 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:16-103", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776336054), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 7, step4 of 6: 1058, step5 of 6: 91, step6 of 6: 0 } }
m30001| Mon Dec 17 15:32:16.054 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 75.93300496228039 }, max: { a: 89.16067937389016 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_75.93300496228039", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 5 locks(micros) W:27 r:2291 w:64 reslen:37 1159ms
m30001| Mon Dec 17 15:32:16.056 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab57ec') }, max: { _id: ObjectId('50cf812d256383d556ab59ba') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab57ec')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Mon Dec 17 15:32:16.063 [cleanupOldData-50cf8150c94e4981dc6c1b3f] (start) waiting to cleanup test.foo from { a: 75.93300496228039 } -> { a: 89.16067937389016 }, # cursors remaining: 1
m30999| Mon Dec 17 15:32:16.055 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:32:16.056 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 64 version: 9|1||50cf812d5ec0810ee359b569 based on: 8|1||50cf812d5ec0810ee359b569
m30999| Mon Dec 17 15:32:16.056 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 10|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab57ec') }max: { _id: ObjectId('50cf812d256383d556ab59ba') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Mon Dec 17 15:32:16.064 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8150c94e4981dc6c1b40
m30001| Mon Dec 17 15:32:16.064 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:16-104", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776336064), what: "moveChunk.start", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab57ec') }, max: { _id: ObjectId('50cf812d256383d556ab59ba') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:16.065 [conn8] moveChunk request accepted at version 10|1||50cf81365ec0810ee359b56b
m30001| Mon Dec 17 15:32:16.065 [conn8] moveChunk number of documents: 462
m30001| Mon Dec 17 15:32:16.073 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab57ec') }, max: { _id: ObjectId('50cf812d256383d556ab59ba') }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:16.083 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab57ec') }, max: { _id: ObjectId('50cf812d256383d556ab59ba') }, shardKeyPattern: { _id: 1 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Mon Dec 17 15:32:16.107 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Mon Dec 17 15:32:16.107 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab57ec') } -> { _id: ObjectId('50cf812d256383d556ab59ba') }
m30001| Mon Dec 17 15:32:16.115 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab57ec') }, max: { _id: ObjectId('50cf812d256383d556ab59ba') }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Mon Dec 17 15:32:16.115 [cleanupOldData-50cf8150c94e4981dc6c1b3f] (looping 1) waiting to cleanup test.foo from { a: 75.93300496228039 } -> { a: 89.16067937389016 } # cursors:1
m30001| Mon Dec 17 15:32:16.115 [cleanupOldData-50cf8150c94e4981dc6c1b3f] cursors: 69090894570165
m30001| Mon Dec 17 15:32:16.115 [conn8] moveChunk setting version to: 11|0||50cf81365ec0810ee359b56b
m30001| Mon Dec 17 15:32:16.134 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf812d256383d556ab57ec') }, max: { _id: ObjectId('50cf812d256383d556ab59ba') }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 462, clonedBytes: 499422, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Mon Dec 17 15:32:16.134 [conn8] moveChunk updating self version to: 11|1||50cf81365ec0810ee359b56b through { _id: ObjectId('50cf812d256383d556ab59ba') } -> { _id: ObjectId('50cf812d256383d556ab5b88') } for collection 'test.mrShardedOut'
m30001| Mon Dec 17 15:32:16.135 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:16-105", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776336135), what: "moveChunk.commit", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab57ec') }, max: { _id: ObjectId('50cf812d256383d556ab59ba') }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:32:16.135 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Mon Dec 17 15:32:16.135 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:32:16.136 [Balancer] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 65 version: 11|1||50cf81365ec0810ee359b56b based on: 10|1||50cf81365ec0810ee359b56b
m30999| Mon Dec 17 15:32:16.137 [Balancer] *** end of balancing round
m30999| Mon Dec 17 15:32:16.137 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked.
m30000| Mon Dec 17 15:32:16.115 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:16.118 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:16.122 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:16.126 [conn11] Waiting for commit to finish
m30000| Mon Dec 17 15:32:16.131 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf812d256383d556ab57ec') } -> { _id: ObjectId('50cf812d256383d556ab59ba') }
m30000| Mon Dec 17 15:32:16.131 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:16-17", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776336131), what: "moveChunk.to", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab57ec') }, max: { _id: ObjectId('50cf812d256383d556ab59ba') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 22, step4 of 5: 0, step5 of 5: 24 } }
m30001| Mon Dec 17 15:32:16.135 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:16.135 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:32:16.135 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:32:16.135 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:32:16.135 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:32:16.135 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:16-106", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776336135), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab57ec') }, max: { _id: ObjectId('50cf812d256383d556ab59ba') }, step1 of 6: 0, step2 of 6: 8, step3 of 6: 0, step4 of 6: 49, step5 of 6: 20, step6 of 6: 0 } }
m30001| Mon Dec 17 15:32:16.137 [cleanupOldData-50cf8150c94e4981dc6c1b41] (start) waiting to cleanup test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab57ec') } -> { _id: ObjectId('50cf812d256383d556ab59ba') }, # cursors remaining: 0
m30001| Mon Dec 17 15:32:16.158 [cleanupOldData-50cf8150c94e4981dc6c1b41] waiting to remove documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab57ec') } -> { _id: ObjectId('50cf812d256383d556ab59ba') }
m30001| Mon Dec 17 15:32:16.159 [cleanupOldData-50cf8150c94e4981dc6c1b41] moveChunk starting delete for: test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab57ec') } -> { _id: ObjectId('50cf812d256383d556ab59ba') }
m30001| Mon Dec 17 15:32:16.199 [cleanupOldData-50cf8145c94e4981dc6c1b27] (looping 401) waiting to cleanup test.foo from { a: 21.16596954874694 } -> { a: 40.64535931684077 } # cursors:1
m30001| Mon Dec 17 15:32:16.199 [cleanupOldData-50cf8145c94e4981dc6c1b27] cursors: 69090894570165
m30001| Mon Dec 17 15:32:16.290 [cleanupOldData-50cf8150c94e4981dc6c1b41] moveChunk deleted 462 documents for test.mrShardedOut from { _id: ObjectId('50cf812d256383d556ab57ec') } -> { _id: ObjectId('50cf812d256383d556ab59ba') }
m30001| Mon Dec 17 15:32:16.531 [cleanupOldData-50cf814bc94e4981dc6c1b33] (looping 201) waiting to cleanup test.foo from { a: 51.38014652766287 } -> { a: 62.87552835419774 } # cursors:1
m30001| Mon Dec 17 15:32:16.531 [cleanupOldData-50cf814bc94e4981dc6c1b33] cursors: 69090894570165
m30001| Mon Dec 17 15:32:16.799 [conn3] CMD: drop test.tmp.mrs.foo_1355776322_1
m30001| Mon Dec 17 15:32:16.800 [conn3] CMD: 
drop test.tmp.mr.foo_2 m30001| Mon Dec 17 15:32:16.800 [conn3] request split points lookup for chunk test.tmp.mrs.foo_1355776322_1 { : MinKey } -->> { : MaxKey } m30001| Mon Dec 17 15:32:16.846 [conn3] CMD: drop test.tmp.mr.foo_2 m30001| Mon Dec 17 15:32:16.847 [conn3] CMD: drop test.tmp.mr.foo_2_inc m30001| Mon Dec 17 15:32:16.964 [conn3] command test.$cmd command: { drop: "tmp.mr.foo_2_inc" } ntoreturn:1 keyUpdates:0 locks(micros) w:117589 reslen:128 117ms m30001| Mon Dec 17 15:32:16.964 [conn3] command test.$cmd command: { mapreduce: "foo", map: function map2() { emit(this._id, {count: 1, y: this.y}); }, reduce: function reduce2(key, values) { return values[0]; }, out: "tmp.mrs.foo_1355776322_1", shardedFirstPass: true, splitInfo: 1048576 } ntoreturn:1 keyUpdates:0 numYields: 60580 locks(micros) W:1700 r:9136063 w:6918079 reslen:3537 14010ms m30999| Mon Dec 17 15:32:16.965 [conn1] MR with sharded output, NS=test.mrShardedOut m30999| Mon Dec 17 15:32:16.965 [conn1] created new distributed lock for test.mrShardedOut on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m30999| Mon Dec 17 15:32:16.965 [conn1] trying to acquire new distributed lock for test.mrShardedOut on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:32:16.965 [conn1] about to acquire distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:32:16 2012" }, m30999| "why" : "mr-post-process", m30999| "ts" : { "$oid" : "50cf81505ec0810ee359b578" } } m30999| { "_id" : "test.mrShardedOut", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf8150c94e4981dc6c1b40" } } m30999| Mon Dec 17 15:32:16.966 [conn1] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81505ec0810ee359b578 m30999| Mon Dec 17 15:32:16.966 [conn1] setShardVersion shard0000 localhost:30000 test.mrShardedOut { setShardVersion: "test.mrShardedOut", configdb: "localhost:30000", version: Timestamp 11000|0, versionEpoch: ObjectId('50cf81365ec0810ee359b56b'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0000", shardHost: "localhost:30000" } 0x91767f8 65 m30999| Mon Dec 17 15:32:16.967 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.mrShardedOut", need_authoritative: true, errmsg: "first time for collection 'test.mrShardedOut'", ok: 0.0 } m30999| Mon Dec 17 15:32:16.968 [conn1] setShardVersion shard0000 localhost:30000 test.mrShardedOut { setShardVersion: "test.mrShardedOut", configdb: "localhost:30000", version: Timestamp 11000|0, versionEpoch: ObjectId('50cf81365ec0810ee359b56b'), serverID: ObjectId('50cf812c5ec0810ee359b567'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x91767f8 65 m30000| Mon Dec 17 15:32:16.968 [conn6] no current chunk manager found for this shard, will initialize m30999| Mon Dec 17 15:32:16.969 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Mon Dec 17 15:32:16.969 [conn1] setShardVersion shard0001 localhost:30001 test.mrShardedOut { setShardVersion: "test.mrShardedOut", configdb: "localhost:30000", version: Timestamp 11000|1, versionEpoch: ObjectId('50cf81365ec0810ee359b56b'), serverID: ObjectId('50cf812c5ec0810ee359b567'), shard: "shard0001", shardHost: "localhost:30001" } 0x9176ff0 65 m30000| Mon Dec 17 15:32:16.970 [conn6] CMD: drop test.tmp.mr.foo_1 m30000| Mon Dec 17 15:32:16.970 [conn6] build index test.tmp.mr.foo_1 { _id: 1 } m30999| Mon Dec 17 15:32:16.971 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|64, oldVersionEpoch: ObjectId('50cf81365ec0810ee359b56b'), ok: 1.0 } m30001| Mon Dec 17 15:32:16.971 [conn3] CMD: drop test.tmp.mr.foo_3 m30001| Mon Dec 17 15:32:16.972 [conn3] build index test.tmp.mr.foo_3 { _id: 1 } m30001| Mon Dec 17 15:32:16.973 [cleanupOldData-50cf8150c94e4981dc6c1b3f] waiting to remove documents for test.foo from { a: 75.93300496228039 } -> { a: 89.16067937389016 } m30001| Mon Dec 17 15:32:16.973 [cleanupOldData-50cf8150c94e4981dc6c1b3f] moveChunk starting delete for: test.foo from { a: 75.93300496228039 } -> { a: 89.16067937389016 } m30001| Mon Dec 17 15:32:16.973 [cleanupOldData-50cf814bc94e4981dc6c1b33] waiting to remove documents for test.foo from { a: 51.38014652766287 } -> { a: 62.87552835419774 } m30001| Mon Dec 17 15:32:16.973 [cleanupOldData-50cf8148c94e4981dc6c1b30] waiting to remove documents for test.foo from { a: 40.64535931684077 } -> { a: 51.38014652766287 } m30001| Mon Dec 17 15:32:16.973 [cleanupOldData-50cf8143c94e4981dc6c1b1f] waiting to remove documents for test.foo from { a: 0.3993422724306583 } -> { a: 10.46284288167953 } m30001| Mon Dec 17 15:32:16.979 [cleanupOldData-50cf8145c94e4981dc6c1b27] waiting to remove documents for test.foo from { a: 21.16596954874694 } -> { a: 40.64535931684077 } m30001| Mon Dec 17 15:32:16.979 [cleanupOldData-50cf8144c94e4981dc6c1b23] waiting to remove documents for test.foo from { a: 10.46284288167953 } -> { a: 21.16596954874694 } m30001| Mon Dec 17 15:32:16.979 [cleanupOldData-50cf814dc94e4981dc6c1b3c] waiting to remove documents for test.foo from { a: 62.87552835419774 } -> { a: 75.93300496228039 } m30999| Mon Dec 17 15:32:17.139 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:32:17.139 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:32:17.140 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : 
"domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:32:17 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf81515ec0810ee359b579" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf814e5ec0810ee359b577" } } m30999| Mon Dec 17 15:32:17.140 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81515ec0810ee359b579 m30999| Mon Dec 17 15:32:17.140 [Balancer] *** start balancing round m30001| Mon Dec 17 15:32:17.465 [conn3] build index done. scanned 0 total records. 0.493 secs m30000| Mon Dec 17 15:32:17.466 [conn6] build index done. scanned 0 total records. 0.495 secs m30000| Mon Dec 17 15:32:17.466 [conn9] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:38 reslen:2124 325ms m30000| Mon Dec 17 15:32:17.467 [initandlisten] connection accepted from 127.0.0.1:39895 #15 (15 connections now open) m30000| Mon Dec 17 15:32:17.468 [initandlisten] connection accepted from 127.0.0.1:39896 #16 (16 connections now open) m30000| Mon Dec 17 15:32:17.469 [conn6] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 2 version: 9|1||50cf812d5ec0810ee359b569 based on: (empty) m30000| Mon Dec 17 15:32:17.471 [conn6] ChunkManager: time to load chunks for test.mrShardedOut: 1ms sequenceNumber: 3 version: 11|1||50cf81365ec0810ee359b56b based on: (empty) m30000| Mon Dec 17 15:32:17.471 [initandlisten] connection accepted from 127.0.0.1:39897 #17 (17 connections now open) m30001| Mon Dec 17 15:32:17.472 [initandlisten] connection accepted from 127.0.0.1:42570 #9 (9 connections now open) m30001| Mon Dec 17 15:32:17.827 [conn8] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:32 reslen:1949 360ms m30999| Mon Dec 17 15:32:17.833 [Balancer] shard0001 has more chunks me:37 best: shard0000:8 m30999| Mon Dec 17 15:32:17.833 [Balancer] collection : test.foo m30999| Mon Dec 17 15:32:17.833 [Balancer] donor : shard0001 chunks on 37 m30999| Mon Dec 17 15:32:17.833 [Balancer] receiver : shard0000 chunks on 8 m30999| Mon Dec 17 15:32:17.833 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:17.833 [Balancer] ns: test.foo going to move { _id: "test.foo-a_89.16067937389016", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 89.16067937389016 }, max: { a: 119.0328269731253 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:17.834 [Balancer] shard0001 has more chunks me:55 best: shard0000:10 m30999| Mon Dec 17 15:32:17.834 [Balancer] collection : test.mrShardedOut m30999| Mon Dec 17 15:32:17.834 [Balancer] donor : shard0001 chunks on 55 m30999| Mon Dec 17 15:32:17.834 [Balancer] receiver : shard0000 chunks on 10 m30999| Mon Dec 17 15:32:17.834 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:17.834 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:17.834 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: 
shard0001:localhost:30001lastmod: 9|1||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 119.0328269731253 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:17.842 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 89.16067937389016 }, max: { a: 119.0328269731253 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_89.16067937389016", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:17.926 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8151c94e4981dc6c1b42 m30001| Mon Dec 17 15:32:17.926 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:17-107", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776337926), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 89.16067937389016 }, max: { a: 119.0328269731253 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:17.928 [conn8] moveChunk request accepted at version 9|1||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:17.936 [conn8] can't move chunk of size (approximately) 2003904 because maximum size allowed to move is 1048576 ns: test.foo { a: 89.16067937389016 } -> { a: 119.0328269731253 } m30001| Mon Dec 17 15:32:17.936 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:17.939 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:17.939 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:32:17.940 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:17-108", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776337940), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 89.16067937389016 }, max: { a: 119.0328269731253 }, step1 of 6: 8, step2 of 6: 85, note: "aborted" } } m30001| Mon Dec 17 15:32:17.940 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 89.16067937389016 }, max: { a: 119.0328269731253 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_89.16067937389016", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 10 locks(micros) W:110 r:3872 reslen:109 105ms m30999| Mon Dec 17 15:32:17.940 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 2003904, errmsg: "chunk too big to move", ok: 0.0 } m30999| Mon Dec 17 15:32:17.940 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 2003904, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { a: 89.16067937389016 } max: { a: 89.16067937389016 } m30999| Mon Dec 17 15:32:17.940 [Balancer] forcing a split because migrate failed for size reasons m30001| Mon Dec 17 15:32:17.967 [conn8] request split points lookup for chunk test.foo { : 89.16067937389016 } -->> { : 119.0328269731253 } m30001| Mon Dec 17 15:32:17.979 [conn8] splitVector doing another cycle because of force, keyCount now: 882 m30001| Mon Dec 17 15:32:17.981 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 89.16067937389016 }, max: { a: 119.0328269731253 
}, from: "shard0001", splitKeys: [ { a: 104.1863551363349 } ], shardId: "test.foo-a_89.16067937389016", configdb: "localhost:30000" } m30001| Mon Dec 17 15:32:17.983 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8151c94e4981dc6c1b43 m30001| Mon Dec 17 15:32:17.984 [conn8] splitChunk accepted at version 9|1||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:17.985 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:17-109", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776337985), what: "split", ns: "test.foo", details: { before: { min: { a: 89.16067937389016 }, max: { a: 119.0328269731253 }, lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, lastmod: Timestamp 9000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, lastmod: Timestamp 9000|3, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:32:17.985 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:32:17.992 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:17.993 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:17-110", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776337993), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, step1 of 6: 0, note: "aborted" } } m30999| Mon Dec 17 15:32:17.992 [Balancer] ChunkManager: time to load chunks for test.foo: 6ms sequenceNumber: 66 version: 9|3||50cf812d5ec0810ee359b569 based on: 9|1||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:17.992 [Balancer] forced split results: { ok: 1.0 } m30999| Mon Dec 17 15:32:17.992 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab59ba') }max: { _id: ObjectId('50cf812d256383d556ab5b88') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30999| Mon Dec 17 15:32:17.993 [Balancer] moveChunk result: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } m30999| Mon Dec 17 15:32:17.994 [Balancer] balancer move failed: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: 
"mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { _id: ObjectId('50cf812d256383d556ab59ba') } max: { _id: ObjectId('50cf812d256383d556ab59ba') } m30999| Mon Dec 17 15:32:17.994 [Balancer] *** end of balancing round m30999| Mon Dec 17 15:32:18.034 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. m30001| Mon Dec 17 15:32:18.067 [cleanupOldData-50cf8150c94e4981dc6c1b3f] moveChunk deleted 764 documents for test.foo from { a: 75.93300496228039 } -> { a: 89.16067937389016 } m30001| Mon Dec 17 15:32:18.067 [cleanupOldData-50cf814bc94e4981dc6c1b33] moveChunk starting delete for: test.foo from { a: 51.38014652766287 } -> { a: 62.87552835419774 } m30000| Mon Dec 17 15:32:20.698 [conn6] CMD: drop test.mrShardedOut m30000| Mon Dec 17 15:32:22.313 [conn6] command test.$cmd command: { drop: "mrShardedOut" } ntoreturn:1 keyUpdates:0 reslen:124 1615ms m30000| Mon Dec 17 15:32:22.315 [conn6] CMD: drop test.tmp.mr.foo_1 m30000| Mon Dec 17 15:32:22.315 [conn6] CMD: drop test.tmp.mr.foo_1 m30000| Mon Dec 17 15:32:22.315 [conn6] CMD: drop test.tmp.mr.foo_1 m30000| Mon Dec 17 15:32:22.316 [conn6] command test.$cmd command: { mapreduce.shardedfinish: { mapreduce: "foo", map: function map2() { emit(this._id, {count: 1, y: this.y}); }, reduce: function reduce2(key, values) { return values[0]; }, out: { replace: "mrShardedOut", sharded: true } }, inputDB: "test", shardedOutputCollection: "tmp.mrs.foo_1355776322_1", shards: { localhost:30000: { result: "tmp.mrs.foo_1355776322_1", splitKeys: {}, timeMillis: 44, counts: { input: 20, emit: 20, reduce: 0, output: 20 }, ok: 1.0 }, localhost:30001: { result: "tmp.mrs.foo_1355776322_1", splitKeys: [ { _id: ObjectId('50cf812d256383d556ab497c') }, { _id: ObjectId('50cf812d256383d556ab4b4a') }, { _id: ObjectId('50cf812d256383d556ab4d18') }, { _id: ObjectId('50cf812d256383d556ab4ee6') }, { _id: ObjectId('50cf812d256383d556ab50b4') }, { _id: ObjectId('50cf812d256383d556ab5282') }, { _id: ObjectId('50cf812d256383d556ab5450') }, { _id: ObjectId('50cf812d256383d556ab561e') }, { _id: ObjectId('50cf812d256383d556ab57ec') }, { _id: ObjectId('50cf812d256383d556ab59ba') }, { _id: ObjectId('50cf812d256383d556ab5b88') }, { _id: ObjectId('50cf812d256383d556ab5d56') }, { _id: ObjectId('50cf812d256383d556ab5f24') }, { _id: ObjectId('50cf812d256383d556ab60f3') }, { _id: ObjectId('50cf812d256383d556ab62c1') }, { _id: ObjectId('50cf812e256383d556ab6490') }, { _id: ObjectId('50cf812e256383d556ab665e') }, { _id: ObjectId('50cf812e256383d556ab682d') }, { _id: ObjectId('50cf812e256383d556ab69fb') }, { _id: ObjectId('50cf812e256383d556ab6bca') }, { _id: ObjectId('50cf812e256383d556ab6d98') }, { _id: ObjectId('50cf812e256383d556ab6f66') }, { _id: ObjectId('50cf812e256383d556ab7134') }, { _id: ObjectId('50cf812e256383d556ab7303') }, { _id: ObjectId('50cf812e256383d556ab74d1') }, { _id: ObjectId('50cf812e256383d556ab769f') }, { _id: ObjectId('50cf812e256383d556ab786d') }, { _id: ObjectId('50cf812e256383d556ab7a3b') }, { _id: ObjectId('50cf812e256383d556ab7c09') }, { _id: ObjectId('50cf812e256383d556ab7dd7') }, { _id: ObjectId('50cf812e256383d556ab7fa5') }, { _id: ObjectId('50cf812f256383d556ab8173') }, { _id: ObjectId('50cf812f256383d556ab8341') }, { _id: ObjectId('50cf812f256383d556ab850f') }, { _id: ObjectId('50cf812f256383d556ab86de') }, { _id: ObjectId('50cf812f256383d556ab88ac') }, { _id: 
ObjectId('50cf812f256383d556ab8a7a') }, { _id: ObjectId('50cf812f256383d556ab8c48') }, { _id: ObjectId('50cf812f256383d556ab8e16') }, { _id: ObjectId('50cf812f256383d556ab8fe4') }, { _id: ObjectId('50cf812f256383d556ab91b2') }, { _id: ObjectId('50cf812f256383d556ab9381') }, { _id: ObjectId('50cf812f256383d556ab954f') }, { _id: ObjectId('50cf812f256383d556ab971e') }, { _id: ObjectId('50cf812f256383d556ab98ec') }, { _id: ObjectId('50cf812f256383d556ab9abb') }, { _id: ObjectId('50cf812f256383d556ab9c89') }, { _id: ObjectId('50cf812f256383d556ab9e57') }, { _id: ObjectId('50cf812f256383d556aba026') }, { _id: ObjectId('50cf8130256383d556aba1f4') }, { _id: ObjectId('50cf8130256383d556aba3c2') }, { _id: ObjectId('50cf8130256383d556aba590') }, { _id: ObjectId('50cf8130256383d556aba75e') }, { _id: ObjectId('50cf8130256383d556aba92c') }, { _id: ObjectId('50cf8130256383d556abaafa') }, { _id: ObjectId('50cf8130256383d556abacc8') }, { _id: ObjectId('50cf8130256383d556abae96') }, { _id: ObjectId('50cf8130256383d556abb064') }, { _id: ObjectId('50cf8130256383d556abb233') }, { _id: ObjectId('50cf8130256383d556abb401') }, { _id: ObjectId('50cf8130256383d556abb5cf') }, { _id: ObjectId('50cf8130256383d556abb79d') }, { _id: ObjectId('50cf8130256383d556abb96b') }, { _id: ObjectId('50cf8130256383d556abbb39') }, { _id: ObjectId('50cf813e256383d556abbd09') }, { _id: ObjectId('50cf813e256383d556abbed7') }, { _id: ObjectId('50cf813e256383d556abc0a5') }, { _id: ObjectId('50cf813e256383d556abc273') }, { _id: ObjectId('50cf813e256383d556abc441') }, { _id: ObjectId('50cf813e256383d556abc60f') }, { _id: ObjectId('50cf813e256383d556abc7dd') }, { _id: ObjectId('50cf813e256383d556abc9ab') }, { _id: ObjectId('50cf813e256383d556abcb79') }, { _id: ObjectId('50cf813e256383d556abcd47') }, { _id: ObjectId('50cf813e256383d556abcf15') }, { _id: ObjectId('50cf813e256383d556abd0e3') }, { _id: ObjectId('50cf813e256383d556abd2b1') }, { _id: ObjectId('50cf813e256383d556abd47f') }, { _id: ObjectId('50cf813e256383d556abd64d') }, { _id: ObjectId('50cf813e256383d556abd81b') }, { _id: ObjectId('50cf813e256383d556abd9e9') }, { _id: ObjectId('50cf813e256383d556abdbb8') }, { _id: ObjectId('50cf813e256383d556abdd86') }, { _id: ObjectId('50cf813e256383d556abdf55') }, { _id: ObjectId('50cf813e256383d556abe124') }, { _id: ObjectId('50cf813e256383d556abe2f2') }, { _id: ObjectId('50cf813e256383d556abe4c1') }, { _id: ObjectId('50cf813e256383d556abe68f') }, { _id: ObjectId('50cf813e256383d556abe85d') }, { _id: ObjectId('50cf813e256383d556abea2b') }, { _id: ObjectId('50cf813f256383d556abebf9') }, { _id: ObjectId('50cf813f256383d556abedc7') }, { _id: ObjectId('50cf813f256383d556abef95') }, { _id: ObjectId('50cf813f256383d556abf164') }, { _id: ObjectId('50cf813f256383d556abf332') }, { _id: ObjectId('50cf813f256383d556abf501') }, { _id: ObjectId('50cf813f256383d556abf6cf') }, { _id: ObjectId('50cf813f256383d556abf89d') }, { _id: ObjectId('50cf813f256383d556abfa6b') }, { _id: ObjectId('50cf813f256383d556abfc39') }, { _id: ObjectId('50cf813f256383d556abfe07') }, { _id: ObjectId('50cf813f256383d556abffd5') }, { _id: ObjectId('50cf813f256383d556ac01a3') }, { _id: ObjectId('50cf813f256383d556ac0371') }, { _id: ObjectId('50cf813f256383d556ac053f') }, { _id: ObjectId('50cf813f256383d556ac070d') }, { _id: ObjectId('50cf813f256383d556ac08db') }, { _id: ObjectId('50cf813f256383d556ac0aa9') }, { _id: ObjectId('50cf813f256383d556ac0c77') }, { _id: ObjectId('50cf813f256383d556ac0e45') }, { _id: ObjectId('50cf813f256383d556ac1013') }, { _id: 
ObjectId('50cf813f256383d556ac11e1') }, { _id: ObjectId('50cf813f256383d556ac13af') }, { _id: ObjectId('50cf813f256383d556ac157d') }, { _id: ObjectId('50cf8140256383d556ac174b') }, { _id: ObjectId('50cf8140256383d556ac1919') }, { _id: ObjectId('50cf8140256383d556ac1ae8') }, { _id: ObjectId('50cf8140256383d556ac1cb6') }, { _id: ObjectId('50cf8140256383d556ac1e84') }, { _id: ObjectId('50cf8140256383d556ac2052') }, { _id: ObjectId('50cf8140256383d556ac2220') }, { _id: ObjectId('50cf8140256383d556ac23ee') }, { _id: ObjectId('50cf8140256383d556ac25bc') }, { _id: ObjectId('50cf8140256383d556ac278a') }, { _id: ObjectId('50cf8140256383d556ac2958') }, { _id: ObjectId('50cf8140256383d556ac2b26') }, { _id: ObjectId('50cf8141256383d556ac2cf4') }, { _id: ObjectId('50cf8141256383d556ac2ec2') }, { _id: ObjectId('50cf8141256383d556ac3090') } ], timeMillis: 13892, counts: { input: 59980, emit: 59980, reduce: 0, output: 59980 }, ok: 1.0 } }, shardCounts: { localhost:30000: { input: 20, emit: 20, reduce: 0, output: 20 }, localhost:30001: { input: 59980, emit: 59980, reduce: 0, output: 59980 } }, counts: { emit: 60000, input: 60000, output: 60000, reduce: 0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:1616319 w:2485085 reslen:482 5345ms m30001| Mon Dec 17 15:32:23.940 [cleanupOldData-50cf814bc94e4981dc6c1b33] moveChunk deleted 698 documents for test.foo from { a: 51.38014652766287 } -> { a: 62.87552835419774 } m30001| Mon Dec 17 15:32:23.940 [cleanupOldData-50cf8148c94e4981dc6c1b30] moveChunk starting delete for: test.foo from { a: 40.64535931684077 } -> { a: 51.38014652766287 } m30999| Mon Dec 17 15:32:24.036 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:32:24.036 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:32:24.036 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:32:24 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf81585ec0810ee359b57a" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf81515ec0810ee359b579" } } m30999| Mon Dec 17 15:32:24.037 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81585ec0810ee359b57a m30999| Mon Dec 17 15:32:24.037 [Balancer] *** start balancing round m30001| Mon Dec 17 15:32:24.882 [conn8] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:51 reslen:1949 844ms m30999| Mon Dec 17 15:32:24.883 [Balancer] shard0001 has more chunks me:38 best: shard0000:8 m30999| Mon Dec 17 15:32:24.883 [Balancer] collection : test.foo m30999| Mon Dec 17 15:32:24.883 [Balancer] donor : shard0001 chunks on 38 m30999| Mon Dec 17 15:32:24.883 [Balancer] receiver : shard0000 chunks on 8 m30999| Mon Dec 17 15:32:24.883 [Balancer] threshold : 4 m30999| Mon Dec 17 15:32:24.883 [Balancer] ns: test.foo going to move { _id: "test.foo-a_89.16067937389016", lastmod: Timestamp 9000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 
17 15:32:24.883 [Balancer] shard0001 has more chunks me:55 best: shard0000:10 m30999| Mon Dec 17 15:32:24.883 [Balancer] collection : test.mrShardedOut m30999| Mon Dec 17 15:32:24.883 [Balancer] donor : shard0001 chunks on 55 m30999| Mon Dec 17 15:32:24.883 [Balancer] receiver : shard0000 chunks on 10 m30999| Mon Dec 17 15:32:24.883 [Balancer] threshold : 4 m30999| Mon Dec 17 15:32:24.883 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:24.884 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 9|2||000000000000000000000000min: { a: 89.16067937389016 }max: { a: 104.1863551363349 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:24.884 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_89.16067937389016", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:24.885 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8158c94e4981dc6c1b44 m30001| Mon Dec 17 15:32:24.885 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:24-111", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776344885), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:24.885 [conn8] moveChunk request accepted at version 9|3||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:24.886 [conn8] moveChunk number of documents: 882 m30001| Mon Dec 17 15:32:24.893 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:24.903 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:24.913 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 172, clonedBytes: 185072, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:24.933 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 576, clonedBytes: 619776, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Mon Dec 17 15:32:24.953 [migrateThread] Waiting for replication to 
catch up before entering critical section m30000| Mon Dec 17 15:32:24.953 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 89.16067937389016 } -> { a: 104.1863551363349 } m30001| Mon Dec 17 15:32:24.955 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 882, clonedBytes: 949032, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:24.955 [conn8] moveChunk setting version to: 10|0||50cf812d5ec0810ee359b569 m30000| Mon Dec 17 15:32:24.955 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:24.961 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:24.963 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:24.967 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:24.971 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:24.973 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 89.16067937389016 } -> { a: 104.1863551363349 } m30000| Mon Dec 17 15:32:24.973 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:24-18", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776344973), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, step1 of 5: 4, step2 of 5: 0, step3 of 5: 61, step4 of 5: 0, step5 of 5: 20 } } m30001| Mon Dec 17 15:32:24.975 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 882, clonedBytes: 949032, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Mon Dec 17 15:32:24.975 [conn8] moveChunk updating self version to: 10|1||50cf812d5ec0810ee359b569 through { a: 104.1863551363349 } -> { a: 119.0328269731253 } for collection 'test.foo' m30001| Mon Dec 17 15:32:24.976 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:24-112", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776344976), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:24.976 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:24.976 [conn8] MigrateFromStatus::done Global lock acquired m30999| Mon Dec 17 15:32:24.983 [Balancer] moveChunk result: { ok: 1.0 } m30999| Mon Dec 17 15:32:24.984 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 67 version: 10|1||50cf812d5ec0810ee359b569 based on: 9|3||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:24.984 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab59ba') }max: { _id: ObjectId('50cf812d256383d556ab5b88') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30999| Mon Dec 17 15:32:24.986 [Balancer] moveChunk result: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: 
"domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } m30999| Mon Dec 17 15:32:24.986 [Balancer] balancer move failed: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { _id: ObjectId('50cf812d256383d556ab59ba') } max: { _id: ObjectId('50cf812d256383d556ab59ba') } m30999| Mon Dec 17 15:32:24.986 [Balancer] *** end of balancing round m30999| Mon Dec 17 15:32:24.986 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. m30001| Mon Dec 17 15:32:24.979 [conn8] forking for cleanup of chunk data m30001| Mon Dec 17 15:32:24.979 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:24.979 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:24.983 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:32:24.983 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:24-113", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776344983), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 89.16067937389016 }, max: { a: 104.1863551363349 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 68, step5 of 6: 24, step6 of 6: 0 } } m30001| Mon Dec 17 15:32:24.983 [cleanupOldData-50cf8158c94e4981dc6c1b45] (start) waiting to cleanup test.foo from { a: 89.16067937389016 } -> { a: 104.1863551363349 }, # cursors remaining: 0 m30001| Mon Dec 17 15:32:24.985 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:24.985 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:24-114", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776344985), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, step1 of 6: 0, note: "aborted" } } m30001| Mon Dec 17 15:32:25.007 [cleanupOldData-50cf8158c94e4981dc6c1b45] waiting to remove documents for test.foo from { a: 89.16067937389016 } -> { a: 104.1863551363349 } m30999| Mon Dec 17 15:32:25.988 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:32:25.988 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:32:25.988 [Balancer] about to acquire distributed lock 
'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:32:25 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf81595ec0810ee359b57b" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf81585ec0810ee359b57a" } } m30999| Mon Dec 17 15:32:25.989 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81595ec0810ee359b57b m30999| Mon Dec 17 15:32:25.989 [Balancer] *** start balancing round m30001| Mon Dec 17 15:32:27.391 [conn8] serverStatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after cursors: 0, after extra_info: 0, after globalLock: 0, after indexCounters: 0, after locks: 0, after network: 0, after opcounters: 0, after opcountersRepl: 0, after recordStats: 1170, at end: 1170 } m30001| Mon Dec 17 15:32:27.391 [conn8] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:39 reslen:2258 1401ms m30001| Mon Dec 17 15:32:27.393 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_104.1863551363349", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:27.394 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf815bc94e4981dc6c1b46 m30001| Mon Dec 17 15:32:27.394 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:27-115", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776347394), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:27.395 [conn8] moveChunk request accepted at version 10|1||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:27.392 [Balancer] shard0001 has more chunks me:37 best: shard0000:9 m30999| Mon Dec 17 15:32:27.392 [Balancer] collection : test.foo m30999| Mon Dec 17 15:32:27.392 [Balancer] donor : shard0001 chunks on 37 m30999| Mon Dec 17 15:32:27.392 [Balancer] receiver : shard0000 chunks on 9 m30999| Mon Dec 17 15:32:27.392 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:27.392 [Balancer] ns: test.foo going to move { _id: "test.foo-a_104.1863551363349", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:27.392 [Balancer] shard0001 has more chunks me:55 best: shard0000:10 m30999| Mon Dec 17 15:32:27.392 [Balancer] collection : test.mrShardedOut m30999| Mon Dec 17 15:32:27.392 [Balancer] donor : shard0001 chunks on 55 m30999| Mon Dec 17 15:32:27.392 [Balancer] receiver : shard0000 chunks on 10 m30999| Mon Dec 17 15:32:27.392 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:27.392 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", lastmod: Timestamp 11000|1, lastmodEpoch: 
ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:27.392 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 10|1||000000000000000000000000min: { a: 104.1863551363349 }max: { a: 119.0328269731253 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:27.396 [conn8] moveChunk number of documents: 882 m30001| Mon Dec 17 15:32:27.402 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, shardKeyPattern: { a: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:27.412 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:27.461 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 32, clonedBytes: 34432, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:27.471 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 461, clonedBytes: 496036, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:27.475 [cleanupOldData-50cf8148c94e4981dc6c1b30] moveChunk deleted 697 documents for test.foo from { a: 40.64535931684077 } -> { a: 51.38014652766287 } m30001| Mon Dec 17 15:32:27.475 [cleanupOldData-50cf8143c94e4981dc6c1b1f] moveChunk starting delete for: test.foo from { a: 0.3993422724306583 } -> { a: 10.46284288167953 } m30001| Mon Dec 17 15:32:27.491 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 815, clonedBytes: 876940, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Mon Dec 17 15:32:27.495 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Mon Dec 17 15:32:27.496 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 104.1863551363349 } -> { a: 119.0328269731253 } m30001| Mon Dec 17 15:32:27.534 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 882, clonedBytes: 949032, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:27.534 [conn8] moveChunk setting version to: 11|0||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:27.541 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 
882, clonedBytes: 949032, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Mon Dec 17 15:32:27.541 [conn8] moveChunk updating self version to: 11|1||50cf812d5ec0810ee359b569 through { a: 119.0328269731253 } -> { a: 152.16144034639 } for collection 'test.foo' m30001| Mon Dec 17 15:32:27.542 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:27-116", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776347542), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:27.542 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:27.542 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:27.542 [conn8] forking for cleanup of chunk data m30001| Mon Dec 17 15:32:27.542 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:27.542 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:27.542 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:32:27.542 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:27-117", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776347542), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 137, step5 of 6: 8, step6 of 6: 0 } } m30001| Mon Dec 17 15:32:27.542 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_104.1863551363349", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 6 locks(micros) W:22 r:2290 w:46 reslen:37 149ms m30001| Mon Dec 17 15:32:27.544 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:27.545 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:27-118", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776347545), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, step1 of 6: 0, note: "aborted" } } m30999| Mon Dec 17 15:32:27.543 [Balancer] moveChunk result: { ok: 1.0 } m30999| Mon Dec 17 15:32:27.544 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 68 version: 11|1||50cf812d5ec0810ee359b569 based on: 10|1||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:27.544 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab59ba') 
}max: { _id: ObjectId('50cf812d256383d556ab5b88') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30000| Mon Dec 17 15:32:27.534 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:27.541 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 104.1863551363349 } -> { a: 119.0328269731253 } m30000| Mon Dec 17 15:32:27.541 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:27-19", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776347541), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 104.1863551363349 }, max: { a: 119.0328269731253 }, step1 of 5: 5, step2 of 5: 0, step3 of 5: 93, step4 of 5: 0, step5 of 5: 45 } } m30001| Mon Dec 17 15:32:27.548 [cleanupOldData-50cf815bc94e4981dc6c1b47] (start) waiting to cleanup test.foo from { a: 104.1863551363349 } -> { a: 119.0328269731253 }, # cursors remaining: 0 m30999| Mon Dec 17 15:32:27.548 [Balancer] moveChunk result: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } m30999| Mon Dec 17 15:32:27.548 [Balancer] balancer move failed: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { _id: ObjectId('50cf812d256383d556ab59ba') } max: { _id: ObjectId('50cf812d256383d556ab59ba') } m30999| Mon Dec 17 15:32:27.548 [Balancer] *** end of balancing round m30999| Mon Dec 17 15:32:27.548 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. 
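Every attempted move of test.mrShardedOut in these rounds aborts the same way: conn1 still holds the distributed lock on the collection (state: 2, why: "mr-post-process" in the moveChunk result), because the sharded-output map/reduce is still post-processing into test.mrShardedOut and the balancer will not migrate underneath it. Reconstructed from the command documents earlier in the log, the originating client call would look roughly like this (a sketch, not quoted from the test source):

// map/reduce bodies copied from the command documents in the log
function map2() { emit(this._id, { count: 1, y: this.y }); }
function reduce2(key, values) { return values[0]; }

// Issued against the mongos; "sharded: true" makes mongos shard the output
// collection and hold the "mr-post-process" lock on it while it finalizes.
var res = db.getSiblingDB("test").runCommand({
    mapreduce: "foo",
    map: map2,
    reduce: reduce2,
    out: { replace: "mrShardedOut", sharded: true }
});
printjson(res.counts);  // this run: { emit: 60000, input: 60000, output: 60000, reduce: 0 }

While the job runs, the holder is visible in config.locks under _id "test.mrShardedOut"; the balancer simply retries the move each round until the lock is released.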
m30001| Mon Dec 17 15:32:27.572 [cleanupOldData-50cf815bc94e4981dc6c1b47] waiting to remove documents for test.foo from { a: 104.1863551363349 } -> { a: 119.0328269731253 } m30001| Mon Dec 17 15:32:27.584 [cleanupOldData-50cf8143c94e4981dc6c1b1f] moveChunk deleted 605 documents for test.foo from { a: 0.3993422724306583 } -> { a: 10.46284288167953 } m30001| Mon Dec 17 15:32:27.602 [cleanupOldData-50cf8145c94e4981dc6c1b27] moveChunk starting delete for: test.foo from { a: 21.16596954874694 } -> { a: 40.64535931684077 } m30999| Mon Dec 17 15:32:28.552 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:32:28.552 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:32:28.552 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:32:28 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf815c5ec0810ee359b57c" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf81595ec0810ee359b57b" } } m30999| Mon Dec 17 15:32:28.553 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf815c5ec0810ee359b57c m30999| Mon Dec 17 15:32:28.553 [Balancer] *** start balancing round m30999| Mon Dec 17 15:32:28.611 [Balancer] shard0001 has more chunks me:36 best: shard0000:10 m30999| Mon Dec 17 15:32:28.611 [Balancer] collection : test.foo m30999| Mon Dec 17 15:32:28.611 [Balancer] donor : shard0001 chunks on 36 m30999| Mon Dec 17 15:32:28.611 [Balancer] receiver : shard0000 chunks on 10 m30999| Mon Dec 17 15:32:28.611 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:28.611 [Balancer] ns: test.foo going to move { _id: "test.foo-a_119.0328269731253", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 119.0328269731253 }, max: { a: 152.16144034639 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:28.612 [Balancer] shard0001 has more chunks me:55 best: shard0000:10 m30999| Mon Dec 17 15:32:28.612 [Balancer] collection : test.mrShardedOut m30999| Mon Dec 17 15:32:28.612 [Balancer] donor : shard0001 chunks on 55 m30999| Mon Dec 17 15:32:28.612 [Balancer] receiver : shard0000 chunks on 10 m30999| Mon Dec 17 15:32:28.612 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:28.612 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:28.612 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { a: 119.0328269731253 }max: { a: 152.16144034639 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:28.612 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: 
"shard0001", toShard: "shard0000", min: { a: 119.0328269731253 }, max: { a: 152.16144034639 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_119.0328269731253", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:28.613 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf815cc94e4981dc6c1b48 m30001| Mon Dec 17 15:32:28.613 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:28-119", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776348613), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 119.0328269731253 }, max: { a: 152.16144034639 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:28.614 [conn8] moveChunk request accepted at version 11|1||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:28.616 [conn8] can't move chunk of size (approximately) 2357200 because maximum size allowed to move is 1048576 ns: test.foo { a: 119.0328269731253 } -> { a: 152.16144034639 } m30001| Mon Dec 17 15:32:28.616 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:28.616 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:28.617 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:32:28.617 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:28-120", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776348617), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 119.0328269731253 }, max: { a: 152.16144034639 }, step1 of 6: 0, step2 of 6: 1, note: "aborted" } } m30001| Mon Dec 17 15:32:28.619 [conn8] request split points lookup for chunk test.foo { : 119.0328269731253 } -->> { : 152.16144034639 } m30001| Mon Dec 17 15:32:28.621 [conn8] splitVector doing another cycle because of force, keyCount now: 1037 m30001| Mon Dec 17 15:32:28.623 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 119.0328269731253 }, max: { a: 152.16144034639 }, from: "shard0001", splitKeys: [ { a: 135.4934894479811 } ], shardId: "test.foo-a_119.0328269731253", configdb: "localhost:30000" } m30999| Mon Dec 17 15:32:28.617 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 2357200, errmsg: "chunk too big to move", ok: 0.0 } m30999| Mon Dec 17 15:32:28.617 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 2357200, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { a: 119.0328269731253 } max: { a: 119.0328269731253 } m30999| Mon Dec 17 15:32:28.617 [Balancer] forcing a split because migrate failed for size reasons m30001| Mon Dec 17 15:32:28.624 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf815cc94e4981dc6c1b49 m30001| Mon Dec 17 15:32:28.625 [conn8] splitChunk accepted at version 11|1||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:28.625 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:28-121", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776348625), what: "split", ns: "test.foo", details: { before: { min: { a: 119.0328269731253 }, max: { a: 152.16144034639 }, lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { 
min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, lastmod: Timestamp 11000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, lastmod: Timestamp 11000|3, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:32:28.626 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:32:28.627 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:28.628 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:28-122", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776348628), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, step1 of 6: 0, note: "aborted" } } m30999| Mon Dec 17 15:32:28.627 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 69 version: 11|3||50cf812d5ec0810ee359b569 based on: 11|1||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:28.627 [Balancer] forced split results: { ok: 1.0 } m30999| Mon Dec 17 15:32:28.627 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab59ba') }max: { _id: ObjectId('50cf812d256383d556ab5b88') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30999| Mon Dec 17 15:32:28.628 [Balancer] moveChunk result: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } m30999| Mon Dec 17 15:32:28.628 [Balancer] balancer move failed: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { _id: ObjectId('50cf812d256383d556ab59ba') } max: { _id: ObjectId('50cf812d256383d556ab59ba') } m30999| Mon Dec 17 15:32:28.628 [Balancer] *** end of balancing round m30999| Mon Dec 17 15:32:28.628 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. 
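When a migration aborts with chunkTooBig, the balancer "forces a split": the donor shard re-runs splitVector in force mode, which returns the median key of the range, and then commits it with splitChunk — exactly the split of { a: 119.03... } -> { a: 152.16... } at { a: 135.49... } logged above. A rough shell reproduction of the split-point lookup, assuming the donor shard at localhost:30001:

    // Ask the donor shard for the forced split point of the oversized chunk.
    // With force: true, splitVector returns a single median key for the range.
    var shard = new Mongo("localhost:30001").getDB("admin");
    var res = shard.runCommand({
        splitVector: "test.foo",
        keyPattern: { a: 1.0 },
        min: { a: 119.0328269731253 },
        max: { a: 152.16144034639 },
        maxChunkSizeBytes: 1048576,
        force: true
    });
    printjson(res.splitKeys);  // e.g. [ { a: 135.4934894479811 } ], as in the split event above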
m30001| Mon Dec 17 15:32:33.164 [cleanupOldData-50cf8145c94e4981dc6c1b27] moveChunk deleted 1141 documents for test.foo from { a: 21.16596954874694 } -> { a: 40.64535931684077 } m30001| Mon Dec 17 15:32:33.164 [cleanupOldData-50cf8144c94e4981dc6c1b23] moveChunk starting delete for: test.foo from { a: 10.46284288167953 } -> { a: 21.16596954874694 } m30999| Mon Dec 17 15:32:34.632 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:32:34.633 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:32:34.633 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:32:34 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf81625ec0810ee359b57d" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf815c5ec0810ee359b57c" } } m30999| Mon Dec 17 15:32:34.633 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf81625ec0810ee359b57d m30999| Mon Dec 17 15:32:34.633 [Balancer] *** start balancing round m30999| Mon Dec 17 15:32:36.673 [Balancer] shard0001 has more chunks me:37 best: shard0000:10 m30999| Mon Dec 17 15:32:36.673 [Balancer] collection : test.foo m30999| Mon Dec 17 15:32:36.673 [Balancer] donor : shard0001 chunks on 37 m30999| Mon Dec 17 15:32:36.673 [Balancer] receiver : shard0000 chunks on 10 m30999| Mon Dec 17 15:32:36.673 [Balancer] threshold : 4 m30999| Mon Dec 17 15:32:36.673 [Balancer] ns: test.foo going to move { _id: "test.foo-a_119.0328269731253", lastmod: Timestamp 11000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:36.674 [Balancer] shard0001 has more chunks me:55 best: shard0000:10 m30999| Mon Dec 17 15:32:36.674 [Balancer] collection : test.mrShardedOut m30999| Mon Dec 17 15:32:36.674 [Balancer] donor : shard0001 chunks on 55 m30999| Mon Dec 17 15:32:36.674 [Balancer] receiver : shard0000 chunks on 10 m30999| Mon Dec 17 15:32:36.674 [Balancer] threshold : 4 m30999| Mon Dec 17 15:32:36.674 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:36.674 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 11|2||000000000000000000000000min: { a: 119.0328269731253 }max: { a: 135.4934894479811 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:36.672 [conn8] serverStatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after cursors: 0, after extra_info: 1700, after globalLock: 1700, after indexCounters: 1700, after locks: 1700, after network: 1700, after opcounters: 1700, after 
opcountersRepl: 1700, after recordStats: 1700, at end: 1700 } m30001| Mon Dec 17 15:32:36.672 [conn8] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:49 reslen:2258 2038ms m30001| Mon Dec 17 15:32:36.679 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_119.0328269731253", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:36.680 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf8164c94e4981dc6c1b4a m30001| Mon Dec 17 15:32:36.681 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:36-123", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776356681), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:36.681 [conn8] moveChunk request accepted at version 11|3||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:36.683 [conn8] moveChunk number of documents: 1037 m30001| Mon Dec 17 15:32:36.694 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:36.700 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:36.741 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 163, clonedBytes: 175388, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:36.758 [cleanupOldData-50cf8144c94e4981dc6c1b23] moveChunk deleted 606 documents for test.foo from { a: 10.46284288167953 } -> { a: 21.16596954874694 } m30001| Mon Dec 17 15:32:36.759 [cleanupOldData-50cf814dc94e4981dc6c1b3c] moveChunk starting delete for: test.foo from { a: 62.87552835419774 } -> { a: 75.93300496228039 } m30001| Mon Dec 17 15:32:36.760 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 529, clonedBytes: 569204, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:36.784 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 912, clonedBytes: 981312, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:36.820 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: 
"clone", counts: { cloned: 912, clonedBytes: 981312, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:36.888 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 912, clonedBytes: 981312, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:37.020 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 912, clonedBytes: 981312, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:37.280 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 912, clonedBytes: 981312, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:37.796 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 912, clonedBytes: 981312, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:38.824 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 912, clonedBytes: 981312, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:39.085 [conn7] getmore test.tmp.mrs.foo_1355776322_1 query: { query: { _id: { $gte: ObjectId('50cf812f256383d556ab91ac'), $lt: ObjectId('50cf812f256383d556ab937a') } }, orderby: { _id: 1 } } cursorid:183773884158349 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:2704 nreturned:360 reslen:389180 2305ms m30001| Mon Dec 17 15:32:39.852 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 912, clonedBytes: 981312, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30999| Mon Dec 17 15:32:40.068 [LockPinger] creating new connection to:localhost:30000 m30999| Mon Dec 17 15:32:40.068 BackgroundJob starting: ConnectBG m30999| Mon Dec 17 15:32:40.068 [LockPinger] connected connection! 
m30000| Mon Dec 17 15:32:40.068 [initandlisten] connection accepted from 127.0.0.1:39911 #18 (18 connections now open) m30001| Mon Dec 17 15:32:40.880 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 912, clonedBytes: 981312, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Mon Dec 17 15:32:41.792 [conn4] update config.foo.bar update: { x: 1 } nscanned:0 nupdated:0 keyUpdates:0 locks(micros) w:2316089 2316ms m30999| Mon Dec 17 15:32:41.796 [LockPinger] cluster localhost:30000 pinged successfully at Mon Dec 17 15:32:40 2012 by distributed lock pinger 'localhost:30000/domU-12-31-39-01-70-B4:30999:1355776300:1804289383', sleeping for 30000ms m30001| Mon Dec 17 15:32:41.908 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 915, clonedBytes: 984540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:42.936 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 1037, clonedBytes: 1115812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:43.264 [conn5] command admin.$cmd command: { _migrateClone: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:19 reslen:51 516ms m30000| Mon Dec 17 15:32:43.265 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Mon Dec 17 15:32:43.265 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 119.0328269731253 } -> { a: 135.4934894479811 } m30001| Mon Dec 17 15:32:43.964 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 1037, clonedBytes: 1115812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:44.205 [conn8] moveChunk setting version to: 12|0||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:32:44.206 [conn5] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:8 reslen:51 841ms m30000| Mon Dec 17 15:32:44.205 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:44.206 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 119.0328269731253 } -> { a: 135.4934894479811 } m30000| Mon Dec 17 15:32:44.206 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:44-20", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776364206), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6580, step4 of 5: 0, step5 of 5: 941 } } m30001| Mon Dec 17 15:32:44.208 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 1037, clonedBytes: 1115812, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Mon Dec 17 15:32:44.208 [conn8] moveChunk updating 
self version to: 12|1||50cf812d5ec0810ee359b569 through { a: 135.4934894479811 } -> { a: 152.16144034639 } for collection 'test.foo' m30001| Mon Dec 17 15:32:44.213 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:44-124", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776364213), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:44.213 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:44.213 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:44.213 [conn8] forking for cleanup of chunk data m30001| Mon Dec 17 15:32:44.213 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:44.213 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:44.213 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. m30001| Mon Dec 17 15:32:44.213 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:44-125", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776364213), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, step1 of 6: 5, step2 of 6: 1, step3 of 6: 2, step4 of 6: 7281, step5 of 6: 248, step6 of 6: 0 } } m30001| Mon Dec 17 15:32:44.213 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 119.0328269731253 }, max: { a: 135.4934894479811 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_119.0328269731253", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 8 locks(micros) W:25 r:3434 w:57 reslen:37 7539ms m30001| Mon Dec 17 15:32:44.215 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30999| Mon Dec 17 15:32:44.213 [Balancer] moveChunk result: { ok: 1.0 } m30999| Mon Dec 17 15:32:44.214 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 70 version: 12|1||50cf812d5ec0810ee359b569 based on: 11|3||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:44.215 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab59ba') }max: { _id: ObjectId('50cf812d256383d556ab5b88') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:44.234 [cleanupOldData-50cf814dc94e4981dc6c1b3c] moveChunk deleted 764 documents for test.foo from { a: 62.87552835419774 } -> { a: 75.93300496228039 } m30001| Mon Dec 17 15:32:44.234 [cleanupOldData-50cf8158c94e4981dc6c1b45] moveChunk starting delete for: test.foo from { a: 89.16067937389016 } -> { a: 104.1863551363349 } m30001| Mon Dec 17 15:32:44.234 [cleanupOldData-50cf816cc94e4981dc6c1b4b] (start) 
waiting to cleanup test.foo from { a: 119.0328269731253 } -> { a: 135.4934894479811 }, # cursors remaining: 0 m30001| Mon Dec 17 15:32:44.235 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:44-126", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776364235), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, step1 of 6: 0, note: "aborted" } } m30999| Mon Dec 17 15:32:44.235 [Balancer] moveChunk result: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } m30999| Mon Dec 17 15:32:44.235 [Balancer] balancer move failed: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { _id: ObjectId('50cf812d256383d556ab59ba') } max: { _id: ObjectId('50cf812d256383d556ab59ba') } m30999| Mon Dec 17 15:32:44.235 [Balancer] *** end of balancing round m30999| Mon Dec 17 15:32:44.236 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. 
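These migrations run with waitForDelete: false, so moveChunk returns once the commit lands and "forks for cleanup of chunk data"; the trailing cleanupOldData lines deleting the donor's documents happen asynchronously afterwards. For reference, a hand-issued equivalent of the balancer's command through mongos, assuming localhost:30999 (find can be any key inside the chunk's range):

    // Manually trigger the same migration the balancer just ran.
    var admin = new Mongo("localhost:30999").getDB("admin");
    printjson(admin.runCommand({
        moveChunk: "test.foo",
        find: { a: 120 },       // lands in [119.03..., 135.49...)
        to: "shard0000"
    }));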
m30001| Mon Dec 17 15:32:44.260 [cleanupOldData-50cf816cc94e4981dc6c1b4b] waiting to remove documents for test.foo from { a: 119.0328269731253 } -> { a: 135.4934894479811 } m30999| Mon Dec 17 15:32:45.237 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:32:45.237 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:32:45.237 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:32:45 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf816d5ec0810ee359b57e" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf81625ec0810ee359b57d" } } m30999| Mon Dec 17 15:32:45.238 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf816d5ec0810ee359b57e m30999| Mon Dec 17 15:32:45.238 [Balancer] *** start balancing round m30001| Mon Dec 17 15:32:46.606 [conn8] serverStatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after cursors: 0, after extra_info: 800, after globalLock: 800, after indexCounters: 800, after locks: 800, after network: 800, after opcounters: 800, after opcountersRepl: 800, after recordStats: 1140, at end: 1140 } m30001| Mon Dec 17 15:32:46.606 [conn8] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:41 reslen:2278 1366ms m30001| Mon Dec 17 15:32:46.610 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_135.4934894479811", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:46.611 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf816ec94e4981dc6c1b4c m30001| Mon Dec 17 15:32:46.611 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:46-127", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776366611), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:46.612 [conn8] moveChunk request accepted at version 12|1||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:46.608 [Balancer] shard0001 has more chunks me:36 best: shard0000:11 m30999| Mon Dec 17 15:32:46.608 [Balancer] collection : test.foo m30999| Mon Dec 17 15:32:46.608 [Balancer] donor : shard0001 chunks on 36 m30999| Mon Dec 17 15:32:46.608 [Balancer] receiver : shard0000 chunks on 11 m30999| Mon Dec 17 15:32:46.608 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:46.608 [Balancer] ns: test.foo going to move { _id: "test.foo-a_135.4934894479811", lastmod: Timestamp 12000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 
15:32:46.609 [Balancer] shard0001 has more chunks me:55 best: shard0000:10 m30999| Mon Dec 17 15:32:46.609 [Balancer] collection : test.mrShardedOut m30999| Mon Dec 17 15:32:46.609 [Balancer] donor : shard0001 chunks on 55 m30999| Mon Dec 17 15:32:46.609 [Balancer] receiver : shard0000 chunks on 10 m30999| Mon Dec 17 15:32:46.609 [Balancer] threshold : 2 m30999| Mon Dec 17 15:32:46.609 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:32:46.609 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 12|1||000000000000000000000000min: { a: 135.4934894479811 }max: { a: 152.16144034639 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:46.908 [conn8] moveChunk number of documents: 1038 m30001| Mon Dec 17 15:32:46.913 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:46.921 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:46.933 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 58, clonedBytes: 62408, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:46.978 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 211, clonedBytes: 227036, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:46.996 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 223, clonedBytes: 239948, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:47.033 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 223, clonedBytes: 239948, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:47.100 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 223, clonedBytes: 239948, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:47.232 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { 
a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 223, clonedBytes: 239948, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:47.493 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 223, clonedBytes: 239948, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:48.009 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 223, clonedBytes: 239948, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:48.249 [cleanupOldData-50cf8158c94e4981dc6c1b45] moveChunk deleted 882 documents for test.foo from { a: 89.16067937389016 } -> { a: 104.1863551363349 } m30001| Mon Dec 17 15:32:48.250 [cleanupOldData-50cf815bc94e4981dc6c1b47] moveChunk starting delete for: test.foo from { a: 104.1863551363349 } -> { a: 119.0328269731253 } m30001| Mon Dec 17 15:32:49.037 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 223, clonedBytes: 239948, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:50.065 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 223, clonedBytes: 239948, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:51.093 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 223, clonedBytes: 239948, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:52.121 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 223, clonedBytes: 239948, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:53.149 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 223, clonedBytes: 239948, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:54.177 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "clone", counts: { cloned: 1038, clonedBytes: 1116888, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:54.361 [conn5] command admin.$cmd command: { _migrateClone: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:25 reslen:51 596ms m30000| Mon Dec 17 15:32:54.362 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Mon Dec 17 15:32:54.362 [migrateThread] migrate commit succeeded flushing to secondaries for 
'test.foo' { a: 135.4934894479811 } -> { a: 152.16144034639 } m30001| Mon Dec 17 15:32:54.659 [cleanupOldData-50cf815bc94e4981dc6c1b47] moveChunk deleted 882 documents for test.foo from { a: 104.1863551363349 } -> { a: 119.0328269731253 } m30001| Mon Dec 17 15:32:54.659 [cleanupOldData-50cf816cc94e4981dc6c1b4b] moveChunk starting delete for: test.foo from { a: 119.0328269731253 } -> { a: 135.4934894479811 } m30001| Mon Dec 17 15:32:55.205 [conn8] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "steady", counts: { cloned: 1038, clonedBytes: 1116888, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:32:57.148 [conn5] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:12 reslen:51 2378ms m30001| Mon Dec 17 15:32:57.596 [conn5] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:11 reslen:51 434ms m30001| Mon Dec 17 15:32:57.598 [conn8] moveChunk setting version to: 13|0||50cf812d5ec0810ee359b569 m30000| Mon Dec 17 15:32:57.598 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:57.601 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:32:57.609 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { a: 135.4934894479811 } -> { a: 152.16144034639 } m30000| Mon Dec 17 15:32:57.609 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:57-21", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776377609), what: "moveChunk.to", ns: "test.foo", details: { min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7451, step4 of 5: 0, step5 of 5: 3247 } } m30001| Mon Dec 17 15:32:57.612 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, shardKeyPattern: { a: 1.0 }, state: "done", counts: { cloned: 1038, clonedBytes: 1116888, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Mon Dec 17 15:32:57.612 [conn8] moveChunk updating self version to: 13|1||50cf812d5ec0810ee359b569 through { a: 152.16144034639 } -> { a: 178.156032692641 } for collection 'test.foo' m30001| Mon Dec 17 15:32:57.613 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:57-128", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776377613), what: "moveChunk.commit", ns: "test.foo", details: { min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:32:57.613 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:57.613 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:57.613 [conn8] forking for cleanup of chunk data m30001| Mon Dec 17 15:32:57.613 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:32:57.613 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:32:57.614 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
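Every committed migration bumps the collection's major version — here 11|3||... -> 12|0 and then 13|0 — and the donor then takes major|1 on a chunk it still owns ("moveChunk updating self version"). The versions live in the lastmod Timestamp of each config.chunks document and can be read back directly, assuming the same mongos as above:

    // The chunk versions the "setting version to: 13|0||..." lines refer to.
    // lastmod is a Timestamp encoding major|minor.
    var conf = new Mongo("localhost:30999").getDB("config");
    conf.chunks.find({ ns: "test.foo" }).sort({ lastmod: -1 }).limit(3)
        .forEach(function (c) {
            print(c.shard + "  min: " + tojson(c.min) + "  lastmod: " + tojson(c.lastmod));
        });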
m30001| Mon Dec 17 15:32:57.614 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:57-129", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776377614), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 296, step4 of 6: 8296, step5 of 6: 2408, step6 of 6: 0 } } m30001| Mon Dec 17 15:32:57.614 [conn8] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 135.4934894479811 }, max: { a: 152.16144034639 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_135.4934894479811", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 8 locks(micros) W:37 r:3457 w:61 reslen:37 11003ms m30999| Mon Dec 17 15:32:57.614 [Balancer] moveChunk result: { ok: 1.0 } m30001| Mon Dec 17 15:32:57.614 [cleanupOldData-50cf8179c94e4981dc6c1b4d] (start) waiting to cleanup test.foo from { a: 135.4934894479811 } -> { a: 152.16144034639 }, # cursors remaining: 0 m30999| Mon Dec 17 15:32:57.615 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 71 version: 13|1||50cf812d5ec0810ee359b569 based on: 12|1||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:32:57.615 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab59ba') }max: { _id: ObjectId('50cf812d256383d556ab5b88') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:32:57.615 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:32:57.618 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:32:57-130", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776377618), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, step1 of 6: 0, note: "aborted" } } m30999| Mon Dec 17 15:32:57.618 [Balancer] moveChunk result: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } m30999| Mon Dec 17 15:32:57.618 [Balancer] balancer move failed: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 
} from: shard0001 to: shard0000 chunk: min: { _id: ObjectId('50cf812d256383d556ab59ba') } max: { _id: ObjectId('50cf812d256383d556ab59ba') } m30999| Mon Dec 17 15:32:57.618 [Balancer] *** end of balancing round m30999| Mon Dec 17 15:32:57.618 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. m30001| Mon Dec 17 15:32:57.653 [cleanupOldData-50cf8179c94e4981dc6c1b4d] waiting to remove documents for test.foo from { a: 135.4934894479811 } -> { a: 152.16144034639 } m30001| Mon Dec 17 15:32:57.897 [cleanupOldData-50cf816cc94e4981dc6c1b4b] moveChunk deleted 1037 documents for test.foo from { a: 119.0328269731253 } -> { a: 135.4934894479811 } m30001| Mon Dec 17 15:32:57.897 [cleanupOldData-50cf8179c94e4981dc6c1b4d] moveChunk starting delete for: test.foo from { a: 135.4934894479811 } -> { a: 152.16144034639 } m30999| Mon Dec 17 15:32:58.621 [Balancer] Refreshing MaxChunkSize: 1 m30999| Mon Dec 17 15:32:58.622 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : domU-12-31-39-01-70-B4:30999:1355776300:1804289383 ) m30999| Mon Dec 17 15:32:58.622 [Balancer] about to acquire distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383: m30999| { "state" : 1, m30999| "who" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:Balancer:846930886", m30999| "process" : "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", m30999| "when" : { "$date" : "Mon Dec 17 15:32:58 2012" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "50cf817a5ec0810ee359b57f" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "50cf816d5ec0810ee359b57e" } } m30999| Mon Dec 17 15:32:58.626 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' acquired, ts : 50cf817a5ec0810ee359b57f m30999| Mon Dec 17 15:32:58.626 [Balancer] *** start balancing round m30001| Mon Dec 17 15:33:00.565 [conn8] serverStatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after cursors: 0, after extra_info: 1620, after globalLock: 1620, after indexCounters: 1620, after locks: 1620, after network: 1620, after opcounters: 1620, after opcountersRepl: 1620, after recordStats: 1620, at end: 1620 } m30001| Mon Dec 17 15:33:00.566 [conn8] command admin.$cmd command: { serverStatus: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:29 reslen:2278 1938ms m30999| Mon Dec 17 15:33:00.569 [Balancer] shard0001 has more chunks me:35 best: shard0000:12 m30999| Mon Dec 17 15:33:00.569 [Balancer] collection : test.foo m30999| Mon Dec 17 15:33:00.569 [Balancer] donor : shard0001 chunks on 35 m30999| Mon Dec 17 15:33:00.569 [Balancer] receiver : shard0000 chunks on 12 m30999| Mon Dec 17 15:33:00.569 [Balancer] threshold : 2 m30999| Mon Dec 17 15:33:00.569 [Balancer] ns: test.foo going to move { _id: "test.foo-a_152.16144034639", lastmod: Timestamp 13000|1, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569'), ns: "test.foo", min: { a: 152.16144034639 }, max: { a: 178.156032692641 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:33:00.570 [Balancer] shard0001 has more chunks me:55 best: shard0000:10 m30999| Mon Dec 17 15:33:00.570 [Balancer] collection : test.mrShardedOut m30999| Mon Dec 17 15:33:00.570 [Balancer] donor : shard0001 chunks on 55 m30999| Mon Dec 17 15:33:00.570 [Balancer] receiver : shard0000 chunks on 10 m30999| Mon Dec 17 15:33:00.570 
[Balancer] threshold : 2 m30999| Mon Dec 17 15:33:00.570 [Balancer] ns: test.mrShardedOut going to move { _id: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b'), ns: "test.mrShardedOut", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Mon Dec 17 15:33:00.570 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 13|1||000000000000000000000000min: { a: 152.16144034639 }max: { a: 178.156032692641 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:33:00.573 [conn8] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { a: 152.16144034639 }, max: { a: 178.156032692641 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-a_152.16144034639", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:33:00.579 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf817cc94e4981dc6c1b4e m30001| Mon Dec 17 15:33:00.581 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:33:00-131", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776380581), what: "moveChunk.start", ns: "test.foo", details: { min: { a: 152.16144034639 }, max: { a: 178.156032692641 }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:33:00.582 [conn8] moveChunk request accepted at version 13|1||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:33:00.584 [conn8] can't move chunk of size (approximately) 1714224 because maximum size allowed to move is 1048576 ns: test.foo { a: 152.16144034639 } -> { a: 178.156032692641 } m30001| Mon Dec 17 15:33:00.584 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Mon Dec 17 15:33:00.584 [conn8] MigrateFromStatus::done Global lock acquired m30001| Mon Dec 17 15:33:00.585 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
m30001| Mon Dec 17 15:33:00.585 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:33:00-132", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776380585), what: "moveChunk.from", ns: "test.foo", details: { min: { a: 152.16144034639 }, max: { a: 178.156032692641 }, step1 of 6: 3, step2 of 6: 8, note: "aborted" } } m30999| Mon Dec 17 15:33:00.589 [Balancer] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 1714224, errmsg: "chunk too big to move", ok: 0.0 } m30999| Mon Dec 17 15:33:00.590 [Balancer] balancer move failed: { chunkTooBig: true, estimatedChunkSize: 1714224, errmsg: "chunk too big to move", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { a: 152.16144034639 } max: { a: 152.16144034639 } m30999| Mon Dec 17 15:33:00.590 [Balancer] forcing a split because migrate failed for size reasons m30001| Mon Dec 17 15:33:00.590 [conn8] request split points lookup for chunk test.foo { : 152.16144034639 } -->> { : 178.156032692641 } m30001| Mon Dec 17 15:33:00.591 [conn8] splitVector doing another cycle because of force, keyCount now: 754 m30001| Mon Dec 17 15:33:00.623 [conn8] received splitChunk request: { splitChunk: "test.foo", keyPattern: { a: 1.0 }, min: { a: 152.16144034639 }, max: { a: 178.156032692641 }, from: "shard0001", splitKeys: [ { a: 165.039369603619 } ], shardId: "test.foo-a_152.16144034639", configdb: "localhost:30000" } m30001| Mon Dec 17 15:33:00.624 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf817cc94e4981dc6c1b4f m30001| Mon Dec 17 15:33:00.625 [conn8] splitChunk accepted at version 13|1||50cf812d5ec0810ee359b569 m30001| Mon Dec 17 15:33:00.626 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:33:00-133", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776380626), what: "split", ns: "test.foo", details: { before: { min: { a: 152.16144034639 }, max: { a: 178.156032692641 }, lastmod: Timestamp 13000|1, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { a: 152.16144034639 }, max: { a: 165.039369603619 }, lastmod: Timestamp 13000|2, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') }, right: { min: { a: 165.039369603619 }, max: { a: 178.156032692641 }, lastmod: Timestamp 13000|3, lastmodEpoch: ObjectId('50cf812d5ec0810ee359b569') } } } m30001| Mon Dec 17 15:33:00.627 [conn8] distributed lock 'test.foo/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
m30999| Mon Dec 17 15:33:00.630 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 72 version: 13|3||50cf812d5ec0810ee359b569 based on: 13|1||50cf812d5ec0810ee359b569 m30999| Mon Dec 17 15:33:00.630 [Balancer] forced split results: { ok: 1.0 } m30999| Mon Dec 17 15:33:00.630 [Balancer] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab59ba') }max: { _id: ObjectId('50cf812d256383d556ab5b88') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30999| Mon Dec 17 15:33:00.631 [Balancer] moveChunk result: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } m30999| Mon Dec 17 15:33:00.632 [Balancer] balancer move failed: { who: { _id: "test.mrShardedOut", process: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383", state: 2, ts: ObjectId('50cf81505ec0810ee359b578'), when: new Date(1355776336965), who: "domU-12-31-39-01-70-B4:30999:1355776300:1804289383:conn1:1681692777", why: "mr-post-process" }, errmsg: "the collection metadata could not be locked with lock migrate-{ _id: ObjectId('50cf812d256383d556ab59ba') }", ok: 0.0 } from: shard0001 to: shard0000 chunk: min: { _id: ObjectId('50cf812d256383d556ab59ba') } max: { _id: ObjectId('50cf812d256383d556ab59ba') } m30999| Mon Dec 17 15:33:00.632 [Balancer] *** end of balancing round m30999| Mon Dec 17 15:33:00.637 [Balancer] distributed lock 'balancer/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. 
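Each "about to log metadata event" line above writes a document to config.changelog, so the whole interleaving of aborted moves, forced splits, and commits can be replayed afterwards from that collection, assuming the same config database as before:

    // Replay the metadata events for test.foo in order. "what" is
    // moveChunk.start/.commit/.from/.to or "split", as in the log lines.
    var conf = new Mongo("localhost:30999").getDB("config");
    conf.changelog.find({ ns: "test.foo" }).sort({ time: 1 })
        .forEach(function (e) { print(e.time + "  " + e.what); });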
m30001| Mon Dec 17 15:33:00.631 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf812d256383d556ab59ba')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:33:00.631 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:33:00-134", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776380631), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf812d256383d556ab59ba') }, max: { _id: ObjectId('50cf812d256383d556ab5b88') }, step1 of 6: 0, note: "aborted" } } m30001| Mon Dec 17 15:33:01.131 [cleanupOldData-50cf8179c94e4981dc6c1b4d] moveChunk deleted 1038 documents for test.foo from { a: 135.4934894479811 } -> { a: 152.16144034639 } m30001| Mon Dec 17 15:33:02.005 [conn3] CMD: drop test.mrShardedOut m30001| Mon Dec 17 15:33:02.007 [conn3] CMD: drop test.tmp.mr.foo_3 m30001| Mon Dec 17 15:33:02.007 [conn3] CMD: drop test.tmp.mr.foo_3 m30001| Mon Dec 17 15:33:02.007 [conn3] CMD: drop test.tmp.mr.foo_3 m30001| Mon Dec 17 15:33:02.007 [conn3] command test.$cmd command: { mapreduce.shardedfinish: { mapreduce: "foo", map: function map2() { emit(this._id, {count: 1, y: this.y}); }, reduce: function reduce2(key, values) { return values[0]; }, out: { replace: "mrShardedOut", sharded: true } }, inputDB: "test", shardedOutputCollection: "tmp.mrs.foo_1355776322_1", shards: { localhost:30000: { result: "tmp.mrs.foo_1355776322_1", splitKeys: {}, timeMillis: 44, counts: { input: 20, emit: 20, reduce: 0, output: 20 }, ok: 1.0 }, localhost:30001: { result: "tmp.mrs.foo_1355776322_1", splitKeys: [ { _id: ObjectId('50cf812d256383d556ab497c') }, { _id: ObjectId('50cf812d256383d556ab4b4a') }, { _id: ObjectId('50cf812d256383d556ab4d18') }, { _id: ObjectId('50cf812d256383d556ab4ee6') }, { _id: ObjectId('50cf812d256383d556ab50b4') }, { _id: ObjectId('50cf812d256383d556ab5282') }, { _id: ObjectId('50cf812d256383d556ab5450') }, { _id: ObjectId('50cf812d256383d556ab561e') }, { _id: ObjectId('50cf812d256383d556ab57ec') }, { _id: ObjectId('50cf812d256383d556ab59ba') }, { _id: ObjectId('50cf812d256383d556ab5b88') }, { _id: ObjectId('50cf812d256383d556ab5d56') }, { _id: ObjectId('50cf812d256383d556ab5f24') }, { _id: ObjectId('50cf812d256383d556ab60f3') }, { _id: ObjectId('50cf812d256383d556ab62c1') }, { _id: ObjectId('50cf812e256383d556ab6490') }, { _id: ObjectId('50cf812e256383d556ab665e') }, { _id: ObjectId('50cf812e256383d556ab682d') }, { _id: ObjectId('50cf812e256383d556ab69fb') }, { _id: ObjectId('50cf812e256383d556ab6bca') }, { _id: ObjectId('50cf812e256383d556ab6d98') }, { _id: ObjectId('50cf812e256383d556ab6f66') }, { _id: ObjectId('50cf812e256383d556ab7134') }, { _id: ObjectId('50cf812e256383d556ab7303') }, { _id: ObjectId('50cf812e256383d556ab74d1') }, { _id: ObjectId('50cf812e256383d556ab769f') }, { _id: ObjectId('50cf812e256383d556ab786d') }, { _id: ObjectId('50cf812e256383d556ab7a3b') }, { _id: ObjectId('50cf812e256383d556ab7c09') }, { _id: ObjectId('50cf812e256383d556ab7dd7') }, { _id: ObjectId('50cf812e256383d556ab7fa5') }, { _id: ObjectId('50cf812f256383d556ab8173') }, { _id: ObjectId('50cf812f256383d556ab8341') }, { _id: ObjectId('50cf812f256383d556ab850f') }, { _id: 
ObjectId('50cf812f256383d556ab86de') }, { _id: ObjectId('50cf812f256383d556ab88ac') }, { _id: ObjectId('50cf812f256383d556ab8a7a') }, { _id: ObjectId('50cf812f256383d556ab8c48') }, { _id: ObjectId('50cf812f256383d556ab8e16') }, { _id: ObjectId('50cf812f256383d556ab8fe4') }, { _id: ObjectId('50cf812f256383d556ab91b2') }, { _id: ObjectId('50cf812f256383d556ab9381') }, { _id: ObjectId('50cf812f256383d556ab954f') }, { _id: ObjectId('50cf812f256383d556ab971e') }, { _id: ObjectId('50cf812f256383d556ab98ec') }, { _id: ObjectId('50cf812f256383d556ab9abb') }, { _id: ObjectId('50cf812f256383d556ab9c89') }, { _id: ObjectId('50cf812f256383d556ab9e57') }, { _id: ObjectId('50cf812f256383d556aba026') }, { _id: ObjectId('50cf8130256383d556aba1f4') }, { _id: ObjectId('50cf8130256383d556aba3c2') }, { _id: ObjectId('50cf8130256383d556aba590') }, { _id: ObjectId('50cf8130256383d556aba75e') }, { _id: ObjectId('50cf8130256383d556aba92c') }, { _id: ObjectId('50cf8130256383d556abaafa') }, { _id: ObjectId('50cf8130256383d556abacc8') }, { _id: ObjectId('50cf8130256383d556abae96') }, { _id: ObjectId('50cf8130256383d556abb064') }, { _id: ObjectId('50cf8130256383d556abb233') }, { _id: ObjectId('50cf8130256383d556abb401') }, { _id: ObjectId('50cf8130256383d556abb5cf') }, { _id: ObjectId('50cf8130256383d556abb79d') }, { _id: ObjectId('50cf8130256383d556abb96b') }, { _id: ObjectId('50cf8130256383d556abbb39') }, { _id: ObjectId('50cf813e256383d556abbd09') }, { _id: ObjectId('50cf813e256383d556abbed7') }, { _id: ObjectId('50cf813e256383d556abc0a5') }, { _id: ObjectId('50cf813e256383d556abc273') }, { _id: ObjectId('50cf813e256383d556abc441') }, { _id: ObjectId('50cf813e256383d556abc60f') }, { _id: ObjectId('50cf813e256383d556abc7dd') }, { _id: ObjectId('50cf813e256383d556abc9ab') }, { _id: ObjectId('50cf813e256383d556abcb79') }, { _id: ObjectId('50cf813e256383d556abcd47') }, { _id: ObjectId('50cf813e256383d556abcf15') }, { _id: ObjectId('50cf813e256383d556abd0e3') }, { _id: ObjectId('50cf813e256383d556abd2b1') }, { _id: ObjectId('50cf813e256383d556abd47f') }, { _id: ObjectId('50cf813e256383d556abd64d') }, { _id: ObjectId('50cf813e256383d556abd81b') }, { _id: ObjectId('50cf813e256383d556abd9e9') }, { _id: ObjectId('50cf813e256383d556abdbb8') }, { _id: ObjectId('50cf813e256383d556abdd86') }, { _id: ObjectId('50cf813e256383d556abdf55') }, { _id: ObjectId('50cf813e256383d556abe124') }, { _id: ObjectId('50cf813e256383d556abe2f2') }, { _id: ObjectId('50cf813e256383d556abe4c1') }, { _id: ObjectId('50cf813e256383d556abe68f') }, { _id: ObjectId('50cf813e256383d556abe85d') }, { _id: ObjectId('50cf813e256383d556abea2b') }, { _id: ObjectId('50cf813f256383d556abebf9') }, { _id: ObjectId('50cf813f256383d556abedc7') }, { _id: ObjectId('50cf813f256383d556abef95') }, { _id: ObjectId('50cf813f256383d556abf164') }, { _id: ObjectId('50cf813f256383d556abf332') }, { _id: ObjectId('50cf813f256383d556abf501') }, { _id: ObjectId('50cf813f256383d556abf6cf') }, { _id: ObjectId('50cf813f256383d556abf89d') }, { _id: ObjectId('50cf813f256383d556abfa6b') }, { _id: ObjectId('50cf813f256383d556abfc39') }, { _id: ObjectId('50cf813f256383d556abfe07') }, { _id: ObjectId('50cf813f256383d556abffd5') }, { _id: ObjectId('50cf813f256383d556ac01a3') }, { _id: ObjectId('50cf813f256383d556ac0371') }, { _id: ObjectId('50cf813f256383d556ac053f') }, { _id: ObjectId('50cf813f256383d556ac070d') }, { _id: ObjectId('50cf813f256383d556ac08db') }, { _id: ObjectId('50cf813f256383d556ac0aa9') }, { _id: ObjectId('50cf813f256383d556ac0c77') }, { _id: 
ObjectId('50cf813f256383d556ac0e45') }, { _id: ObjectId('50cf813f256383d556ac1013') }, { _id: ObjectId('50cf813f256383d556ac11e1') }, { _id: ObjectId('50cf813f256383d556ac13af') }, { _id: ObjectId('50cf813f256383d556ac157d') }, { _id: ObjectId('50cf8140256383d556ac174b') }, { _id: ObjectId('50cf8140256383d556ac1919') }, { _id: ObjectId('50cf8140256383d556ac1ae8') }, { _id: ObjectId('50cf8140256383d556ac1cb6') }, { _id: ObjectId('50cf8140256383d556ac1e84') }, { _id: ObjectId('50cf8140256383d556ac2052') }, { _id: ObjectId('50cf8140256383d556ac2220') }, { _id: ObjectId('50cf8140256383d556ac23ee') }, { _id: ObjectId('50cf8140256383d556ac25bc') }, { _id: ObjectId('50cf8140256383d556ac278a') }, { _id: ObjectId('50cf8140256383d556ac2958') }, { _id: ObjectId('50cf8140256383d556ac2b26') }, { _id: ObjectId('50cf8141256383d556ac2cf4') }, { _id: ObjectId('50cf8141256383d556ac2ec2') }, { _id: ObjectId('50cf8141256383d556ac3090') } ], timeMillis: 13892, counts: { input: 59980, emit: 59980, reduce: 0, output: 59980 }, ok: 1.0 } }, shardCounts: { localhost:30000: { input: 20, emit: 20, reduce: 0, output: 20 }, localhost:30001: { input: 59980, emit: 59980, reduce: 0, output: 59980 } }, counts: { emit: 60000, input: 60000, output: 60000, reduce: 0 } } ntoreturn:1 keyUpdates:0 locks(micros) W:1634 w:9895087 reslen:2382 45036ms m30999| Mon Dec 17 15:33:02.008 [conn1] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30999:1355776300:1804289383' unlocked. m30999| Mon Dec 17 15:33:02.029 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0000:localhost:30000lastmod: 2|0||000000000000000000000000min: { _id: MinKey }max: { _id: ObjectId('50cf812d256383d556ab497c') } dataWritten: 554250 splitThreshold: 943718 m30000| Mon Dec 17 15:33:02.029 [conn9] request split points lookup for chunk test.mrShardedOut { : MinKey } -->> { : ObjectId('50cf812d256383d556ab497c') } m30999| Mon Dec 17 15:33:02.030 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Mon Dec 17 15:33:02.030 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0000:localhost:30000lastmod: 3|0||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab497c') }max: { _id: ObjectId('50cf812d256383d556ab4b4a') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.030 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab4b49') } m30000| Mon Dec 17 15:33:02.030 [conn9] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab497c') } -->> { : ObjectId('50cf812d256383d556ab4b4a') } m30999| Mon Dec 17 15:33:02.034 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0000:localhost:30000lastmod: 4|0||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab4b4a') }max: { _id: ObjectId('50cf812d256383d556ab4d18') } dataWritten: 555331 splitThreshold: 1048576 m30000| Mon Dec 17 15:33:02.034 [conn9] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab4b4a') } -->> { : ObjectId('50cf812d256383d556ab4d18') } m30999| Mon Dec 17 15:33:02.034 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab4d17') } m30999| Mon Dec 17 15:33:02.038 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0000:localhost:30000lastmod: 5|0||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab4d18') }max: { _id: ObjectId('50cf812d256383d556ab4ee6') } dataWritten: 555331 splitThreshold: 1048576 m30000| Mon 
Dec 17 15:33:02.038 [conn9] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab4d18') } -->> { : ObjectId('50cf812d256383d556ab4ee6') } m30999| Mon Dec 17 15:33:02.038 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab4ee5') } m30999| Mon Dec 17 15:33:02.038 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0000:localhost:30000lastmod: 6|0||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab4ee6') }max: { _id: ObjectId('50cf812d256383d556ab50b4') } dataWritten: 555331 splitThreshold: 1048576 m30000| Mon Dec 17 15:33:02.038 [conn9] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab4ee6') } -->> { : ObjectId('50cf812d256383d556ab50b4') } m30000| Mon Dec 17 15:33:02.040 [conn9] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab50b4') } -->> { : ObjectId('50cf812d256383d556ab5282') } m30000| Mon Dec 17 15:33:02.040 [conn9] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab5282') } -->> { : ObjectId('50cf812d256383d556ab5450') } m30000| Mon Dec 17 15:33:02.041 [conn9] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab5450') } -->> { : ObjectId('50cf812d256383d556ab561e') } m30000| Mon Dec 17 15:33:02.042 [conn9] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab561e') } -->> { : ObjectId('50cf812d256383d556ab57ec') } m30000| Mon Dec 17 15:33:02.042 [conn9] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab57ec') } -->> { : ObjectId('50cf812d256383d556ab59ba') } m30999| Mon Dec 17 15:33:02.039 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab50b3') } m30001| Mon Dec 17 15:33:02.043 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab59ba') } -->> { : ObjectId('50cf812d256383d556ab5b88') } m30001| Mon Dec 17 15:33:02.044 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab5b88') } -->> { : ObjectId('50cf812d256383d556ab5d56') } m30001| Mon Dec 17 15:33:02.045 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab5d56') } -->> { : ObjectId('50cf812d256383d556ab5f24') } m30001| Mon Dec 17 15:33:02.045 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab5f24') } -->> { : ObjectId('50cf812d256383d556ab60f2') } m30001| Mon Dec 17 15:33:02.046 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab60f2') } -->> { : ObjectId('50cf812d256383d556ab62c0') } m30001| Mon Dec 17 15:33:02.047 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812d256383d556ab62c0') } -->> { : ObjectId('50cf812e256383d556ab648e') } m30001| Mon Dec 17 15:33:02.047 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab648e') } -->> { : ObjectId('50cf812e256383d556ab665c') } m30001| Mon Dec 17 15:33:02.048 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab665c') } -->> { : ObjectId('50cf812e256383d556ab682a') } m30001| Mon Dec 17 15:33:02.049 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab682a') } -->> { : ObjectId('50cf812e256383d556ab69f8') } m30001| 
Mon Dec 17 15:33:02.049 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab69f8') } -->> { : ObjectId('50cf812e256383d556ab6bc6') } m30001| Mon Dec 17 15:33:02.050 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab6bc6') } -->> { : ObjectId('50cf812e256383d556ab6d94') } m30001| Mon Dec 17 15:33:02.051 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab6d94') } -->> { : ObjectId('50cf812e256383d556ab6f62') } m30001| Mon Dec 17 15:33:02.051 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab6f62') } -->> { : ObjectId('50cf812e256383d556ab7130') } m30001| Mon Dec 17 15:33:02.052 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab7130') } -->> { : ObjectId('50cf812e256383d556ab72fe') } m30001| Mon Dec 17 15:33:02.053 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab72fe') } -->> { : ObjectId('50cf812e256383d556ab74cc') } m30001| Mon Dec 17 15:33:02.053 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab74cc') } -->> { : ObjectId('50cf812e256383d556ab769a') } m30001| Mon Dec 17 15:33:02.054 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab769a') } -->> { : ObjectId('50cf812e256383d556ab7868') } m30001| Mon Dec 17 15:33:02.055 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab7868') } -->> { : ObjectId('50cf812e256383d556ab7a36') } m30001| Mon Dec 17 15:33:02.055 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab7a36') } -->> { : ObjectId('50cf812e256383d556ab7c04') } m30001| Mon Dec 17 15:33:02.056 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab7c04') } -->> { : ObjectId('50cf812e256383d556ab7dd2') } m30001| Mon Dec 17 15:33:02.056 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab7dd2') } -->> { : ObjectId('50cf812e256383d556ab7fa0') } m30001| Mon Dec 17 15:33:02.057 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812e256383d556ab7fa0') } -->> { : ObjectId('50cf812f256383d556ab816e') } m30001| Mon Dec 17 15:33:02.058 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab816e') } -->> { : ObjectId('50cf812f256383d556ab833c') } m30001| Mon Dec 17 15:33:02.058 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab833c') } -->> { : ObjectId('50cf812f256383d556ab850a') } m30001| Mon Dec 17 15:33:02.059 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab850a') } -->> { : ObjectId('50cf812f256383d556ab86d8') } m30001| Mon Dec 17 15:33:02.060 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab86d8') } -->> { : ObjectId('50cf812f256383d556ab88a6') } m30001| Mon Dec 17 15:33:02.060 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab88a6') } -->> { : ObjectId('50cf812f256383d556ab8a74') } m30001| Mon Dec 17 15:33:02.061 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab8a74') } -->> { : ObjectId('50cf812f256383d556ab8c42') } m30999| 
Mon Dec 17 15:33:02.040 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0000:localhost:30000lastmod: 7|0||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab50b4') }max: { _id: ObjectId('50cf812d256383d556ab5282') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.040 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab5281') } m30999| Mon Dec 17 15:33:02.040 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0000:localhost:30000lastmod: 8|0||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5282') }max: { _id: ObjectId('50cf812d256383d556ab5450') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.041 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab544f') } m30999| Mon Dec 17 15:33:02.041 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0000:localhost:30000lastmod: 9|0||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5450') }max: { _id: ObjectId('50cf812d256383d556ab561e') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.042 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab561d') } m30999| Mon Dec 17 15:33:02.042 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0000:localhost:30000lastmod: 10|0||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab561e') }max: { _id: ObjectId('50cf812d256383d556ab57ec') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.042 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab57eb') } m30999| Mon Dec 17 15:33:02.042 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0000:localhost:30000lastmod: 11|0||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab57ec') }max: { _id: ObjectId('50cf812d256383d556ab59ba') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.043 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab59b9') } m30999| Mon Dec 17 15:33:02.043 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab59ba') }max: { _id: ObjectId('50cf812d256383d556ab5b88') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.044 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab5b87') } m30999| Mon Dec 17 15:33:02.044 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|11||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5b88') }max: { _id: ObjectId('50cf812d256383d556ab5d56') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.044 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab5d55') } m30999| Mon Dec 17 15:33:02.045 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|12||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5d56') }max: { _id: ObjectId('50cf812d256383d556ab5f24') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.045 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab5f23') } m30999| Mon Dec 17 15:33:02.045 [conn1] about to initiate autosplit: 
ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|13||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab5f24') }max: { _id: ObjectId('50cf812d256383d556ab60f2') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.046 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab60f1') } m30999| Mon Dec 17 15:33:02.046 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|14||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab60f2') }max: { _id: ObjectId('50cf812d256383d556ab62c0') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.046 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812d256383d556ab62bf') } m30999| Mon Dec 17 15:33:02.047 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|15||000000000000000000000000min: { _id: ObjectId('50cf812d256383d556ab62c0') }max: { _id: ObjectId('50cf812e256383d556ab648e') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.047 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab648d') } m30999| Mon Dec 17 15:33:02.047 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|16||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab648e') }max: { _id: ObjectId('50cf812e256383d556ab665c') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.048 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab665b') } m30999| Mon Dec 17 15:33:02.048 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|17||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab665c') }max: { _id: ObjectId('50cf812e256383d556ab682a') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.048 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab6829') } m30999| Mon Dec 17 15:33:02.049 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab682a') }max: { _id: ObjectId('50cf812e256383d556ab69f8') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.049 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab69f7') } m30999| Mon Dec 17 15:33:02.049 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|19||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab69f8') }max: { _id: ObjectId('50cf812e256383d556ab6bc6') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.050 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab6bc5') } m30999| Mon Dec 17 15:33:02.050 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|20||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab6bc6') }max: { _id: ObjectId('50cf812e256383d556ab6d94') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.050 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab6d93') } m30999| Mon Dec 17 15:33:02.051 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 
1|21||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab6d94') }max: { _id: ObjectId('50cf812e256383d556ab6f62') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.051 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab6f61') } m30999| Mon Dec 17 15:33:02.051 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab6f62') }max: { _id: ObjectId('50cf812e256383d556ab7130') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.052 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab712f') } m30999| Mon Dec 17 15:33:02.052 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|23||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab7130') }max: { _id: ObjectId('50cf812e256383d556ab72fe') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.052 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab72fd') } m30999| Mon Dec 17 15:33:02.052 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab72fe') }max: { _id: ObjectId('50cf812e256383d556ab74cc') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.053 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab74cb') } m30999| Mon Dec 17 15:33:02.053 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|25||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab74cc') }max: { _id: ObjectId('50cf812e256383d556ab769a') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.054 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7699') } m30999| Mon Dec 17 15:33:02.054 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab769a') }max: { _id: ObjectId('50cf812e256383d556ab7868') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.054 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7867') } m30999| Mon Dec 17 15:33:02.054 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|27||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab7868') }max: { _id: ObjectId('50cf812e256383d556ab7a36') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.055 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7a35') } m30999| Mon Dec 17 15:33:02.055 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab7a36') }max: { _id: ObjectId('50cf812e256383d556ab7c04') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.056 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7c03') } m30999| Mon Dec 17 15:33:02.056 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|29||000000000000000000000000min: { _id: 
ObjectId('50cf812e256383d556ab7c04') }max: { _id: ObjectId('50cf812e256383d556ab7dd2') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.056 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7dd1') } m30999| Mon Dec 17 15:33:02.056 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab7dd2') }max: { _id: ObjectId('50cf812e256383d556ab7fa0') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.057 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812e256383d556ab7f9f') } m30999| Mon Dec 17 15:33:02.057 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { _id: ObjectId('50cf812e256383d556ab7fa0') }max: { _id: ObjectId('50cf812f256383d556ab816e') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.058 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab816d') } m30999| Mon Dec 17 15:33:02.058 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab816e') }max: { _id: ObjectId('50cf812f256383d556ab833c') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.058 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab833b') } m30999| Mon Dec 17 15:33:02.058 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|33||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab833c') }max: { _id: ObjectId('50cf812f256383d556ab850a') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.059 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab8509') } m30999| Mon Dec 17 15:33:02.059 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|34||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab850a') }max: { _id: ObjectId('50cf812f256383d556ab86d8') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.060 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab86d7') } m30999| Mon Dec 17 15:33:02.060 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|35||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab86d8') }max: { _id: ObjectId('50cf812f256383d556ab88a6') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.060 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab88a5') } m30999| Mon Dec 17 15:33:02.060 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|36||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab88a6') }max: { _id: ObjectId('50cf812f256383d556ab8a74') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.061 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab8a73') } m30999| Mon Dec 17 15:33:02.061 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|37||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab8a74') }max: { _id: 
ObjectId('50cf812f256383d556ab8c42') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.061 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab8c41') } m30999| Mon Dec 17 15:33:02.062 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab8c42') }max: { _id: ObjectId('50cf812f256383d556ab8e10') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.062 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab8e0f') } m30999| Mon Dec 17 15:33:02.062 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|39||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab8e10') }max: { _id: ObjectId('50cf812f256383d556ab8fde') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.063 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab8fdd') } m30999| Mon Dec 17 15:33:02.063 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab8fde') }max: { _id: ObjectId('50cf812f256383d556ab91ac') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.063 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab91ab') } m30999| Mon Dec 17 15:33:02.064 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|41||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab91ac') }max: { _id: ObjectId('50cf812f256383d556ab937a') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.064 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9379') } m30999| Mon Dec 17 15:33:02.064 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|42||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab937a') }max: { _id: ObjectId('50cf812f256383d556ab9548') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.065 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9547') } m30999| Mon Dec 17 15:33:02.065 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|43||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab9548') }max: { _id: ObjectId('50cf812f256383d556ab9716') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.065 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9715') } m30999| Mon Dec 17 15:33:02.066 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|44||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab9716') }max: { _id: ObjectId('50cf812f256383d556ab98e4') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.066 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab98e3') } m30999| Mon Dec 17 15:33:02.066 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|45||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab98e4') }max: { _id: ObjectId('50cf812f256383d556ab9ab2') } dataWritten: 555331 splitThreshold: 
1048576 m30999| Mon Dec 17 15:33:02.067 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9ab1') } m30999| Mon Dec 17 15:33:02.067 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|46||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab9ab2') }max: { _id: ObjectId('50cf812f256383d556ab9c80') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.067 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9c7f') } m30999| Mon Dec 17 15:33:02.068 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|47||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab9c80') }max: { _id: ObjectId('50cf812f256383d556ab9e4e') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.068 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556ab9e4d') } m30999| Mon Dec 17 15:33:02.068 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|48||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556ab9e4e') }max: { _id: ObjectId('50cf812f256383d556aba01c') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.069 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf812f256383d556aba01b') } m30999| Mon Dec 17 15:33:02.069 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|49||000000000000000000000000min: { _id: ObjectId('50cf812f256383d556aba01c') }max: { _id: ObjectId('50cf8130256383d556aba1ea') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.069 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556aba1e9') } m30999| Mon Dec 17 15:33:02.070 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|50||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556aba1ea') }max: { _id: ObjectId('50cf8130256383d556aba3b8') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.070 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556aba3b7') } m30999| Mon Dec 17 15:33:02.070 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|51||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556aba3b8') }max: { _id: ObjectId('50cf8130256383d556aba586') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.071 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556aba585') } m30999| Mon Dec 17 15:33:02.071 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|52||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556aba586') }max: { _id: ObjectId('50cf8130256383d556aba754') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.071 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556aba753') } m30999| Mon Dec 17 15:33:02.072 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|53||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556aba754') }max: { _id: ObjectId('50cf8130256383d556aba922') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.072 [conn1] chunk not full enough to 
trigger auto-split { _id: ObjectId('50cf8130256383d556aba921') } m30999| Mon Dec 17 15:33:02.072 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|54||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556aba922') }max: { _id: ObjectId('50cf8130256383d556abaaf0') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.073 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abaaef') } m30999| Mon Dec 17 15:33:02.073 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|55||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abaaf0') }max: { _id: ObjectId('50cf8130256383d556abacbe') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.073 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abacbd') } m30999| Mon Dec 17 15:33:02.074 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|56||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abacbe') }max: { _id: ObjectId('50cf8130256383d556abae8c') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.074 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abae8b') } m30999| Mon Dec 17 15:33:02.074 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|57||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abae8c') }max: { _id: ObjectId('50cf8130256383d556abb05a') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.075 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb059') } m30999| Mon Dec 17 15:33:02.075 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|58||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb05a') }max: { _id: ObjectId('50cf8130256383d556abb228') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.075 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb227') } m30999| Mon Dec 17 15:33:02.076 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|59||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb228') }max: { _id: ObjectId('50cf8130256383d556abb3f6') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.076 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb3f5') } m30999| Mon Dec 17 15:33:02.076 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|60||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb3f6') }max: { _id: ObjectId('50cf8130256383d556abb5c4') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.077 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb5c3') } m30999| Mon Dec 17 15:33:02.077 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|61||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb5c4') }max: { _id: ObjectId('50cf8130256383d556abb792') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.077 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb791') } m30999| 
Mon Dec 17 15:33:02.078 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|62||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb792') }max: { _id: ObjectId('50cf8130256383d556abb960') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.078 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abb95f') } m30999| Mon Dec 17 15:33:02.078 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|63||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abb960') }max: { _id: ObjectId('50cf8130256383d556abbb2e') } dataWritten: 555331 splitThreshold: 1048576 m30999| Mon Dec 17 15:33:02.079 [conn1] chunk not full enough to trigger auto-split { _id: ObjectId('50cf8130256383d556abbb2d') } m30999| Mon Dec 17 15:33:02.079 [conn1] about to initiate autosplit: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|64||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abbb2e') }max: { _id: MaxKey } dataWritten: 32953982 splitThreshold: 943718 m30001| Mon Dec 17 15:33:02.062 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab8c42') } -->> { : ObjectId('50cf812f256383d556ab8e10') } m30001| Mon Dec 17 15:33:02.062 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab8e10') } -->> { : ObjectId('50cf812f256383d556ab8fde') } m30001| Mon Dec 17 15:33:02.063 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab8fde') } -->> { : ObjectId('50cf812f256383d556ab91ac') } m30001| Mon Dec 17 15:33:02.064 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab91ac') } -->> { : ObjectId('50cf812f256383d556ab937a') } m30001| Mon Dec 17 15:33:02.064 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab937a') } -->> { : ObjectId('50cf812f256383d556ab9548') } m30001| Mon Dec 17 15:33:02.065 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab9548') } -->> { : ObjectId('50cf812f256383d556ab9716') } m30001| Mon Dec 17 15:33:02.066 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab9716') } -->> { : ObjectId('50cf812f256383d556ab98e4') } m30001| Mon Dec 17 15:33:02.066 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab98e4') } -->> { : ObjectId('50cf812f256383d556ab9ab2') } m30001| Mon Dec 17 15:33:02.067 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab9ab2') } -->> { : ObjectId('50cf812f256383d556ab9c80') } m30001| Mon Dec 17 15:33:02.068 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab9c80') } -->> { : ObjectId('50cf812f256383d556ab9e4e') } m30001| Mon Dec 17 15:33:02.068 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556ab9e4e') } -->> { : ObjectId('50cf812f256383d556aba01c') } m30001| Mon Dec 17 15:33:02.069 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf812f256383d556aba01c') } -->> { : ObjectId('50cf8130256383d556aba1ea') } m30001| Mon Dec 17 15:33:02.070 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556aba1ea') } -->> { : 
ObjectId('50cf8130256383d556aba3b8') } m30001| Mon Dec 17 15:33:02.070 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556aba3b8') } -->> { : ObjectId('50cf8130256383d556aba586') } m30001| Mon Dec 17 15:33:02.071 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556aba586') } -->> { : ObjectId('50cf8130256383d556aba754') } m30001| Mon Dec 17 15:33:02.072 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556aba754') } -->> { : ObjectId('50cf8130256383d556aba922') } m30001| Mon Dec 17 15:33:02.072 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556aba922') } -->> { : ObjectId('50cf8130256383d556abaaf0') } m30001| Mon Dec 17 15:33:02.073 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abaaf0') } -->> { : ObjectId('50cf8130256383d556abacbe') } m30001| Mon Dec 17 15:33:02.074 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abacbe') } -->> { : ObjectId('50cf8130256383d556abae8c') } m30001| Mon Dec 17 15:33:02.074 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abae8c') } -->> { : ObjectId('50cf8130256383d556abb05a') } m30001| Mon Dec 17 15:33:02.075 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb05a') } -->> { : ObjectId('50cf8130256383d556abb228') } m30001| Mon Dec 17 15:33:02.076 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb228') } -->> { : ObjectId('50cf8130256383d556abb3f6') } m30001| Mon Dec 17 15:33:02.076 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb3f6') } -->> { : ObjectId('50cf8130256383d556abb5c4') } m30001| Mon Dec 17 15:33:02.077 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb5c4') } -->> { : ObjectId('50cf8130256383d556abb792') } m30001| Mon Dec 17 15:33:02.078 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb792') } -->> { : ObjectId('50cf8130256383d556abb960') } m30001| Mon Dec 17 15:33:02.078 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abb960') } -->> { : ObjectId('50cf8130256383d556abbb2e') } m30001| Mon Dec 17 15:33:02.079 [conn8] request split points lookup for chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abbb2e') } -->> { : MaxKey } m30001| Mon Dec 17 15:33:02.080 [conn8] max number of requested split points reached (2) before the end of chunk test.mrShardedOut { : ObjectId('50cf8130256383d556abbb2e') } -->> { : MaxKey } m30000| Mon Dec 17 15:33:02.088 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Mon Dec 17 15:33:02.088 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf8141256383d556ac320e') } -> { _id: MaxKey } m30001| Mon Dec 17 15:33:02.080 [conn8] received splitChunk request: { splitChunk: "test.mrShardedOut", keyPattern: { _id: 1 }, min: { _id: ObjectId('50cf8130256383d556abbb2e') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('50cf8141256383d556ac320e') } ], shardId: "test.mrShardedOut-_id_ObjectId('50cf8130256383d556abbb2e')", configdb: "localhost:30000" } m30001| Mon Dec 17 15:33:02.081 [conn8] 
distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf817ec94e4981dc6c1b50 m30001| Mon Dec 17 15:33:02.082 [conn8] splitChunk accepted at version 11|1||50cf81365ec0810ee359b56b m30001| Mon Dec 17 15:33:02.082 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:33:02-135", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776382082), what: "split", ns: "test.mrShardedOut", details: { before: { min: { _id: ObjectId('50cf8130256383d556abbb2e') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('50cf8130256383d556abbb2e') }, max: { _id: ObjectId('50cf8141256383d556ac320e') }, lastmod: Timestamp 11000|2, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b') }, right: { min: { _id: ObjectId('50cf8141256383d556ac320e') }, max: { _id: MaxKey }, lastmod: Timestamp 11000|3, lastmodEpoch: ObjectId('50cf81365ec0810ee359b56b') } } } m30999| Mon Dec 17 15:33:02.084 [conn1] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 73 version: 11|3||50cf81365ec0810ee359b56b based on: 11|1||50cf81365ec0810ee359b56b m30999| Mon Dec 17 15:33:02.084 [conn1] autosplitted test.mrShardedOut shard: ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 1|64||000000000000000000000000min: { _id: ObjectId('50cf8130256383d556abbb2e') }max: { _id: MaxKey } on: { _id: ObjectId('50cf8141256383d556ac320e') } (splitThreshold 943718) (migrate suggested) m30999| Mon Dec 17 15:33:02.086 [conn1] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 128 writeLock: 0 m30999| Mon Dec 17 15:33:02.086 [conn1] moving chunk (auto): ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 11|3||000000000000000000000000min: { _id: ObjectId('50cf8141256383d556ac320e') }max: { _id: MaxKey } to: shard0000:localhost:30000 m30999| Mon Dec 17 15:33:02.086 [conn1] moving chunk ns: test.mrShardedOut moving ( ns:test.mrShardedOutshard: shard0001:localhost:30001lastmod: 11|3||000000000000000000000000min: { _id: ObjectId('50cf8141256383d556ac320e') }max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Mon Dec 17 15:33:02.083 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked. 
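For reference, the split just logged and the migration that follows it can be driven by hand from a mongos shell. A minimal sketch, assuming a shell connected to the mongos on port 30999 and reusing the split key reported above; split and moveChunk are the admin commands that mongos itself issues here:

// Manual equivalent of the autosplit and the suggested migration (sketch).
db.adminCommand({
    split: "test.mrShardedOut",
    middle: { _id: ObjectId('50cf8141256383d556ac320e') }
});
// Move the new top chunk from shard0001 to shard0000, as mongos decides to
// do below ("best shard for new allocation" / "migrate suggested").
db.adminCommand({
    moveChunk: "test.mrShardedOut",
    find: { _id: ObjectId('50cf8141256383d556ac320e') },
    to: "shard0000"
});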
m30001| Mon Dec 17 15:33:02.086 [conn8] received moveChunk request: { moveChunk: "test.mrShardedOut", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('50cf8141256383d556ac320e') }, max: { _id: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.mrShardedOut-_id_ObjectId('50cf8141256383d556ac320e')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Mon Dec 17 15:33:02.087 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' acquired, ts : 50cf817ec94e4981dc6c1b51 m30001| Mon Dec 17 15:33:02.087 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:33:02-136", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776382087), what: "moveChunk.start", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf8141256383d556ac320e') }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } } m30001| Mon Dec 17 15:33:02.087 [conn8] moveChunk request accepted at version 11|3||50cf81365ec0810ee359b56b m30001| Mon Dec 17 15:33:02.087 [conn8] moveChunk number of documents: 1 m30001| Mon Dec 17 15:33:02.093 [conn8] moveChunk data transfer progress: { active: true, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf8141256383d556ac320e') }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "steady", counts: { cloned: 1, clonedBytes: 1081, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Mon Dec 17 15:33:02.093 [conn8] moveChunk setting version to: 12|0||50cf81365ec0810ee359b56b m30000| Mon Dec 17 15:33:02.093 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:33:02.097 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:33:02.101 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:33:02.105 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:33:02.109 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:33:02.113 [conn11] Waiting for commit to finish m30000| Mon Dec 17 15:33:02.113 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.mrShardedOut' { _id: ObjectId('50cf8141256383d556ac320e') } -> { _id: MaxKey } m30000| Mon Dec 17 15:33:02.114 [migrateThread] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:33:02-22", server: "domU-12-31-39-01-70-B4", clientAddr: ":27017", time: new Date(1355776382114), what: "moveChunk.to", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf8141256383d556ac320e') }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 25 } } m30001| Mon Dec 17 15:33:02.117 [conn8] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.mrShardedOut", from: "localhost:30001", min: { _id: ObjectId('50cf8141256383d556ac320e') }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1 }, state: "done", counts: { cloned: 1, clonedBytes: 1081, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Mon Dec 17 15:33:02.117 [conn8] moveChunk updating self version to: 12|1||50cf81365ec0810ee359b56b through { _id: ObjectId('50cf812d256383d556ab59ba') } -> { _id: ObjectId('50cf812d256383d556ab5b88') } for collection 'test.mrShardedOut' m30001| Mon Dec 17 15:33:02.118 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:33:02-137", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776382118), what: "moveChunk.commit", ns: "test.mrShardedOut", details: 
{ min: { _id: ObjectId('50cf8141256383d556ac320e') }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Mon Dec 17 15:33:02.118 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:33:02.118 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:33:02.118 [conn8] forking for cleanup of chunk data
m30001| Mon Dec 17 15:33:02.118 [conn8] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Mon Dec 17 15:33:02.118 [conn8] MigrateFromStatus::done Global lock acquired
m30001| Mon Dec 17 15:33:02.118 [conn8] distributed lock 'test.mrShardedOut/domU-12-31-39-01-70-B4:30001:1355776301:242898411' unlocked.
m30001| Mon Dec 17 15:33:02.118 [conn8] about to log metadata event: { _id: "domU-12-31-39-01-70-B4-2012-12-17T20:33:02-138", server: "domU-12-31-39-01-70-B4", clientAddr: "127.0.0.1:42550", time: new Date(1355776382118), what: "moveChunk.from", ns: "test.mrShardedOut", details: { min: { _id: ObjectId('50cf8141256383d556ac320e') }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 5, step5 of 6: 25, step6 of 6: 0 } }
m30999| Mon Dec 17 15:33:02.118 [conn1] moveChunk result: { ok: 1.0 }
m30999| Mon Dec 17 15:33:02.119 [conn1] ChunkManager: time to load chunks for test.mrShardedOut: 0ms sequenceNumber: 74 version: 12|1||50cf81365ec0810ee359b56b based on: 11|3||50cf81365ec0810ee359b56b
m30000| Mon Dec 17 15:33:02.120 [conn9] CMD: drop test.tmp.mrs.foo_1355776322_1
m30001| Mon Dec 17 15:33:02.144 [cleanupOldData-50cf817ec94e4981dc6c1b52] (start) waiting to cleanup test.mrShardedOut from { _id: ObjectId('50cf8141256383d556ac320e') } -> { _id: MaxKey }, # cursors remaining: 0
m30001| Mon Dec 17 15:33:02.165 [cleanupOldData-50cf817ec94e4981dc6c1b52] waiting to remove documents for test.mrShardedOut from { _id: ObjectId('50cf8141256383d556ac320e') } -> { _id: MaxKey }
m30001| Mon Dec 17 15:33:02.165 [cleanupOldData-50cf817ec94e4981dc6c1b52] moveChunk starting delete for: test.mrShardedOut from { _id: ObjectId('50cf8141256383d556ac320e') } -> { _id: MaxKey }
m30001| Mon Dec 17 15:33:02.165 [cleanupOldData-50cf817ec94e4981dc6c1b52] moveChunk deleted 1 documents for test.mrShardedOut from { _id: ObjectId('50cf8141256383d556ac320e') } -> { _id: MaxKey }
m30001| Mon Dec 17 15:33:02.919 [FileAllocator] done allocating datafile /data/db/mrShardedOutput1/test.5, size: 511MB, took 53.884 secs
m30000| Mon Dec 17 15:33:02.939 [conn9] command test.$cmd command: { drop: "tmp.mrs.foo_1355776322_1" } ntoreturn:1 keyUpdates:0 locks(micros) w:819319 reslen:136 819ms
m30001| Mon Dec 17 15:33:02.940 [conn8] CMD: drop test.tmp.mrs.foo_1355776322_1
---- MapReduce results: ----
{
    "result" : "mrShardedOut",
    "counts" : {
        "input" : NumberLong(60000),
        "emit" : NumberLong(60000),
        "reduce" : NumberLong(0),
        "output" : NumberLong(64619)
    },
    "timeMillis" : 59987,
    "timing" : {
        "shardProcessing" : 14012,
        "postProcessing" : 45975
    },
    "shardCounts" : {
        "localhost:30000" : { "input" : 20, "emit" : 20, "reduce" : 0, "output" : 20 },
        "localhost:30001" : { "input" : 59980, "emit" : 59980, "reduce" : 0, "output" : 59980 }
    },
    "postProcessCounts" : {
        "localhost:30000" : { "input" : NumberLong(4619), "reduce" : NumberLong(0), "output" : NumberLong(4619) },
        "localhost:30001" : { "input" : NumberLong(60000), "reduce" : NumberLong(0), "output" : NumberLong(60000) }
    },
    "ok" : 1
}
assert: [60000] != [NumberLong(64619)] are not equal : MapReduce FAILED: res.counts.output = 64619, should be 60000
Error: Printing Stack Trace
    at printStackTrace (src/mongo/shell/utils.js:37:7)
    at doassert (src/mongo/shell/utils.js:58:1)
    at Function.assert.eq (src/mongo/shell/utils.js:88:1)
    at /mnt/slaves/Linux_32bit/mongo/jstests/sharding/mrShardedOutput.js:93:12
Mon Dec 17 15:33:02.997 exec error: src/mongo/shell/utils.js:59 [60000] != [NumberLong(64619)] are not equal : MapReduce FAILED: res.counts.output = 64619, should be 60000
throw msg;
^
failed to load: /mnt/slaves/Linux_32bit/mongo/jstests/sharding/mrShardedOutput.js
m30000| Mon Dec 17 15:33:02.999 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Mon Dec 17 15:33:02.999 [interruptThread] now exiting
m30000| Mon Dec 17 15:33:02.999 dbexit:
m30000| Mon Dec 17 15:33:02.999 [interruptThread] shutdown: going to close listening sockets...
m30000| Mon Dec 17 15:33:02.999 [interruptThread] closing listening socket: 13
m30000| Mon Dec 17 15:33:02.999 [interruptThread] closing listening socket: 14
m30000| Mon Dec 17 15:33:02.999 [interruptThread] closing listening socket: 15
m30000| Mon Dec 17 15:33:02.999 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Mon Dec 17 15:33:03.032 [interruptThread] shutdown: going to flush diaglog...
m30000| Mon Dec 17 15:33:03.032 [interruptThread] shutdown: going to close sockets...
m30000| Mon Dec 17 15:33:03.032 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Mon Dec 17 15:33:03.032 [interruptThread] shutdown: closing all files...
m30999| Mon Dec 17 15:33:03.038 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000]
m30001| Mon Dec 17 15:33:03.038 [conn5] end connection 127.0.0.1:42528 (8 connections now open)
m30000| Mon Dec 17 15:33:03.038 [conn15] end connection 127.0.0.1:39895 (17 connections now open)
m30999| Mon Dec 17 15:33:03.041 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed
m30999| Mon Dec 17 15:33:03.041 [WriteBackListener-localhost:30000] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('50cf812c5ec0810ee359b567') }
m30999| Mon Dec 17 15:33:03.042 [WriteBackListener-localhost:30000] Detecting bad connection created at 0 microSec, clearing pool for localhost:30000
m30999| Mon Dec 17 15:33:03.042 [WriteBackListener-localhost:30000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('50cf812c5ec0810ee359b567') }
m30000| Mon Dec 17 15:33:03.042 [conn9] end connection 127.0.0.1:39843 (16 connections now open)
m30000| Mon Dec 17 15:33:03.047 [conn12] end connection 127.0.0.1:39867 (15 connections now open)
m30000| Mon Dec 17 15:33:03.047 [conn17] end connection 127.0.0.1:39897 (14 connections now open)
m30001| Mon Dec 17 15:33:03.047 [conn9] end connection 127.0.0.1:42570 (7 connections now open)
m30000| Mon Dec 17 15:33:03.047 [conn16] end connection 127.0.0.1:39896 (13 connections now open)
m30000| Mon Dec 17 15:33:03.054 [interruptThread] closeAllFiles() finished
m30000| Mon Dec 17 15:33:03.054 [interruptThread] shutdown: removing fs lock...
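The assertion above is the point of this log: mapReduce reported 64619 output documents for 60000 inputs. The postProcessCounts break the overcount down exactly: shard0001 wrote all 60000 results while shard0000 wrote another 4619, and 60000 + 4619 = 64619. Chunks of the sharded output collection were being split and migrated while each shard ran its post-processing pass, so documents that moved were written and counted on both shards. A rough reconstruction of the failing check at mrShardedOutput.js:93, using the map2/reduce2 functions visible in the shardedfinish command earlier (a sketch, not the verbatim test source):

// Reconstructed sketch of the failing assertion (the real jstest may differ).
function map2() { emit(this._id, { count: 1, y: this.y }); }
function reduce2(key, values) { return values[0]; }

var res = db.foo.mapReduce(map2, reduce2,
                           { out: { replace: "mrShardedOut", sharded: true } });
// 60000 documents went in, so exactly 60000 results are expected out;
// here res.counts.output came back as 64619.
assert.eq(60000, res.counts.output,
          "MapReduce FAILED: res.counts.output = " + res.counts.output +
          ", should be 60000");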
m30000| Mon Dec 17 15:33:03.054 dbexit: really exiting now
m30001| Mon Dec 17 15:33:04.001 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Mon Dec 17 15:33:04.001 [interruptThread] now exiting
m30001| Mon Dec 17 15:33:04.001 dbexit:
m30001| Mon Dec 17 15:33:04.001 [interruptThread] shutdown: going to close listening sockets...
m30001| Mon Dec 17 15:33:04.002 [interruptThread] closing listening socket: 16
m30001| Mon Dec 17 15:33:04.002 [interruptThread] closing listening socket: 17
m30001| Mon Dec 17 15:33:04.005 [interruptThread] closing listening socket: 18
m30001| Mon Dec 17 15:33:04.005 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Mon Dec 17 15:33:04.005 [interruptThread] shutdown: going to flush diaglog...
m30001| Mon Dec 17 15:33:04.005 [interruptThread] shutdown: going to close sockets...
m30001| Mon Dec 17 15:33:04.005 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Mon Dec 17 15:33:04.005 [interruptThread] shutdown: closing all files...
m30001| Mon Dec 17 15:33:04.005 [conn6] end connection 127.0.0.1:42535 (6 connections now open)
m30001| Mon Dec 17 15:33:04.005 [conn8] end connection 127.0.0.1:42550 (5 connections now open)
m30001| Mon Dec 17 15:33:04.005 [conn4] end connection 127.0.0.1:42513 (4 connections now open)
m30999| Mon Dec 17 15:33:04.005 [WriteBackListener-localhost:30001] SocketException: remote: 127.0.0.1:30001 error: 9001 socket exception [0] server [127.0.0.1:30001]
m30999| Mon Dec 17 15:33:04.005 [WriteBackListener-localhost:30001] DBClientCursor::init call() failed
m30999| Mon Dec 17 15:33:04.005 [WriteBackListener-localhost:30001] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30001 ns: admin.$cmd query: { writebacklisten: ObjectId('50cf812c5ec0810ee359b567') }
m30999| Mon Dec 17 15:33:04.005 [WriteBackListener-localhost:30001] Detecting bad connection created at 0 microSec, clearing pool for localhost:30001
m30999| Mon Dec 17 15:33:04.005 [WriteBackListener-localhost:30001] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30001 ns: admin.$cmd query: { writebacklisten: ObjectId('50cf812c5ec0810ee359b567') }
m30001| Mon Dec 17 15:33:04.010 [conn7] end connection 127.0.0.1:42536 (3 connections now open)
m30999| Mon Dec 17 15:33:04.053 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000]
m30999| Mon Dec 17 15:33:04.053 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed
m30999| Mon Dec 17 15:33:04.054 [WriteBackListener-localhost:30000] Assertion: 13632:couldn't get updated shard list from config server
m30999| 0x865974d 0x8637a05 0x8619550 0x857abc5 0x8574c10 0x85bc5d1 0x861c359 0x861d1ee 0x86a623e 0x1c8542 0x5b8b6e
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15printStackTraceERSo+0x2d) [0x865974d]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo10logContextEPKc+0xa5) [0x8637a05]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo11msgassertedEiPKc+0xc0) [0x8619550]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo15StaticShardInfo6reloadEv+0xf35) [0x857abc5]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo5Shard15reloadShardInfoEv+0x20) [0x8574c10]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo17WriteBackListener3runEv+0x71) [0x85bc5d1]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xb9) [0x861c359]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos(_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7e) [0x861d1ee]
m30999| /mnt/slaves/Linux_32bit/mongo/mongos [0x86a623e]
m30999| /lib/i686/nosegneg/libpthread.so.0 [0x1c8542]
m30999| /lib/i686/nosegneg/libc.so.6(clone+0x5e) [0x5b8b6e]
m30999| Mon Dec 17 15:33:04.057 [WriteBackListener-localhost:30000] Detecting bad connection created at 0 microSec, clearing pool for localhost:30000
m30999| Mon Dec 17 15:33:04.058 [WriteBackListener-localhost:30000] WriteBackListener exception : couldn't get updated shard list from config server
m30001| Mon Dec 17 15:33:04.095 [interruptThread] closeAllFiles() finished
m30001| Mon Dec 17 15:33:04.095 [interruptThread] shutdown: removing fs lock...
m30001| Mon Dec 17 15:33
Mon Dec 17 15:33:06.268 got signal 15 (Terminated), will terminate after current cmd ends
Mon Dec 17 15:33:06.272 [interruptThread] now exiting
Mon Dec 17 15:33:06.272 dbexit:
Mon Dec 17 15:33:06.272 [interruptThread] shutdown: going to close listening sockets...
Mon Dec 17 15:33:06.272 [interruptThread] closing listening socket: 10
Mon Dec 17 15:33:06.272 [interruptThread] closing listening socket: 11
Mon Dec 17 15:33:06.272 [interruptThread] closing listening socket: 12
Mon Dec 17 15:33:06.272 [interruptThread] removing socket file: /tmp/mongodb-27999.sock
Mon Dec 17 15:33:06.272 [interruptThread] shutdown: going to flush diaglog...
Mon Dec 17 15:33:06.272 [interruptThread] shutdown: going to close sockets...
Mon Dec 17 15:33:06.272 [interruptThread] shutdown: waiting for fs preallocator...
Mon Dec 17 15:33:06.272 [interruptThread] shutdown: closing all files...
Mon Dec 17 15:33:06.273 [interruptThread] closeAllFiles() finished
Mon Dec 17 15:33:06.273 [interruptThread] shutdown: removing fs lock...
Mon Dec 17 15:33:06.297 dbexit: really exiting now
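The closing cascade of signal-15 shutdowns is ordinary jstest teardown rather than part of the failure: once the script throws, the harness stops the fixture. Roughly, and assuming the 2.x shell helper used by this suite (option names are an assumption about this test's fixture):

// Teardown sketch: ShardingTest.stop() sends SIGTERM (signal 15) to the
// mongos and to each mongod, which then log the "got signal 15 ...
// dbexit: really exiting now" sequence seen above.
var st = new ShardingTest({ shards: 2, mongos: 1, other: { chunkSize: 1 } });
// ... test body runs here ...
st.stop();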