Wed Feb 27 01:57:28.938 [conn8] end connection 127.0.0.1:60680 (0 connections now open)
Wed Feb 27 02:00:17.403 [initandlisten] connection accepted from 127.0.0.1:60774 #9 (1 connection now open)
MongoDB shell version: 2.4.0-rc2-pre-
Thiskeyisonlyforrunningthesuitewithauthenticationdontuseitinanytestsdirectly
Resetting db path '/data/db/auto20'
Wed Feb 27 02:00:17.855 shell: started program mongod.exe --port 30000 --dbpath /data/db/auto20 --keyFile D:\slave\Windows_64bit_2008+_Weekly_Slow_Tests\mongo\jstests\libs\authTestsKey --setParameter enableTestCommands=1
m30000| Wed Feb 27 02:00:17.902 [initandlisten] MongoDB starting : pid=8448 port=30000 dbpath=/data/db/auto20 64-bit host=AMAZONA-DFVK11N
m30000| Wed Feb 27 02:00:17.902 [initandlisten] db version v2.4.0-rc2-pre-, pdfile version 4.5
m30000| Wed Feb 27 02:00:17.902 [initandlisten] git version: 7e7866cc263c2af803ba2479b212c245f73d57bc
m30000| Wed Feb 27 02:00:17.902 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
m30000| Wed Feb 27 02:00:17.902 [initandlisten] allocator: system
m30000| Wed Feb 27 02:00:17.902 [initandlisten] options: { dbpath: "/data/db/auto20", keyFile: "D:\slave\Windows_64bit_2008+_Weekly_Slow_Tests\mongo\jstests\libs\authTestsKey", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Wed Feb 27 02:00:17.902 [initandlisten] journal dir=/data/db/auto20\journal
m30000| Wed Feb 27 02:00:17.902 [initandlisten] recover : no journal files present, no recovery needed
m30000| Wed Feb 27 02:00:18.042 [FileAllocator] allocating new datafile /data/db/auto20\local.ns, filling with zeroes...
m30000| Wed Feb 27 02:00:18.042 [FileAllocator] creating directory /data/db/auto20\_tmp
m30000| Wed Feb 27 02:00:18.089 [FileAllocator] done allocating datafile /data/db/auto20\local.ns, size: 16MB, took 0.048 secs
m30000| Wed Feb 27 02:00:18.089 [FileAllocator] allocating new datafile /data/db/auto20\local.0, filling with zeroes...
m30000| Wed Feb 27 02:00:18.276 [FileAllocator] done allocating datafile /data/db/auto20\local.0, size: 64MB, took 0.188 secs
m30000| Wed Feb 27 02:00:18.276 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 239ms
m30000| Wed Feb 27 02:00:18.276 [initandlisten] waiting for connections on port 30000
m30000| Wed Feb 27 02:00:18.276 [websvr] admin web console waiting for connections on port 31000
m30000| Wed Feb 27 02:00:18.370 [initandlisten] connection accepted from 127.0.0.1:60783 #1 (1 connection now open)
Resetting db path '/data/db/auto21'
Wed Feb 27 02:00:18.386 shell: started program mongod.exe --port 30001 --dbpath /data/db/auto21 --keyFile D:\slave\Windows_64bit_2008+_Weekly_Slow_Tests\mongo\jstests\libs\authTestsKey --setParameter enableTestCommands=1
m30001| Wed Feb 27 02:00:18.417 [initandlisten] MongoDB starting : pid=9804 port=30001 dbpath=/data/db/auto21 64-bit host=AMAZONA-DFVK11N
m30001| Wed Feb 27 02:00:18.417 [initandlisten] db version v2.4.0-rc2-pre-, pdfile version 4.5
m30001| Wed Feb 27 02:00:18.417 [initandlisten] git version: 7e7866cc263c2af803ba2479b212c245f73d57bc
m30001| Wed Feb 27 02:00:18.417 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
m30001| Wed Feb 27 02:00:18.417 [initandlisten] allocator: system
m30001| Wed Feb 27 02:00:18.417 [initandlisten] options: { dbpath: "/data/db/auto21", keyFile: "D:\slave\Windows_64bit_2008+_Weekly_Slow_Tests\mongo\jstests\libs\authTestsKey", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Wed Feb 27 02:00:18.432 [initandlisten] journal dir=/data/db/auto21\journal
m30001| Wed Feb 27 02:00:18.432 [initandlisten] recover : no journal files present, no recovery needed
m30001| Wed Feb 27 02:00:18.557 [FileAllocator] allocating new datafile /data/db/auto21\local.ns, filling with zeroes...
m30001| Wed Feb 27 02:00:18.557 [FileAllocator] creating directory /data/db/auto21\_tmp
m30001| Wed Feb 27 02:00:18.604 [FileAllocator] done allocating datafile /data/db/auto21\local.ns, size: 16MB, took 0.047 secs
m30001| Wed Feb 27 02:00:18.604 [FileAllocator] allocating new datafile /data/db/auto21\local.0, filling with zeroes...
m30001| Wed Feb 27 02:00:18.807 [FileAllocator] done allocating datafile /data/db/auto21\local.0, size: 64MB, took 0.193 secs
m30001| Wed Feb 27 02:00:18.807 [initandlisten] command local.$cmd command: { create: "startup_log", size: 10485760, capped: true } ntoreturn:1 keyUpdates:0 reslen:37 243ms
m30001| Wed Feb 27 02:00:18.807 [websvr] admin web console waiting for connections on port 31001
m30001| Wed Feb 27 02:00:18.807 [initandlisten] waiting for connections on port 30001
m30001| Wed Feb 27 02:00:18.900 [initandlisten] connection accepted from 127.0.0.1:60784 #1 (1 connection now open)
"localhost:30000"
m30000| Wed Feb 27 02:00:18.900 [initandlisten] connection accepted from 127.0.0.1:60785 #2 (2 connections now open)
ShardingTest auto2 : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] }
Wed Feb 27 02:00:18.900 shell: started program mongos.exe --port 30999 --configdb localhost:30000 -v --keyFile D:\slave\Windows_64bit_2008+_Weekly_Slow_Tests\mongo\jstests\libs\authTestsKey --chunkSize 50 --setParameter enableTestCommands=1
m30999| Wed Feb 27 02:00:18.900 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Wed Feb 27 02:00:18.900 security key: Thiskeyisonlyforrunningthesuitewithauthenticationdontuseitinanytestsdirectly
m30999| Wed Feb 27 02:00:18.916 [mongosMain] MongoS version 2.4.0-rc2-pre- starting: pid=3804 port=30999 64-bit host=AMAZONA-DFVK11N (--help for usage)
m30999| Wed Feb 27 02:00:18.916 [mongosMain] git version: 7e7866cc263c2af803ba2479b212c245f73d57bc
m30999| Wed Feb 27 02:00:18.916 [mongosMain] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
m30999| Wed Feb 27 02:00:18.916 [mongosMain] options: { chunkSize: 50, configdb: "localhost:30000", keyFile: "D:\slave\Windows_64bit_2008+_Weekly_Slow_Tests\mongo\jstests\libs\authTestsKey", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Wed Feb 27 02:00:18.916 [mongosMain] config string : localhost:30000
m30999| Wed Feb 27 02:00:18.916 [mongosMain] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:00:18.916 BackgroundJob starting: ConnectBG
m30000| Wed Feb 27 02:00:18.916 [initandlisten] connection accepted from 127.0.0.1:60787 #3 (3 connections now open)
m30999| Wed Feb 27 02:00:18.932 [mongosMain] connected connection!
m30000| Wed Feb 27 02:00:18.932 [conn3] note: no users configured in admin.system.users, allowing localhost access
m30000| Wed Feb 27 02:00:18.932 [conn3] authenticate db: local { authenticate: 1, nonce: "e1032e5da98794a8", user: "__system", key: "ab8628b6d4702911521d551c4f0d5b0b" }
m30999| Wed Feb 27 02:00:18.932 [mongosMain] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:00:18.932 BackgroundJob starting: CheckConfigServers
m30999| Wed Feb 27 02:00:18.932 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:00:18.932 [mongosMain] connected connection!
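Startup output like the above is what a ShardingTest-based jstest produces when the suite is run with keyfile authentication. The test file itself is not part of this log; the following is only a sketch of the kind of setup that matches these lines, and the constructor option names and values here are assumptions, not taken from the real test.

// Hypothetical ShardingTest setup matching the processes started above
// (two shards on 30000/30001, two mongos on 30999/30998, keyFile auth,
// --chunkSize 50); option spellings are assumptions.
var st = new ShardingTest({
    name: "auto2",
    shards: 2,
    mongos: 2,
    keyFile: "jstests/libs/authTestsKey",
    other: { chunkSize: 50 }
});
var s = st.s0;   // first mongos (port 30999 in this run)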
m30000| Wed Feb 27 02:00:18.932 [initandlisten] connection accepted from 127.0.0.1:60788 #4 (4 connections now open)
m30000| Wed Feb 27 02:00:18.932 [conn4] authenticate db: local { authenticate: 1, nonce: "9568046841bfaf35", user: "__system", key: "677fb0d6b29cebe0f7291dc57b257e22" }
m30000| Wed Feb 27 02:00:18.932 [conn4] CMD fsync: sync:1 lock:0
m30000| Wed Feb 27 02:00:20.679 [conn4] command admin.$cmd command: { fsync: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:402 reslen:51 1757ms
m30999| Wed Feb 27 02:00:20.679 [mongosMain] created new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Wed Feb 27 02:00:20.694 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 )
m30999| Wed Feb 27 02:00:20.694 [LockPinger] creating distributed lock ping thread for localhost:30000 and process AMAZONA-DFVK11N:30999:1361948420:41 (sleeping for 30000ms)
m30999| Wed Feb 27 02:00:20.694 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30999| Wed Feb 27 02:00:20.694 [mongosMain] about to acquire distributed lock 'configUpgrade/AMAZONA-DFVK11N:30999:1361948420:41:
m30999| { "state" : 1,
m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:mongosMain:18467",
m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41",
m30999| "when" : { "$date" : "Wed Feb 27 02:00:20 2013" },
m30999| "why" : "upgrading config database to new format v4",
m30999| "ts" : { "$oid" : "512daf040c9ae827b8ef238f" } }
m30999| { "_id" : "configUpgrade",
m30999| "state" : 0 }
m30000| Wed Feb 27 02:00:20.694 [FileAllocator] allocating new datafile /data/db/auto20\config.ns, filling with zeroes...
m30000| Wed Feb 27 02:00:20.741 [FileAllocator] done allocating datafile /data/db/auto20\config.ns, size: 16MB, took 0.048 secs
m30000| Wed Feb 27 02:00:20.741 [FileAllocator] allocating new datafile /data/db/auto20\config.0, filling with zeroes...
m30000| Wed Feb 27 02:00:20.928 [FileAllocator] done allocating datafile /data/db/auto20\config.0, size: 64MB, took 0.189 secs
m30000| Wed Feb 27 02:00:20.928 [FileAllocator] allocating new datafile /data/db/auto20\config.1, filling with zeroes...
m30000| Wed Feb 27 02:00:20.928 [conn3] build index config.lockpings { _id: 1 }
m30000| Wed Feb 27 02:00:20.928 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Wed Feb 27 02:00:20.928 [conn3] update config.lockpings query: { _id: "AMAZONA-DFVK11N:30999:1361948420:41" } update: { $set: { ping: new Date(1361948420694) } } nscanned:0 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) w:242629 242ms
m30000| Wed Feb 27 02:00:20.928 [conn4] build index config.locks { _id: 1 }
m30000| Wed Feb 27 02:00:20.928 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Feb 27 02:00:20.928 [LockPinger] cluster localhost:30000 pinged successfully at Wed Feb 27 02:00:20 2013 by distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30999:1361948420:41', sleeping for 30000ms
m30000| Wed Feb 27 02:00:20.928 [conn3] build index config.lockpings { ping: new Date(1) }
m30000| Wed Feb 27 02:00:20.928 [conn3] build index done. scanned 1 total records. 0.002 secs
m30999| Wed Feb 27 02:00:20.944 [mongosMain] distributed lock 'configUpgrade/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf040c9ae827b8ef238f
m30999| Wed Feb 27 02:00:20.944 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Wed Feb 27 02:00:20.944 [mongosMain] starting next upgrade step from v0 to v4
m30999| Wed Feb 27 02:00:20.944 [mongosMain] about to log new metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:20-512daf040c9ae827b8ef2390", server: "AMAZONA-DFVK11N", clientAddr: "N/A", time: new Date(1361948420944), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Wed Feb 27 02:00:20.944 [conn4] build index config.changelog { _id: 1 }
m30000| Wed Feb 27 02:00:20.944 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Feb 27 02:00:20.944 [mongosMain] writing initial config version at v4
m30000| Wed Feb 27 02:00:20.944 [conn4] build index config.version { _id: 1 }
m30000| Wed Feb 27 02:00:20.944 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Wed Feb 27 02:00:20.944 [mongosMain] about to log new metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:20-512daf040c9ae827b8ef2392", server: "AMAZONA-DFVK11N", clientAddr: "N/A", time: new Date(1361948420944), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Wed Feb 27 02:00:20.944 [mongosMain] upgrade of config server to v4 successful
m30999| Wed Feb 27 02:00:20.944 [mongosMain] distributed lock 'configUpgrade/AMAZONA-DFVK11N:30999:1361948420:41' unlocked.
m30000| Wed Feb 27 02:00:20.944 [conn3] build index config.settings { _id: 1 }
m30999| Wed Feb 27 02:00:20.944 [mongosMain] waiting for connections on port 30999
m30999| Wed Feb 27 02:00:20.944 [websvr] admin web console waiting for connections on port 31999
m30999| Wed Feb 27 02:00:20.944 BackgroundJob starting: Balancer
m30999| Wed Feb 27 02:00:20.944 [Balancer] about to contact config servers and shards
m30000| Wed Feb 27 02:00:20.944 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Wed Feb 27 02:00:20.944 BackgroundJob starting: cursorTimeout
m30999| Wed Feb 27 02:00:20.944 BackgroundJob starting: PeriodicTask::Runner
m30000| Wed Feb 27 02:00:20.944 [conn3] build index config.chunks { _id: 1 }
m30000| Wed Feb 27 02:00:20.944 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Feb 27 02:00:20.944 [conn3] info: creating collection config.chunks on add index
m30000| Wed Feb 27 02:00:20.944 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Wed Feb 27 02:00:20.944 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Feb 27 02:00:20.944 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Wed Feb 27 02:00:20.944 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Feb 27 02:00:20.944 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Wed Feb 27 02:00:20.944 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Feb 27 02:00:20.944 [conn3] build index config.shards { _id: 1 }
m30000| Wed Feb 27 02:00:20.944 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Feb 27 02:00:20.944 [conn3] info: creating collection config.shards on add index
m30000| Wed Feb 27 02:00:20.944 [conn3] build index config.shards { host: 1 }
m30000| Wed Feb 27 02:00:20.944 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Feb 27 02:00:20.960 [conn3] build index config.mongos { _id: 1 }
m30000| Wed Feb 27 02:00:20.960 [initandlisten] connection accepted from 127.0.0.1:60791 #5 (5 connections now open)
m30000| Wed Feb 27 02:00:20.960 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Wed Feb 27 02:00:20.960 [conn5] authenticate db: local { authenticate: 1, nonce: "bec4037caa351e58", user: "__system", key: "46464a5150d65928b730d42dfab85f24" }
m30999| Wed Feb 27 02:00:20.960 [Balancer] config servers and shards contacted successfully
m30999| Wed Feb 27 02:00:20.960 [Balancer] balancer id: AMAZONA-DFVK11N:30999 started at Feb 27 02:00:20
m30999| Wed Feb 27 02:00:20.960 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Wed Feb 27 02:00:20.960 [Balancer] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:00:20.960 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:00:20.960 [Balancer] connected connection!
m30999| Wed Feb 27 02:00:20.960 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Feb 27 02:00:20.960 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 )
m30999| Wed Feb 27 02:00:20.960 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Wed Feb 27 02:00:20.960 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41:
m30999| { "state" : 1,
m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:Balancer:41",
m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41",
m30999| "when" : { "$date" : "Wed Feb 27 02:00:20 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512daf040c9ae827b8ef2394" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Wed Feb 27 02:00:20.960 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf040c9ae827b8ef2394
m30999| Wed Feb 27 02:00:20.960 [Balancer] *** start balancing round
m30999| Wed Feb 27 02:00:20.960 [Balancer] waitForDelete: 0
m30999| Wed Feb 27 02:00:20.960 [Balancer] secondaryThrottle: 1
m30999| Wed Feb 27 02:00:20.960 [Balancer] no collections to balance
m30999| Wed Feb 27 02:00:20.960 [Balancer] no need to move any chunk
m30999| Wed Feb 27 02:00:20.960 [Balancer] *** end of balancing round
m30999| Wed Feb 27 02:00:20.960 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' unlocked.
m30999| Wed Feb 27 02:00:21.131 [mongosMain] connection accepted from 127.0.0.1:60789 #1 (1 connection now open)
m30999| Wed Feb 27 02:00:21.131 [conn1] couldn't find database [admin] in config db
m30000| Wed Feb 27 02:00:21.131 [conn3] build index config.databases { _id: 1 }
Wed Feb 27 02:00:21.131 shell: started program mongos.exe --port 30998 --configdb localhost:30000 -v --keyFile D:\slave\Windows_64bit_2008+_Weekly_Slow_Tests\mongo\jstests\libs\authTestsKey --chunkSize 50 --setParameter enableTestCommands=1
m30000| Wed Feb 27 02:00:21.131 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Wed Feb 27 02:00:21.131 [conn1] put [admin] on: config:localhost:30000
m30999| Wed Feb 27 02:00:21.131 [conn1] note: no users configured in admin.system.users, allowing localhost access
m30998| Wed Feb 27 02:00:21.131 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Wed Feb 27 02:00:21.131 security key: Thiskeyisonlyforrunningthesuitewithauthenticationdontuseitinanytestsdirectly
m30998| Wed Feb 27 02:00:21.131 [mongosMain] MongoS version 2.4.0-rc2-pre- starting: pid=8668 port=30998 64-bit host=AMAZONA-DFVK11N (--help for usage)
m30998| Wed Feb 27 02:00:21.131 [mongosMain] git version: 7e7866cc263c2af803ba2479b212c245f73d57bc
m30998| Wed Feb 27 02:00:21.131 [mongosMain] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
m30998| Wed Feb 27 02:00:21.131 [mongosMain] options: { chunkSize: 50, configdb: "localhost:30000", keyFile: "D:\slave\Windows_64bit_2008+_Weekly_Slow_Tests\mongo\jstests\libs\authTestsKey", port: 30998, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30998| Wed Feb 27 02:00:21.131 [mongosMain] config string : localhost:30000
m30998| Wed Feb 27 02:00:21.131 [mongosMain] creating new connection to:localhost:30000
m30998| Wed Feb 27 02:00:21.131 BackgroundJob starting: ConnectBG
m30000| Wed Feb 27 02:00:21.147 [initandlisten] connection accepted from 127.0.0.1:60794 #6 (6 connections now open)
m30998| Wed Feb 27 02:00:21.162 [mongosMain] connected connection!
m30000| Wed Feb 27 02:00:21.162 [conn6] authenticate db: local { authenticate: 1, nonce: "64ff7fcbbc4f3d5a", user: "__system", key: "519068104639285f8a50a8bc2d3724ca" }
m30998| Wed Feb 27 02:00:21.162 [mongosMain] creating new connection to:localhost:30000
m30998| Wed Feb 27 02:00:21.162 BackgroundJob starting: CheckConfigServers
m30998| Wed Feb 27 02:00:21.162 BackgroundJob starting: ConnectBG
m30998| Wed Feb 27 02:00:21.162 [mongosMain] connected connection!
m30000| Wed Feb 27 02:00:21.162 [initandlisten] connection accepted from 127.0.0.1:60795 #7 (7 connections now open)
m30000| Wed Feb 27 02:00:21.162 [conn7] authenticate db: local { authenticate: 1, nonce: "e544f642bd08d766", user: "__system", key: "fc043041b89a81581c7998304209fa84" }
m30998| Wed Feb 27 02:00:21.162 [mongosMain] MaxChunkSize: 50
m30998| Wed Feb 27 02:00:21.162 BackgroundJob starting: Balancer
m30998| Wed Feb 27 02:00:21.162 [Balancer] about to contact config servers and shards
m30998| Wed Feb 27 02:00:21.162 BackgroundJob starting: cursorTimeout
m30998| Wed Feb 27 02:00:21.162 [websvr] admin web console waiting for connections on port 31998
m30998| Wed Feb 27 02:00:21.162 [mongosMain] waiting for connections on port 30998
m30998| Wed Feb 27 02:00:21.162 [Balancer] config servers and shards contacted successfully
m30998| Wed Feb 27 02:00:21.162 [Balancer] balancer id: AMAZONA-DFVK11N:30998 started at Feb 27 02:00:21
m30998| Wed Feb 27 02:00:21.162 BackgroundJob starting: PeriodicTask::Runner
m30998| Wed Feb 27 02:00:21.162 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Wed Feb 27 02:00:21.162 [Balancer] creating new connection to:localhost:30000
m30998| Wed Feb 27 02:00:21.162 BackgroundJob starting: ConnectBG
m30998| Wed Feb 27 02:00:21.162 [Balancer] connected connection!
m30000| Wed Feb 27 02:00:21.162 [initandlisten] connection accepted from 127.0.0.1:60796 #8 (8 connections now open)
m30000| Wed Feb 27 02:00:21.162 [conn8] authenticate db: local { authenticate: 1, nonce: "9e0b7990ecca9822", user: "__system", key: "9067d786f51832c5c72139f2bc7eff8f" }
m30998| Wed Feb 27 02:00:21.162 [Balancer] Refreshing MaxChunkSize: 50
m30998| Wed Feb 27 02:00:21.162 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30998:1361948421:41 )
m30998| Wed Feb 27 02:00:21.162 [LockPinger] creating distributed lock ping thread for localhost:30000 and process AMAZONA-DFVK11N:30998:1361948421:41 (sleeping for 30000ms)
m30998| Wed Feb 27 02:00:21.162 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41:
m30998| { "state" : 1,
m30998| "who" : "AMAZONA-DFVK11N:30998:1361948421:41:Balancer:18467",
m30998| "process" : "AMAZONA-DFVK11N:30998:1361948421:41",
m30998| "when" : { "$date" : "Wed Feb 27 02:00:21 2013" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "512daf058fcf9d0e1dbd1e04" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "512daf040c9ae827b8ef2394" } }
m30998| Wed Feb 27 02:00:21.162 [LockPinger] cluster localhost:30000 pinged successfully at Wed Feb 27 02:00:21 2013 by distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30998:1361948421:41', sleeping for 30000ms
m30998| Wed Feb 27 02:00:21.162 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' acquired, ts : 512daf058fcf9d0e1dbd1e04
m30998| Wed Feb 27 02:00:21.162 [Balancer] *** start balancing round
m30998| Wed Feb 27 02:00:21.162 [Balancer] waitForDelete: 0
m30998| Wed Feb 27 02:00:21.162 [Balancer] secondaryThrottle: 1
m30998| Wed Feb 27 02:00:21.162 [Balancer] no collections to balance
m30998| Wed Feb 27 02:00:21.162 [Balancer] no need to move any chunk
m30998| Wed Feb 27 02:00:21.162 [Balancer] *** end of balancing round
m30998| Wed Feb 27 02:00:21.162 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' unlocked.
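Both mongos processes (m30999 and m30998) run their own Balancer thread, and each round they contend for the single 'balancer' document in config.locks on the config server. A small sketch of how those lock documents can be inspected from a shell connected to either mongos; config.locks and config.lockpings are the standard config metadata collections named in the log:

// Sketch: inspect the distributed locks the two balancers take turns acquiring.
var conf = db.getSiblingDB("config");
printjson(conf.locks.find({ _id: "balancer" }).toArray());   // current balancer lock document
conf.lockpings.find().forEach(printjson);                    // one ping document per lock-holding process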
m30000| Wed Feb 27 02:00:21.318 [FileAllocator] done allocating datafile /data/db/auto20\config.1, size: 128MB, took 0.383 secs
m30998| Wed Feb 27 02:00:21.646 [mongosMain] connection accepted from 127.0.0.1:60793 #1 (1 connection now open)
m30998| Wed Feb 27 02:00:21.646 [conn1] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30998| Wed Feb 27 02:00:21.646 [conn1] note: no users configured in admin.system.users, allowing localhost access
m30999| Wed Feb 27 02:00:21.646 [conn1] authenticate db: admin { authenticate: 1, nonce: "61ebd3d51ce5c97c", user: "admin", key: "2d8c3e25acef3a6f5d2947d290624590" }
m30999| Wed Feb 27 02:00:21.646 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:22.660 [conn1] authenticate db: admin { authenticate: 1, nonce: "72f404bb5a6ca0ba", user: "admin", key: "aa75644eac69b6b5e9f2eae16c97fdb5" }
m30999| Wed Feb 27 02:00:22.660 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:23.674 [conn1] authenticate db: admin { authenticate: 1, nonce: "15e28e41a17ca74a", user: "admin", key: "1548e6cb57a141b02c93ca5942f566d1" }
m30999| Wed Feb 27 02:00:23.674 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:24.688 [conn1] authenticate db: admin { authenticate: 1, nonce: "dd95f9e7d05c957b", user: "admin", key: "06ec74fdaa8b412e578ddae0590eadcc" }
m30999| Wed Feb 27 02:00:24.688 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:25.702 [conn1] authenticate db: admin { authenticate: 1, nonce: "674bec1dff70387e", user: "admin", key: "f77e7656f660df58bbd7967f3975468e" }
m30999| Wed Feb 27 02:00:25.702 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:26.716 [conn1] authenticate db: admin { authenticate: 1, nonce: "f823e5859fb0b68f", user: "admin", key: "0e3c653648ea9f4957147dc3b2495c5c" }
m30999| Wed Feb 27 02:00:26.716 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
m30999| Wed Feb 27 02:00:26.966 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Feb 27 02:00:26.966 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 )
m30999| Wed Feb 27 02:00:26.966 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41:
m30999| { "state" : 1,
m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:Balancer:41",
m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41",
m30999| "when" : { "$date" : "Wed Feb 27 02:00:26 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512daf0a0c9ae827b8ef2395" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512daf058fcf9d0e1dbd1e04" } } m30999| Wed Feb 27 02:00:26.966 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf0a0c9ae827b8ef2395 m30999| Wed Feb 27 02:00:26.966 [Balancer] *** start balancing round m30999| Wed Feb 27 02:00:26.966 [Balancer] waitForDelete: 0 m30999| Wed Feb 27 02:00:26.966 [Balancer] secondaryThrottle: 1 m30999| Wed Feb 27 02:00:26.966 [Balancer] no collections to balance m30999| Wed Feb 27 02:00:26.966 [Balancer] no need to move any chunk m30999| Wed Feb 27 02:00:26.966 [Balancer] *** end of balancing round m30999| Wed Feb 27 02:00:26.966 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' unlocked. m30998| Wed Feb 27 02:00:27.168 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:00:27.168 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30998:1361948421:41 ) m30998| Wed Feb 27 02:00:27.168 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41: m30998| { "state" : 1, m30998| "who" : "AMAZONA-DFVK11N:30998:1361948421:41:Balancer:18467", m30998| "process" : "AMAZONA-DFVK11N:30998:1361948421:41", m30998| "when" : { "$date" : "Wed Feb 27 02:00:27 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "512daf0b8fcf9d0e1dbd1e05" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "512daf0a0c9ae827b8ef2395" } } m30998| Wed Feb 27 02:00:27.168 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' acquired, ts : 512daf0b8fcf9d0e1dbd1e05 m30998| Wed Feb 27 02:00:27.168 [Balancer] *** start balancing round m30998| Wed Feb 27 02:00:27.168 [Balancer] waitForDelete: 0 m30998| Wed Feb 27 02:00:27.168 [Balancer] secondaryThrottle: 1 m30998| Wed Feb 27 02:00:27.168 [Balancer] no collections to balance m30998| Wed Feb 27 02:00:27.168 [Balancer] no need to move any chunk m30998| Wed Feb 27 02:00:27.168 [Balancer] *** end of balancing round m30998| Wed Feb 27 02:00:27.168 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' unlocked. 
Caught exception while authenticating connection: "[Authenticating connection: connection to localhost:30999] timed out after 5000ms ( 6 tries )"
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:27.730 [conn1] authenticate db: admin { authenticate: 1, nonce: "fa722efe61debb29", user: "admin", key: "b1d1df0475569f7383513e4bff3836cc" }
m30999| Wed Feb 27 02:00:27.730 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:28.744 [conn1] authenticate db: admin { authenticate: 1, nonce: "631bb8c99945d5fe", user: "admin", key: "d703539d7051c2c7aed735f937efbf9d" }
m30999| Wed Feb 27 02:00:28.744 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:29.758 [conn1] authenticate db: admin { authenticate: 1, nonce: "c97bf32e624972fb", user: "admin", key: "1d0c084f6295dd4aaa24c3bac0e2849e" }
m30999| Wed Feb 27 02:00:29.758 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:30.772 [conn1] authenticate db: admin { authenticate: 1, nonce: "e5e7794109208b32", user: "admin", key: "09605f39d63df7aa4658af39707a8283" }
m30999| Wed Feb 27 02:00:30.772 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:31.786 [conn1] authenticate db: admin { authenticate: 1, nonce: "147b4282c7f0481b", user: "admin", key: "7b3ef9cd96709b18efe0862fdd815b28" }
m30999| Wed Feb 27 02:00:31.786 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:32.800 [conn1] authenticate db: admin { authenticate: 1, nonce: "34773b72f21a51c7", user: "admin", key: "d05869832007e075cdda5fb0022dbd6f" }
m30999| Wed Feb 27 02:00:32.800 [conn1] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
m30999| Wed Feb 27 02:00:32.972 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Feb 27 02:00:32.972 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 )
m30999| Wed Feb 27 02:00:32.972 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41:
m30999| { "state" : 1,
m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:Balancer:41",
m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41",
m30999| "when" : { "$date" : "Wed Feb 27 02:00:32 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512daf100c9ae827b8ef2396" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512daf0b8fcf9d0e1dbd1e05" } }
m30999| Wed Feb 27 02:00:32.972 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf100c9ae827b8ef2396
m30999| Wed Feb 27 02:00:32.972 [Balancer] *** start balancing round
m30999| Wed Feb 27 02:00:32.972 [Balancer] waitForDelete: 0
m30999| Wed Feb 27 02:00:32.972 [Balancer] secondaryThrottle: 1
m30999| Wed Feb 27 02:00:32.972 [Balancer] no collections to balance
m30999| Wed Feb 27 02:00:32.972 [Balancer] no need to move any chunk
m30999| Wed Feb 27 02:00:32.972 [Balancer] *** end of balancing round
m30999| Wed Feb 27 02:00:32.972 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' unlocked.
m30998| Wed Feb 27 02:00:33.174 [Balancer] Refreshing MaxChunkSize: 50
m30998| Wed Feb 27 02:00:33.174 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30998:1361948421:41 )
m30998| Wed Feb 27 02:00:33.174 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41:
m30998| { "state" : 1,
m30998| "who" : "AMAZONA-DFVK11N:30998:1361948421:41:Balancer:18467",
m30998| "process" : "AMAZONA-DFVK11N:30998:1361948421:41",
m30998| "when" : { "$date" : "Wed Feb 27 02:00:33 2013" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "512daf118fcf9d0e1dbd1e06" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "512daf100c9ae827b8ef2396" } }
m30998| Wed Feb 27 02:00:33.174 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' acquired, ts : 512daf118fcf9d0e1dbd1e06
m30998| Wed Feb 27 02:00:33.174 [Balancer] *** start balancing round
m30998| Wed Feb 27 02:00:33.174 [Balancer] waitForDelete: 0
m30998| Wed Feb 27 02:00:33.174 [Balancer] secondaryThrottle: 1
m30998| Wed Feb 27 02:00:33.174 [Balancer] no collections to balance
m30998| Wed Feb 27 02:00:33.174 [Balancer] no need to move any chunk
m30998| Wed Feb 27 02:00:33.174 [Balancer] *** end of balancing round
m30998| Wed Feb 27 02:00:33.174 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' unlocked.
Caught exception while authenticating connection: "[Authenticating connection: connection to localhost:30999] timed out after 5000ms ( 6 tries )"
ShardingTest undefined going to add shard : localhost:30000
m30999| Wed Feb 27 02:00:33.814 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Wed Feb 27 02:00:33.814 [conn1] creating new connection to:localhost:30001
m30999| Wed Feb 27 02:00:33.814 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:00:33.814 [conn1] connected connection!
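The "ShardingTest undefined going to add shard" lines correspond to addShard commands issued through the first mongos. A sketch of what those calls look like from the shell, with the expected replies taken from the log; the connection variable name is an assumption:

// Sketch: registering the two shards through the 30999 mongos.
var admin = db.getSiblingDB("admin");
printjson(admin.runCommand({ addShard: "localhost:30000" }));   // -> { "shardAdded" : "shard0000", "ok" : 1 }
printjson(admin.runCommand({ addShard: "localhost:30001" }));   // -> { "shardAdded" : "shard0001", "ok" : 1 }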
m30001| Wed Feb 27 02:00:33.814 [initandlisten] connection accepted from 127.0.0.1:60808 #2 (2 connections now open)
m30001| Wed Feb 27 02:00:33.814 [conn2] note: no users configured in admin.system.users, allowing localhost access
m30001| Wed Feb 27 02:00:33.814 [conn2] authenticate db: local { authenticate: 1, nonce: "cc96d01ee2f52190", user: "__system", key: "0baaec4f217ae5f87809f6812832d0dd" }
m30999| Wed Feb 27 02:00:33.814 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
Adding admin user on connection: connection to localhost:30000
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30000
m30000| Wed Feb 27 02:00:33.830 [conn2] authenticate db: admin { authenticate: 1, nonce: "1cdf43b8b8ec8432", user: "admin", key: "76b414952c87654711462cbd845f644e" }
m30000| Wed Feb 27 02:00:33.830 [conn2] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30000
m30000| Wed Feb 27 02:00:34.844 [conn2] authenticate db: admin { authenticate: 1, nonce: "88f046bceee8207b", user: "admin", key: "15be96f24180bc189d5321cc6d306139" }
m30000| Wed Feb 27 02:00:34.844 [conn2] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30000
m30000| Wed Feb 27 02:00:35.858 [conn2] authenticate db: admin { authenticate: 1, nonce: "c941462ae8470223", user: "admin", key: "5284c4470f5e5af7e3e054628a525c00" }
m30000| Wed Feb 27 02:00:35.858 [conn2] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30000
m30000| Wed Feb 27 02:00:36.872 [conn2] authenticate db: admin { authenticate: 1, nonce: "9c935ef8789fda97", user: "admin", key: "f5c379c2a60d604f671b4dfd64cd8001" }
m30000| Wed Feb 27 02:00:36.872 [conn2] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30000
m30000| Wed Feb 27 02:00:37.886 [conn2] authenticate db: admin { authenticate: 1, nonce: "c370379d4fdeb62a", user: "admin", key: "a95dfbf31c245026e27c43dfd21ecfce" }
m30000| Wed Feb 27 02:00:37.886 [conn2] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30000
m30000| Wed Feb 27 02:00:38.900 [conn2] authenticate db: admin { authenticate: 1, nonce: "48f6c3cba3f1aba6", user: "admin", key: "0f1fc493c99298f3a8c58ba6a040ef13" }
m30000| Wed Feb 27 02:00:38.900 [conn2] auth: couldn't find user admin@admin, admin.system.users
Error: 18 { ok: 0.0, errmsg: "auth fails" }
m30999| Wed Feb 27 02:00:38.978 [Balancer] Refreshing MaxChunkSize: 50
m30999| Wed Feb 27 02:00:38.978 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 )
m30999| Wed Feb 27 02:00:38.978 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41:
m30999| { "state" : 1,
m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:Balancer:41",
m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41",
m30999| "when" : { "$date" : "Wed Feb 27 02:00:38 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512daf160c9ae827b8ef2397" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512daf118fcf9d0e1dbd1e06" } }
m30999| Wed Feb 27 02:00:38.978 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf160c9ae827b8ef2397
m30999| Wed Feb 27 02:00:38.978 [Balancer] *** start balancing round
m30999| Wed Feb 27 02:00:38.978 [Balancer] waitForDelete: 0
m30999| Wed Feb 27 02:00:38.978 [Balancer] secondaryThrottle: 1
m30999| Wed Feb 27 02:00:38.978 [Balancer] no collections to balance
m30999| Wed Feb 27 02:00:38.978 [Balancer] no need to move any chunk
m30999| Wed Feb 27 02:00:38.978 [Balancer] *** end of balancing round
m30999| Wed Feb 27 02:00:38.978 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' unlocked.
m30998| Wed Feb 27 02:00:39.180 [Balancer] Refreshing MaxChunkSize: 50
m30998| Wed Feb 27 02:00:39.180 [Balancer] creating new connection to:localhost:30001
m30998| Wed Feb 27 02:00:39.180 BackgroundJob starting: ConnectBG
m30998| Wed Feb 27 02:00:39.180 [Balancer] connected connection!
m30001| Wed Feb 27 02:00:39.180 [initandlisten] connection accepted from 127.0.0.1:60814 #3 (3 connections now open)
m30001| Wed Feb 27 02:00:39.180 [conn3] authenticate db: local { authenticate: 1, nonce: "66efadb6af2095a6", user: "__system", key: "88a92ea247ef98ce0bf75b1229705518" }
m30998| Wed Feb 27 02:00:39.180 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30998:1361948421:41 )
m30998| Wed Feb 27 02:00:39.180 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41:
m30998| { "state" : 1,
m30998| "who" : "AMAZONA-DFVK11N:30998:1361948421:41:Balancer:18467",
m30998| "process" : "AMAZONA-DFVK11N:30998:1361948421:41",
m30998| "when" : { "$date" : "Wed Feb 27 02:00:39 2013" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "512daf178fcf9d0e1dbd1e07" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "512daf160c9ae827b8ef2397" } }
m30998| Wed Feb 27 02:00:39.180 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' acquired, ts : 512daf178fcf9d0e1dbd1e07
m30998| Wed Feb 27 02:00:39.180 [Balancer] *** start balancing round
m30998| Wed Feb 27 02:00:39.180 [Balancer] waitForDelete: 0
m30998| Wed Feb 27 02:00:39.180 [Balancer] secondaryThrottle: 1
m30998| Wed Feb 27 02:00:39.180 [Balancer] no collections to balance
m30998| Wed Feb 27 02:00:39.180 [Balancer] no need to move any chunk
m30998| Wed Feb 27 02:00:39.180 [Balancer] *** end of balancing round
m30998| Wed Feb 27 02:00:39.180 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' unlocked.
Caught exception while authenticating connection: "[Authenticating connection: connection to localhost:30000] timed out after 5000ms ( 6 tries )"
{ "user" : "admin", "readOnly" : false, "pwd" : "90f500568434c37b61c8c1ce05fdf3ae", "_id" : ObjectId("512daf17bc2c1eb1ee5f0cf5") }
m30000| Wed Feb 27 02:00:39.914 [FileAllocator] allocating new datafile /data/db/auto20\admin.ns, filling with zeroes...
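The "Adding admin user on connection" step and the `{ "user" : "admin", "readOnly" : false, "pwd" : ... }` document above show the 2.4-era shell inserting the admin credentials into admin.system.users on the config/shard server at port 30000. A sketch of that step using the shell API of that release; the password literal below is a placeholder, not the one used by the test:

// Sketch: creating the admin user (db.addUser is the 2.4-era helper,
// replaced by db.createUser in later releases).
var adminDB = connect("localhost:30000/admin");
adminDB.addUser("admin", "password");                               // writes to admin.system.users
assert(adminDB.auth("admin", "password"), "admin auth should now succeed");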
m30000| Wed Feb 27 02:00:39.960 [FileAllocator] done allocating datafile /data/db/auto20\admin.ns, size: 16MB, took 0.048 secs
m30000| Wed Feb 27 02:00:39.960 [FileAllocator] allocating new datafile /data/db/auto20\admin.0, filling with zeroes...
m30000| Wed Feb 27 02:00:40.148 [FileAllocator] done allocating datafile /data/db/auto20\admin.0, size: 64MB, took 0.191 secs
m30000| Wed Feb 27 02:00:40.148 [FileAllocator] allocating new datafile /data/db/auto20\admin.1, filling with zeroes...
m30000| Wed Feb 27 02:00:40.148 [conn2] build index admin.system.users { _id: 1 }
m30000| Wed Feb 27 02:00:40.148 [conn2] build index done. scanned 0 total records. 0.001 secs
m30000| Wed Feb 27 02:00:40.148 [conn2] build index admin.system.users { user: 1, userSource: 1 }
m30000| Wed Feb 27 02:00:40.148 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Wed Feb 27 02:00:40.148 [conn2] insert admin.system.users ninserted:1 keyUpdates:0 245ms
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30000
m30000| Wed Feb 27 02:00:40.148 [conn2] authenticate db: admin { authenticate: 1, nonce: "a7e9278266192679", user: "admin", key: "c9c9eef80b01b7d328af81e278912874" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30000
m30000| Wed Feb 27 02:00:40.163 [conn1] authenticate db: admin { authenticate: 1, nonce: "310d0d6fb45318", user: "admin", key: "14d78afc86272bf2a69a2c7bc620c301" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30000
m30000| Wed Feb 27 02:00:40.163 [conn1] authenticate db: admin { authenticate: 1, nonce: "7326adf7a61f4adc", user: "admin", key: "903b74d30091a86a515884636475ed55" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:40.163 [conn1] authenticate db: admin { authenticate: 1, nonce: "a49e2603d2f4ed22", user: "admin", key: "95066de30407b33366e1aa5df3e0eda0" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30999
m30999| Wed Feb 27 02:00:40.163 [conn1] authenticate db: admin { authenticate: 1, nonce: "92e1d72fdc523240", user: "admin", key: "1cf4538f7f08ba5660b9048872f66eb4" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30998
m30998| Wed Feb 27 02:00:40.163 [conn1] authenticate db: admin { authenticate: 1, nonce: "6af84e5ee8dd6392", user: "admin", key: "73b209984fca445fe5c4eeea45112d9f" }
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30998
m30998| Wed Feb 27 02:00:40.163 [conn1] authenticate db: admin { authenticate: 1, nonce: "42ff6f2f4592372a", user: "admin", key: "7d66ca5bd90cb51579b95278b80701c5" }
m30999| Wed Feb 27 02:00:40.163 [conn1] couldn't find database [test] in config db
m30999| Wed Feb 27 02:00:40.163 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 version: 2.4.0-rc2-pre-
m30999| Wed Feb 27 02:00:40.163 [conn1] put [test] on: shard0001:localhost:30001
m30999| Wed Feb 27 02:00:40.163 [conn1] enabling sharding on: test
m30001| Wed Feb 27 02:00:40.163 [FileAllocator] allocating new datafile /data/db/auto21\test.ns, filling with zeroes...
m30001| Wed Feb 27 02:00:40.226 [FileAllocator] done allocating datafile /data/db/auto21\test.ns, size: 16MB, took 0.054 secs
m30001| Wed Feb 27 02:00:40.226 [FileAllocator] allocating new datafile /data/db/auto21\test.0, filling with zeroes...
m30001| Wed Feb 27 02:00:40.428 [FileAllocator] done allocating datafile /data/db/auto21\test.0, size: 64MB, took 0.209 secs
m30001| Wed Feb 27 02:00:40.428 [FileAllocator] allocating new datafile /data/db/auto21\test.1, filling with zeroes...
m30001| Wed Feb 27 02:00:40.444 [conn2] build index test.foo { _id: 1 }
m30001| Wed Feb 27 02:00:40.444 [conn2] build index done. scanned 0 total records. 0.001 secs
m30001| Wed Feb 27 02:00:40.444 [conn2] info: creating collection test.foo on add index
m30001| Wed Feb 27 02:00:40.444 [conn2] build index test.foo { num: 1.0 }
m30001| Wed Feb 27 02:00:40.444 [conn2] build index done. scanned 0 total records. 0.001 secs
m30001| Wed Feb 27 02:00:40.444 [conn2] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:269951 269ms
m30999| Wed Feb 27 02:00:40.444 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { num: 1.0 } }
m30999| Wed Feb 27 02:00:40.444 [conn1] enable sharding on: test.foo with shard key: { num: 1.0 }
m30999| Wed Feb 27 02:00:40.444 [conn1] going to create 1 chunk(s) for: test.foo using new epoch 512daf180c9ae827b8ef2398
m30999| Wed Feb 27 02:00:40.444 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||512daf180c9ae827b8ef2398 based on: (empty)
m30000| Wed Feb 27 02:00:40.444 [conn3] build index config.collections { _id: 1 }
m30000| Wed Feb 27 02:00:40.444 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Wed Feb 27 02:00:40.444 [conn1] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:00:40.444 BackgroundJob starting: ConnectBG
m30000| Wed Feb 27 02:00:40.444 [initandlisten] connection accepted from 127.0.0.1:60817 #9 (9 connections now open)
m30999| Wed Feb 27 02:00:40.444 [conn1] connected connection!
m30000| Wed Feb 27 02:00:40.444 [conn9] authenticate db: local { authenticate: 1, nonce: "ce95731410b491a6", user: "__system", key: "545d3e30ab9b2996d4702b32489bbb45" }
m30999| Wed Feb 27 02:00:40.444 [conn1] creating WriteBackListener for: localhost:30000 serverID: 512daf040c9ae827b8ef2393
m30999| Wed Feb 27 02:00:40.444 [conn1] initializing shard connection to localhost:30000
m30999| Wed Feb 27 02:00:40.444 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Wed Feb 27 02:00:40.444 [conn1] resetting shard version of test.foo on localhost:30000, version is zero
m30999| Wed Feb 27 02:00:40.444 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0000", shardHost: "localhost:30000" } 000000000053BF90 2
m30999| Wed Feb 27 02:00:40.444 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Wed Feb 27 02:00:40.444 [conn1] creating new connection to:localhost:30001
m30999| Wed Feb 27 02:00:40.444 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:00:40.444 [conn1] connected connection!
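The "enabling sharding on: test", index build on { num: 1.0 } and "CMD: shardcollection" lines correspond to the usual enableSharding/shardCollection sequence issued through a mongos. A sketch of those commands as they would be run from the shell against the 30999 router:

// Sketch: shard test.foo on { num: 1 }, matching the commands logged above.
var admin = db.getSiblingDB("admin");
db.getSiblingDB("test").foo.ensureIndex({ num: 1 });                      // index on the shard key
printjson(admin.runCommand({ enableSharding: "test" }));
printjson(admin.runCommand({ shardCollection: "test.foo", key: { num: 1 } }));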
m30001| Wed Feb 27 02:00:40.444 [initandlisten] connection accepted from 127.0.0.1:60818 #4 (4 connections now open)
m30001| Wed Feb 27 02:00:40.444 [conn4] authenticate db: local { authenticate: 1, nonce: "8f47600bd6f29e41", user: "__system", key: "4cddb1a5f2ea25490e45f86abf742235" }
m30999| Wed Feb 27 02:00:40.444 [conn1] creating WriteBackListener for: localhost:30001 serverID: 512daf040c9ae827b8ef2393
m30999| Wed Feb 27 02:00:40.444 [conn1] initializing shard connection to localhost:30001
m30999| Wed Feb 27 02:00:40.444 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Wed Feb 27 02:00:40.444 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 2
m30999| Wed Feb 27 02:00:40.444 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Wed Feb 27 02:00:40.444 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 2
m30001| Wed Feb 27 02:00:40.444 [conn4] no current chunk manager found for this shard, will initialize
m30000| Wed Feb 27 02:00:40.444 [initandlisten] connection accepted from 127.0.0.1:60819 #10 (10 connections now open)
m30000| Wed Feb 27 02:00:40.444 [conn10] authenticate db: local { authenticate: 1, nonce: "fd3fb8c6e60a72e5", user: "__system", key: "aada9552bf63d642ade6262c3d45a3a4" }
m30999| Wed Feb 27 02:00:40.444 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Wed Feb 27 02:00:40.460 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { num: MinKey }max: { num: MaxKey } dataWritten: 4256292 splitThreshold: 921
m30999| Wed Feb 27 02:00:40.460 [conn1] creating new connection to:localhost:30001
m30999| Wed Feb 27 02:00:40.460 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:00:40.460 [conn1] connected connection!
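The setShardVersion exchange above (first rejected with need_authoritative: true because it is the "first time for collection 'test.foo'", then retried with authoritative: true) is internal mongos-to-shard protocol. From the shell, the resulting collection version can only be observed indirectly, for example with the getShardVersion command; the comment below describes what this particular log suggests, not a guaranteed output format:

// Sketch: observe the collection version that the exchange above establishes.
var admin = db.getSiblingDB("admin");
printjson(admin.runCommand({ getShardVersion: "test.foo" }));   // expected here: version 1|0 with epoch 512daf18...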
m30001| Wed Feb 27 02:00:40.460 [initandlisten] connection accepted from 127.0.0.1:60820 #5 (5 connections now open)
m30001| Wed Feb 27 02:00:40.460 [conn5] authenticate db: local { authenticate: 1, nonce: "7f66350382e51f96", user: "__system", key: "efcf0c14163658253d23c48218e98639" }
m30001| Wed Feb 27 02:00:40.460 [conn5] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Wed Feb 27 02:00:40.460 [conn5] warning: chunk is larger than 1024 bytes because of key { num: 0.0 }
m30999| Wed Feb 27 02:00:40.460 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Wed Feb 27 02:00:40.460 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { num: MinKey }max: { num: MaxKey } dataWritten: 51255 splitThreshold: 921
m30001| Wed Feb 27 02:00:40.460 [conn5] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Wed Feb 27 02:00:40.460 [conn5] warning: chunk is larger than 1024 bytes because of key { num: 0.0 }
m30999| Wed Feb 27 02:00:40.460 [conn1] chunk not full enough to trigger auto-split { num: 1.0 }
m30999| Wed Feb 27 02:00:40.460 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { num: MinKey }max: { num: MaxKey } dataWritten: 51255 splitThreshold: 921
m30001| Wed Feb 27 02:00:40.460 [conn5] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Wed Feb 27 02:00:40.460 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Wed Feb 27 02:00:40.460 [conn5] warning: chunk is larger than 1024 bytes because of key { num: 0.0 }
m30001| Wed Feb 27 02:00:40.460 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: MinKey }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 0.0 } ], shardId: "test.foo-num_MinKey", configdb: "localhost:30000" }
m30000| Wed Feb 27 02:00:40.460 [initandlisten] connection accepted from 127.0.0.1:60821 #11 (11 connections now open)
m30000| Wed Feb 27 02:00:40.460 [conn11] authenticate db: local { authenticate: 1, nonce: "93b6111fb1c7102e", user: "__system", key: "e7597cf282a922398ce970ea50e24550" }
m30001| Wed Feb 27 02:00:40.460 [LockPinger] creating distributed lock ping thread for localhost:30000 and process AMAZONA-DFVK11N:30001:1361948440:41 (sleeping for 30000ms)
m30001| Wed Feb 27 02:00:40.460 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf18051f47eaec1d92a8
m30001| Wed Feb 27 02:00:40.460 [conn5] splitChunk accepted at version 1|0||512daf180c9ae827b8ef2398
m30001| Wed Feb 27 02:00:40.460 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:40-512daf18051f47eaec1d92a9", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948440460), what: "split", ns: "test.foo", details: { before: { min: { num: MinKey }, max: { num: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: MinKey }, max: { num: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') }, right: { min: { num: 0.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') } } }
m30001| Wed Feb 27 02:00:40.460 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked.
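Each accepted splitChunk request like the one above writes new chunk documents and a "split" changelog entry into the config database. A sketch of how the resulting metadata can be inspected from a shell connected to a mongos, using the standard config collections named in the log:

// Sketch: list the chunks and split events created for test.foo.
var conf = db.getSiblingDB("config");
conf.chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(printjson);   // chunk ranges and versions
conf.changelog.find({ ns: "test.foo", what: "split" }).forEach(printjson);  // split history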
m30999| Wed Feb 27 02:00:40.460 [conn1] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:00:40.460 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:00:40.475 [conn1] connected connection!
m30000| Wed Feb 27 02:00:40.475 [initandlisten] connection accepted from 127.0.0.1:60822 #12 (12 connections now open)
m30000| Wed Feb 27 02:00:40.475 [conn12] authenticate db: local { authenticate: 1, nonce: "5cce93076a6d47b9", user: "__system", key: "53a76fe5856eababb7ecc2015bc93b41" }
m30999| Wed Feb 27 02:00:40.475 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 3 version: 1|2||512daf180c9ae827b8ef2398 based on: 1|0||512daf180c9ae827b8ef2398
m30999| Wed Feb 27 02:00:40.475 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { num: MinKey }max: { num: MaxKey } on: { num: 0.0 } (splitThreshold 921)
m30999| Wed Feb 27 02:00:40.475 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 3
m30999| Wed Feb 27 02:00:40.475 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 }
m30999| Wed Feb 27 02:00:40.475 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { num: 0.0 }max: { num: MaxKey } dataWritten: 4256292 splitThreshold: 471859
m30999| Wed Feb 27 02:00:40.475 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Wed Feb 27 02:00:40.475 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { num: 0.0 }max: { num: MaxKey } dataWritten: 102510 splitThreshold: 471859
m30999| Wed Feb 27 02:00:40.475 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Wed Feb 27 02:00:40.475 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { num: 0.0 }max: { num: MaxKey } dataWritten: 102510 splitThreshold: 471859
m30999| Wed Feb 27 02:00:40.475 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Wed Feb 27 02:00:40.475 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { num: 0.0 }max: { num: MaxKey } dataWritten: 102510 splitThreshold: 471859
m30001| Wed Feb 27 02:00:40.475 [conn5] request split points lookup for chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Wed Feb 27 02:00:40.475 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 0.0 } -->> { : MaxKey }
m30001| Wed Feb 27 02:00:40.475 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 0.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 9.0 } ], shardId: "test.foo-num_0.0", configdb: "localhost:30000" }
m30001| Wed Feb 27 02:00:40.475 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf18051f47eaec1d92aa
m30001| Wed Feb 27 02:00:40.475 [conn5] splitChunk accepted at version 1|2||512daf180c9ae827b8ef2398
m30001| Wed Feb 27 02:00:40.475 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:40-512daf18051f47eaec1d92ab", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948440475), what: "split", ns: "test.foo", details: { before: { min: { num: 0.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 0.0 }, max: { num: 9.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') }, right: { min: { num: 9.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') } } }
"AMAZONA-DFVK11N-2013-02-27T07:00:40-512daf18051f47eaec1d92ab", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948440475), what: "split", ns: "test.foo", details: { before: { min: { num: 0.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 0.0 }, max: { num: 9.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') }, right: { min: { num: 9.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') } } } m30001| Wed Feb 27 02:00:40.475 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. m30999| Wed Feb 27 02:00:40.475 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||512daf180c9ae827b8ef2398 based on: 1|2||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:40.475 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { num: 0.0 }max: { num: MaxKey } on: { num: 9.0 } (splitThreshold 471859) (migrate suggested) m30999| Wed Feb 27 02:00:40.491 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 160 writeLock: 0 version: 2.4.0-rc2-pre- m30999| Wed Feb 27 02:00:40.491 [conn1] recently split chunk: { min: { num: 9.0 }, max: { num: MaxKey } } already in the best shard: shard0001:localhost:30001 m30999| Wed Feb 27 02:00:40.491 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 4 m30999| Wed Feb 27 02:00:40.491 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 } m30999| Wed Feb 27 02:00:40.491 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { num: 9.0 }max: { num: MaxKey } dataWritten: 4256292 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:40.491 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:40.522 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { num: 9.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:40.522 [conn1] chunk not full enough to trigger auto-split no split entry j:0 : 94 m30999| Wed Feb 27 02:00:40.569 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { num: 9.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:40.569 [conn1] chunk not full enough to trigger auto-split no split entry m30000| Wed Feb 27 02:00:40.569 [FileAllocator] done allocating datafile /data/db/auto20\admin.1, size: 128MB, took 0.424 secs m30999| Wed Feb 27 02:00:40.631 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { num: 9.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:40.631 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:40.662 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 
1|4||000000000000000000000000min: { num: 9.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:40.662 [conn1] chunk not full enough to trigger auto-split no split entry j:1 : 109 m30999| Wed Feb 27 02:00:40.694 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { num: 9.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:40.694 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:40.725 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { num: 9.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:40.725 [conn5] request split points lookup for chunk test.foo { : 9.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:40.725 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 9.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:40.725 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 9.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 292.0 } ], shardId: "test.foo-num_9.0", configdb: "localhost:30000" } m30001| Wed Feb 27 02:00:40.725 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf18051f47eaec1d92ac m30001| Wed Feb 27 02:00:40.725 [conn5] splitChunk accepted at version 1|4||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:40.725 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:40-512daf18051f47eaec1d92ad", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948440725), what: "split", ns: "test.foo", details: { before: { min: { num: 9.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 9.0 }, max: { num: 292.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') }, right: { min: { num: 292.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') } } } m30001| Wed Feb 27 02:00:40.725 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. 
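Each accepted splitChunk bumps the collection's version (1|2, 1|4, 1|6, ... all under epoch 512daf180c9ae827b8ef2398) and mongos reloads its ChunkManager from the config server. The resulting chunk boundaries and versions can be read straight from the config metadata; a sketch, assuming a shell on the mongos:

    // Chunk layout for test.foo as stored on the config server.
    db.getSiblingDB("config").chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(function (c) {
        print(tojson(c.min) + " -->> " + tojson(c.max) + "  on " + c.shard + "  lastmod " + tojson(c.lastmod));
    });
    // Summarized view of the same data:
    sh.status();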
m30999| Wed Feb 27 02:00:40.725 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||512daf180c9ae827b8ef2398 based on: 1|4||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:40.725 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { num: 9.0 }max: { num: MaxKey } on: { num: 292.0 } (splitThreshold 11796480) (migrate suggested) m30999| Wed Feb 27 02:00:40.725 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 160 writeLock: 0 version: 2.4.0-rc2-pre- m30999| Wed Feb 27 02:00:40.725 [conn1] recently split chunk: { min: { num: 292.0 }, max: { num: MaxKey } } already in the best shard: shard0001:localhost:30001 m30999| Wed Feb 27 02:00:40.725 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 5 m30999| Wed Feb 27 02:00:40.725 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 } m30999| Wed Feb 27 02:00:40.725 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { num: 292.0 }max: { num: MaxKey } dataWritten: 4256292 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:40.725 [conn5] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:40.725 [conn1] chunk not full enough to trigger auto-split no split entry j:2 : 63 m30999| Wed Feb 27 02:00:40.787 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { num: 292.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:40.787 [conn5] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:40.787 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:40.818 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { num: 292.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:40.818 [conn5] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:40.818 [conn1] chunk not full enough to trigger auto-split no split entry j:3 : 93 m30999| Wed Feb 27 02:00:40.850 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { num: 292.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:40.850 [conn5] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:40.850 [conn1] chunk not full enough to trigger auto-split { num: 415.0 } m30001| Wed Feb 27 02:00:40.865 [FileAllocator] done allocating datafile /data/db/auto21\test.1, size: 128MB, took 0.422 secs m30999| Wed Feb 27 02:00:40.865 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { num: 292.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:40.865 [conn5] request split points lookup for chunk test.foo { : 292.0 } 
-->> { : MaxKey } m30999| Wed Feb 27 02:00:40.865 [conn1] chunk not full enough to trigger auto-split { num: 415.0 } j:4 : 63 m30999| Wed Feb 27 02:00:40.896 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { num: 292.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:40.896 [conn5] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:40.896 [conn1] chunk not full enough to trigger auto-split { num: 415.0 } m30999| Wed Feb 27 02:00:40.959 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { num: 292.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:40.959 [conn5] request split points lookup for chunk test.foo { : 292.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:40.959 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 292.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:40.959 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 292.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 575.0 } ], shardId: "test.foo-num_292.0", configdb: "localhost:30000" } m30001| Wed Feb 27 02:00:40.959 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf18051f47eaec1d92ae m30001| Wed Feb 27 02:00:40.959 [conn5] splitChunk accepted at version 1|6||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:40.959 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:40-512daf18051f47eaec1d92af", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948440959), what: "split", ns: "test.foo", details: { before: { min: { num: 292.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 292.0 }, max: { num: 575.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') }, right: { min: { num: 575.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') } } } m30001| Wed Feb 27 02:00:40.959 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. 
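Every splitChunk in this log brackets its metadata update with the distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41', which lives on the config server and is kept alive by the LockPinger thread created earlier. Its state can be inspected while a split or migration holds it; a sketch, assuming a shell on the mongos:

    // Distributed lock documents (state: 0 = unlocked, 2 = held) and their ping records.
    db.getSiblingDB("config").locks.find({ _id: "test.foo" }).pretty();
    db.getSiblingDB("config").lockpings.find().sort({ ping: -1 }).limit(3);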
m30999| Wed Feb 27 02:00:40.959 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 1|8||512daf180c9ae827b8ef2398 based on: 1|6||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:40.959 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { num: 292.0 }max: { num: MaxKey } on: { num: 575.0 } (splitThreshold 11796480) (migrate suggested) m30999| Wed Feb 27 02:00:40.974 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 160 writeLock: 0 version: 2.4.0-rc2-pre- m30999| Wed Feb 27 02:00:40.974 [conn1] recently split chunk: { min: { num: 575.0 }, max: { num: MaxKey } } already in the best shard: shard0001:localhost:30001 m30999| Wed Feb 27 02:00:40.974 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 6 m30999| Wed Feb 27 02:00:40.974 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 } m30999| Wed Feb 27 02:00:40.974 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { num: 575.0 }max: { num: MaxKey } dataWritten: 4256292 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:40.974 [conn5] request split points lookup for chunk test.foo { : 575.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:40.974 [conn1] chunk not full enough to trigger auto-split no split entry j:5 : 93 m30999| Wed Feb 27 02:00:40.990 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { num: 575.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:40.990 [conn5] request split points lookup for chunk test.foo { : 575.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:40.990 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:41.021 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { num: 575.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.021 [conn5] request split points lookup for chunk test.foo { : 575.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:41.021 [conn1] chunk not full enough to trigger auto-split no split entry j:6 : 63 m30999| Wed Feb 27 02:00:41.037 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { num: 575.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.037 [conn5] request split points lookup for chunk test.foo { : 575.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:41.037 [conn1] chunk not full enough to trigger auto-split { num: 698.0 } m30999| Wed Feb 27 02:00:41.068 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { num: 575.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.068 [conn5] request split points lookup for chunk test.foo { : 575.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:41.068 [conn1] chunk not full enough to trigger auto-split { num: 698.0 } j:7 : 
46 m30999| Wed Feb 27 02:00:41.115 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { num: 575.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.115 [conn5] request split points lookup for chunk test.foo { : 575.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:41.115 [conn1] chunk not full enough to trigger auto-split { num: 698.0 } m30999| Wed Feb 27 02:00:41.146 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { num: 575.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.146 [conn5] request split points lookup for chunk test.foo { : 575.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:41.146 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 575.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:41.146 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 575.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 858.0 } ], shardId: "test.foo-num_575.0", configdb: "localhost:30000" } m30001| Wed Feb 27 02:00:41.146 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf19051f47eaec1d92b0 m30001| Wed Feb 27 02:00:41.146 [conn5] splitChunk accepted at version 1|8||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:41.146 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:41-512daf19051f47eaec1d92b1", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948441146), what: "split", ns: "test.foo", details: { before: { min: { num: 575.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 575.0 }, max: { num: 858.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') }, right: { min: { num: 858.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') } } } m30001| Wed Feb 27 02:00:41.146 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. 
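The interleaved "j:0 : 94", "j:1 : 109", ... lines are the test script timing its insert batches, and the dataWritten: 51255 reported after a single insert suggests each document carries roughly 50 KB of padding on top of the num shard key. A rough, hypothetical sketch of that kind of load (the field name "s", the batch sizes, and the exact payload size are assumptions, not taken from the test source):

    // Hypothetical load generator in the spirit of this test: ~50 KB documents keyed on num.
    var big = new Array(50 * 1024).join("x");              // ~50 KB filler string (size assumed)
    var coll = db.getSiblingDB("test").foo;
    for (var j = 0; j < 20; j++) {                         // batch counts assumed
        var start = new Date();
        for (var i = 0; i < 100; i++) {
            coll.insert({ num: j * 100 + i, s: big });     // shard key "num" as in the log; "s" is made up
        }
        print("j:" + j + " : " + (new Date() - start));    // mirrors the "j:N : ms" lines above
    }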
m30999| Wed Feb 27 02:00:41.146 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 1|10||512daf180c9ae827b8ef2398 based on: 1|8||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:41.146 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { num: 575.0 }max: { num: MaxKey } on: { num: 858.0 } (splitThreshold 11796480) (migrate suggested) m30999| Wed Feb 27 02:00:41.146 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 160 writeLock: 0 version: 2.4.0-rc2-pre- m30999| Wed Feb 27 02:00:41.146 [conn1] recently split chunk: { min: { num: 858.0 }, max: { num: MaxKey } } already in the best shard: shard0001:localhost:30001 m30999| Wed Feb 27 02:00:41.146 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|10, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 7 m30999| Wed Feb 27 02:00:41.146 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 } m30999| Wed Feb 27 02:00:41.146 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { num: 858.0 }max: { num: MaxKey } dataWritten: 4254235 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.146 [conn5] request split points lookup for chunk test.foo { : 858.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:41.146 [conn1] chunk not full enough to trigger auto-split no split entry j:8 : 94 m30999| Wed Feb 27 02:00:41.177 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { num: 858.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.177 [conn5] request split points lookup for chunk test.foo { : 858.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:41.177 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:41.208 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { num: 858.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.208 [conn5] request split points lookup for chunk test.foo { : 858.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:41.208 [conn1] chunk not full enough to trigger auto-split no split entry j:9 : 47 m30999| Wed Feb 27 02:00:41.224 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { num: 858.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.224 [conn5] request split points lookup for chunk test.foo { : 858.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:41.224 [conn1] chunk not full enough to trigger auto-split { num: 981.0 } m30999| Wed Feb 27 02:00:41.286 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { num: 858.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.286 [conn5] request split points lookup for chunk test.foo { : 858.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:41.286 [conn1] chunk not full enough to trigger auto-split { num: 981.0 } 
m30999| Wed Feb 27 02:00:41.318 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { num: 858.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.318 [conn5] request split points lookup for chunk test.foo { : 858.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:41.318 [conn1] chunk not full enough to trigger auto-split { num: 981.0 } j:10 : 93 m30001| Wed Feb 27 02:00:41.318 [FileAllocator] allocating new datafile /data/db/auto21\test.2, filling with zeroes... m30999| Wed Feb 27 02:00:41.333 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { num: 858.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:41.333 [conn5] request split points lookup for chunk test.foo { : 858.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:41.333 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 858.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:41.333 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 858.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 1141.0 } ], shardId: "test.foo-num_858.0", configdb: "localhost:30000" } m30001| Wed Feb 27 02:00:41.333 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf19051f47eaec1d92b2 m30001| Wed Feb 27 02:00:41.333 [conn5] splitChunk accepted at version 1|10||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:41.333 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:41-512daf19051f47eaec1d92b3", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948441333), what: "split", ns: "test.foo", details: { before: { min: { num: 858.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 858.0 }, max: { num: 1141.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') }, right: { min: { num: 1141.0 }, max: { num: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') } } } m30001| Wed Feb 27 02:00:41.349 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. 
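While the inserts run, the mmapv1 FileAllocator keeps preallocating larger data files on shard0001 (test.1 at 128 MB above, then test.2). The storage footprint behind those allocations can be checked from the shell; a sketch, assuming a shell on the mongos:

    // Collection stats aggregated across shards (sizes in bytes):
    db.getSiblingDB("test").foo.stats();
    // Per-database totals, including allocated file size on each shard:
    db.getSiblingDB("test").stats();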
m30999| Wed Feb 27 02:00:41.349 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 1|12||512daf180c9ae827b8ef2398 based on: 1|10||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:41.349 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { num: 858.0 }max: { num: MaxKey } on: { num: 1141.0 } (splitThreshold 11796480) (migrate suggested) m30999| Wed Feb 27 02:00:41.349 [conn1] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 240 writeLock: 0 version: 2.4.0-rc2-pre- m30999| Wed Feb 27 02:00:41.349 [conn1] moving chunk (auto): ns:test.fooshard: shard0001:localhost:30001lastmod: 1|12||000000000000000000000000min: { num: 1141.0 }max: { num: MaxKey } to: shard0000:localhost:30000 m30999| Wed Feb 27 02:00:41.349 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|12||000000000000000000000000min: { num: 1141.0 }max: { num: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Wed Feb 27 02:00:41.349 [conn5] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 1141.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_1141.0", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30001| Wed Feb 27 02:00:41.349 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf19051f47eaec1d92b4 m30001| Wed Feb 27 02:00:41.349 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:41-512daf19051f47eaec1d92b5", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948441349), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 1141.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } } m30001| Wed Feb 27 02:00:41.349 [conn5] moveChunk request accepted at version 1|12||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:41.349 [conn5] moveChunk number of documents: 1 m30000| Wed Feb 27 02:00:41.349 [migrateThread] starting receiving-end of migration of chunk { num: 1141.0 } -> { num: MaxKey } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Wed Feb 27 02:00:41.349 [initandlisten] connection accepted from 127.0.0.1:60825 #6 (6 connections now open) m30001| Wed Feb 27 02:00:41.349 [conn6] authenticate db: local { authenticate: 1, nonce: "945d0f0b873f7147", user: "__system", key: "2aeea8ed6e96b804807bfc446a275ad5" } m30000| Wed Feb 27 02:00:41.349 [FileAllocator] allocating new datafile /data/db/auto20\test.ns, filling with zeroes... 
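After the { num: 1141.0 } -->> { num: MaxKey } chunk is split off, mongos decides shard0000 is the best target and issues an automatic moveChunk: the donor shard0001 logs moveChunk.start while the recipient shard0000 starts a migrateThread and allocates data files for the test database. The same migration can be requested explicitly; a minimal sketch, assuming a shell on the mongos:

    // Move the chunk containing num: 1141 to shard0000 by hand
    // (the manual equivalent of the automatic migration above).
    sh.moveChunk("test.foo", { num: 1141 }, "shard0000");
    // Underlying admin command against mongos:
    db.adminCommand({ moveChunk: "test.foo", find: { num: 1141 }, to: "shard0000" });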
m30001| Wed Feb 27 02:00:41.364 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 1141.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:41.380 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 1141.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:41.396 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 1141.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:41.411 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 1141.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:41.411 [FileAllocator] done allocating datafile /data/db/auto20\test.ns, size: 16MB, took 0.053 secs m30000| Wed Feb 27 02:00:41.411 [FileAllocator] allocating new datafile /data/db/auto20\test.0, filling with zeroes... m30001| Wed Feb 27 02:00:41.442 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 1141.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:41.489 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 1141.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:41.567 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 1141.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:41.614 [FileAllocator] done allocating datafile /data/db/auto20\test.0, size: 64MB, took 0.213 secs m30000| Wed Feb 27 02:00:41.614 [FileAllocator] allocating new datafile /data/db/auto20\test.1, filling with zeroes... m30000| Wed Feb 27 02:00:41.614 [migrateThread] build index test.foo { _id: 1 } m30000| Wed Feb 27 02:00:41.630 [migrateThread] build index done. scanned 0 total records. 0.001 secs m30000| Wed Feb 27 02:00:41.630 [migrateThread] info: creating collection test.foo on add index m30000| Wed Feb 27 02:00:41.630 [migrateThread] build index test.foo { num: 1.0 } m30000| Wed Feb 27 02:00:41.630 [migrateThread] build index done. scanned 0 total records. 
0.001 secs m30000| Wed Feb 27 02:00:41.692 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Wed Feb 27 02:00:41.692 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 1141.0 } -> { num: MaxKey } m30001| Wed Feb 27 02:00:41.708 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 1141.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:41.770 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: 1141.0 } -> { num: MaxKey } m30001| Wed Feb 27 02:00:41.973 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 1141.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:41.973 [conn5] moveChunk setting version to: 2|0||512daf180c9ae827b8ef2398 m30000| Wed Feb 27 02:00:41.973 [initandlisten] connection accepted from 127.0.0.1:60826 #13 (13 connections now open) m30000| Wed Feb 27 02:00:41.973 [conn13] authenticate db: local { authenticate: 1, nonce: "7bfa9352e8bef5fc", user: "__system", key: "1f46f0b90dd15fb919b18deac306571e" } m30000| Wed Feb 27 02:00:41.973 [conn13] Waiting for commit to finish m30000| Wed Feb 27 02:00:41.988 [conn13] Waiting for commit to finish m30000| Wed Feb 27 02:00:41.988 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 1141.0 } -> { num: MaxKey } m30000| Wed Feb 27 02:00:41.988 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: 1141.0 } -> { num: MaxKey } m30000| Wed Feb 27 02:00:41.988 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:41-512daf1983c83aaea83d4624", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1361948441988), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 1141.0 }, max: { num: MaxKey }, step1 of 5: 276, step2 of 5: 0, step3 of 5: 68, step4 of 5: 0, step5 of 5: 288 } } m30000| Wed Feb 27 02:00:41.988 [initandlisten] connection accepted from 127.0.0.1:60827 #14 (14 connections now open) m30000| Wed Feb 27 02:00:41.988 [conn14] authenticate db: local { authenticate: 1, nonce: "de0fbc8a020a0531", user: "__system", key: "597fa212f83aa9f0b919fde219c61531" } m30001| Wed Feb 27 02:00:42.004 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: 1141.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Wed Feb 27 02:00:42.004 [conn5] moveChunk updating self version to: 2|1||512daf180c9ae827b8ef2398 through { num: MinKey } -> { num: 0.0 } for collection 'test.foo' m30000| Wed Feb 27 02:00:42.004 [initandlisten] connection accepted from 127.0.0.1:60828 #15 (15 connections now open) m30000| Wed Feb 27 02:00:42.004 [conn15] authenticate db: local { authenticate: 1, nonce: "d8beeded31d3eba9", user: "__system", key: "fd3e42b556d0ce1f80627c7a902a34dd" } m30001| Wed Feb 27 02:00:42.004 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:42-512daf1a051f47eaec1d92b6", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948442004), what: 
"moveChunk.commit", ns: "test.foo", details: { min: { num: 1141.0 }, max: { num: MaxKey }, from: "shard0001", to: "shard0000" } } m30001| Wed Feb 27 02:00:42.004 [conn5] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Wed Feb 27 02:00:42.004 [conn5] MigrateFromStatus::done Global lock acquired m30001| Wed Feb 27 02:00:42.004 [conn5] forking for cleanup of chunk data m30001| Wed Feb 27 02:00:42.004 [conn5] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Wed Feb 27 02:00:42.004 [conn5] MigrateFromStatus::done Global lock acquired m30001| Wed Feb 27 02:00:42.004 [cleanupOldData-512daf1a051f47eaec1d92b7] (start) waiting to cleanup test.foo from { num: 1141.0 } -> { num: MaxKey }, # cursors remaining: 0 m30001| Wed Feb 27 02:00:42.004 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. m30001| Wed Feb 27 02:00:42.004 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:42-512daf1a051f47eaec1d92b8", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948442004), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 1141.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 618, step5 of 6: 33, step6 of 6: 0 } } m30001| Wed Feb 27 02:00:42.004 [conn5] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 1141.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_1141.0", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:32 r:68 w:23 reslen:37 654ms m30999| Wed Feb 27 02:00:42.004 [conn1] moveChunk result: { ok: 1.0 } m30999| Wed Feb 27 02:00:42.004 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 2|1||512daf180c9ae827b8ef2398 based on: 1|12||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:42.004 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0000", shardHost: "localhost:30000" } 000000000053BF90 9 m30999| Wed Feb 27 02:00:42.004 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" } m30999| Wed Feb 27 02:00:42.004 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 000000000053BF90 9 m30000| Wed Feb 27 02:00:42.004 [conn9] no current chunk manager found for this shard, will initialize m30999| Wed Feb 27 02:00:42.004 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Wed Feb 27 02:00:42.004 [conn1] about to initiate autosplit: ns:test.fooshard: shard0000:localhost:30000lastmod: 2|0||000000000000000000000000min: { num: 1141.0 }max: { num: MaxKey } dataWritten: 4260406 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:42.004 [conn1] chunk not full enough to trigger auto-split no split entry m30001| Wed Feb 27 02:00:42.035 [cleanupOldData-512daf1a051f47eaec1d92b7] waiting to remove documents for test.foo from { num: 1141.0 } -> { num: MaxKey } m30001| Wed Feb 27 02:00:42.035 [cleanupOldData-512daf1a051f47eaec1d92b7] moveChunk starting delete for: test.foo from { num: 1141.0 } -> { num: MaxKey } m30000| Wed Feb 27 02:00:42.035 [FileAllocator] done allocating datafile /data/db/auto20\test.1, size: 128MB, took 0.409 secs m30001| Wed Feb 27 02:00:42.035 [cleanupOldData-512daf1a051f47eaec1d92b7] moveChunk deleted 1 documents for test.foo from { num: 1141.0 } -> { num: MaxKey } m30999| Wed Feb 27 02:00:42.035 [conn1] about to initiate autosplit: ns:test.fooshard: shard0000:localhost:30000lastmod: 2|0||000000000000000000000000min: { num: 1141.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:42.035 [conn1] chunk not full enough to trigger auto-split no split entry j:11 : 718 m30999| Wed Feb 27 02:00:42.051 [conn1] about to initiate autosplit: ns:test.fooshard: shard0000:localhost:30000lastmod: 2|0||000000000000000000000000min: { num: 1141.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:42.051 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:42.082 [conn1] about to initiate autosplit: ns:test.fooshard: shard0000:localhost:30000lastmod: 2|0||000000000000000000000000min: { num: 1141.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:42.082 [conn1] chunk not full enough to trigger auto-split no split entry j:12 : 47 m30999| Wed Feb 27 02:00:42.098 [conn1] about to initiate autosplit: ns:test.fooshard: shard0000:localhost:30000lastmod: 2|0||000000000000000000000000min: { num: 1141.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:42.098 [conn1] chunk not full enough to trigger auto-split no split entry m30001| Wed Feb 27 02:00:42.144 [FileAllocator] done allocating datafile /data/db/auto21\test.2, size: 256MB, took 0.829 secs m30000| Wed Feb 27 02:00:42.566 [conn9] insert test.foo ninserted:1 keyUpdates:0 locks(micros) w:279 445ms m30999| Wed Feb 27 02:00:42.597 [conn1] about to initiate autosplit: ns:test.fooshard: shard0000:localhost:30000lastmod: 2|0||000000000000000000000000min: { num: 1141.0 }max: { num: MaxKey } dataWritten: 
2408985 splitThreshold: 11796480 m30999| Wed Feb 27 02:00:42.597 [conn1] chunk not full enough to trigger auto-split no split entry j:13 : 530 m30999| Wed Feb 27 02:00:42.628 [conn1] about to initiate autosplit: ns:test.fooshard: shard0000:localhost:30000lastmod: 2|0||000000000000000000000000min: { num: 1141.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30000| Wed Feb 27 02:00:42.628 [conn12] request split points lookup for chunk test.foo { : 1141.0 } -->> { : MaxKey } m30000| Wed Feb 27 02:00:42.628 [conn12] max number of requested split points reached (2) before the end of chunk test.foo { : 1141.0 } -->> { : MaxKey } m30000| Wed Feb 27 02:00:42.628 [conn12] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1141.0 }, max: { num: MaxKey }, from: "shard0000", splitKeys: [ { num: 1424.0 } ], shardId: "test.foo-num_1141.0", configdb: "localhost:30000" } m30000| Wed Feb 27 02:00:42.628 [initandlisten] connection accepted from 127.0.0.1:60829 #16 (16 connections now open) m30000| Wed Feb 27 02:00:42.628 [conn16] authenticate db: local { authenticate: 1, nonce: "77d6240aa195b01b", user: "__system", key: "1cbc3c875d788ad61e6ea2227713f575" } m30000| Wed Feb 27 02:00:42.628 [LockPinger] creating distributed lock ping thread for localhost:30000 and process AMAZONA-DFVK11N:30000:1361948442:41 (sleeping for 30000ms) m30000| Wed Feb 27 02:00:42.628 [conn12] distributed lock 'test.foo/AMAZONA-DFVK11N:30000:1361948442:41' acquired, ts : 512daf1a83c83aaea83d4625 m30000| Wed Feb 27 02:00:42.628 [conn12] splitChunk accepted at version 2|0||512daf180c9ae827b8ef2398 m30000| Wed Feb 27 02:00:42.628 [conn12] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:42-512daf1a83c83aaea83d4626", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60822", time: new Date(1361948442628), what: "split", ns: "test.foo", details: { before: { min: { num: 1141.0 }, max: { num: MaxKey }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 1141.0 }, max: { num: 1424.0 }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') }, right: { min: { num: 1424.0 }, max: { num: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') } } } m30000| Wed Feb 27 02:00:42.628 [conn12] distributed lock 'test.foo/AMAZONA-DFVK11N:30000:1361948442:41' unlocked. 
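The first setShardVersion sent to shard0000 fails with need_authoritative / "first time for collection 'test.foo'" because that shard had never owned a chunk of the collection; mongos retries with authoritative: true and the shard initializes its chunk manager ("no current chunk manager found for this shard, will initialize"). The version and the resulting chunk distribution can be checked from the shell; a sketch, assuming a shell on the mongos:

    // Collection version as tracked through mongos:
    db.adminCommand({ getShardVersion: "test.foo" });
    // How many chunks each shard owns after the splits and the migration:
    db.getSiblingDB("config").chunks.aggregate([
        { $match: { ns: "test.foo" } },
        { $group: { _id: "$shard", chunks: { $sum: 1 } } }
    ]);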
m30999| Wed Feb 27 02:00:42.628 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 2|3||512daf180c9ae827b8ef2398 based on: 2|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:42.628 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 2|0||000000000000000000000000min: { num: 1141.0 }max: { num: MaxKey } on: { num: 1424.0 } (splitThreshold 11796480) (migrate suggested) m30999| Wed Feb 27 02:00:42.628 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 288 writeLock: 0 version: 2.4.0-rc2-pre- m30999| Wed Feb 27 02:00:42.628 [conn1] moving chunk (auto): ns:test.fooshard: shard0000:localhost:30000lastmod: 2|3||000000000000000000000000min: { num: 1424.0 }max: { num: MaxKey } to: shard0001:localhost:30001 m30999| Wed Feb 27 02:00:42.628 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 2|3||000000000000000000000000min: { num: 1424.0 }max: { num: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Wed Feb 27 02:00:42.628 [conn12] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { num: 1424.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_1424.0", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } m30000| Wed Feb 27 02:00:42.628 [conn12] distributed lock 'test.foo/AMAZONA-DFVK11N:30000:1361948442:41' acquired, ts : 512daf1a83c83aaea83d4627 m30000| Wed Feb 27 02:00:42.628 [conn12] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:42-512daf1a83c83aaea83d4628", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60822", time: new Date(1361948442628), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 1424.0 }, max: { num: MaxKey }, from: "shard0000", to: "shard0001" } } m30000| Wed Feb 27 02:00:42.628 [conn12] moveChunk request accepted at version 2|3||512daf180c9ae827b8ef2398 m30000| Wed Feb 27 02:00:42.628 [conn12] moveChunk number of documents: 1 m30001| Wed Feb 27 02:00:42.644 [migrateThread] starting receiving-end of migration of chunk { num: 1424.0 } -> { num: MaxKey } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Wed Feb 27 02:00:42.644 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Wed Feb 27 02:00:42.644 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 1424.0 } -> { num: MaxKey } m30000| Wed Feb 27 02:00:42.659 [conn12] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { num: 1424.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:42.675 [conn12] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { num: 1424.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:42.690 [conn12] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { num: 1424.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 
27 02:00:42.690 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: 1424.0 } -> { num: MaxKey } m30000| Wed Feb 27 02:00:42.706 [conn12] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { num: 1424.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:43.190 [conn12] moveChunk setting version to: 3|0||512daf180c9ae827b8ef2398 m30000| Wed Feb 27 02:00:43.190 [conn11] command admin.$cmd command: { _transferMods: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:31 reslen:51 478ms m30001| Wed Feb 27 02:00:43.190 [initandlisten] connection accepted from 127.0.0.1:60832 #7 (7 connections now open) m30001| Wed Feb 27 02:00:43.190 [conn7] authenticate db: local { authenticate: 1, nonce: "b039ff88414491b3", user: "__system", key: "593b17a91f5d89415ae4bbeb33ffc124" } m30001| Wed Feb 27 02:00:43.190 [conn7] Waiting for commit to finish m30001| Wed Feb 27 02:00:43.205 [conn7] Waiting for commit to finish m30001| Wed Feb 27 02:00:43.205 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 1424.0 } -> { num: MaxKey } m30001| Wed Feb 27 02:00:43.205 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: 1424.0 } -> { num: MaxKey } m30001| Wed Feb 27 02:00:43.205 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:43-512daf1b051f47eaec1d92b9", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1361948443205), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 1424.0 }, max: { num: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 559 } } m30000| Wed Feb 27 02:00:43.221 [conn12] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { num: 1424.0 }, max: { num: MaxKey }, shardKeyPattern: { num: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 51255, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Wed Feb 27 02:00:43.221 [conn12] moveChunk updating self version to: 3|1||512daf180c9ae827b8ef2398 through { num: 1141.0 } -> { num: 1424.0 } for collection 'test.foo' m30000| Wed Feb 27 02:00:43.221 [initandlisten] connection accepted from 127.0.0.1:60833 #17 (17 connections now open) m30000| Wed Feb 27 02:00:43.221 [conn17] authenticate db: local { authenticate: 1, nonce: "5a5c90e26ea80fb5", user: "__system", key: "06f47c499cbe88587f53fcd149ced13f" } m30000| Wed Feb 27 02:00:43.221 [conn12] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:43-512daf1b83c83aaea83d4629", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60822", time: new Date(1361948443221), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 1424.0 }, max: { num: MaxKey }, from: "shard0000", to: "shard0001" } } m30000| Wed Feb 27 02:00:43.221 [conn12] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Wed Feb 27 02:00:43.221 [conn12] MigrateFromStatus::done Global lock acquired m30000| Wed Feb 27 02:00:43.221 [conn12] forking for cleanup of chunk data m30000| Wed Feb 27 02:00:43.221 [conn12] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Wed Feb 27 02:00:43.221 [conn12] MigrateFromStatus::done Global lock acquired m30000| Wed Feb 27 02:00:43.221 [cleanupOldData-512daf1b83c83aaea83d462a] (start) waiting to cleanup test.foo from { num: 1424.0 
} -> { num: MaxKey }, # cursors remaining: 0 m30000| Wed Feb 27 02:00:43.221 [conn12] distributed lock 'test.foo/AMAZONA-DFVK11N:30000:1361948442:41' unlocked. m30000| Wed Feb 27 02:00:43.221 [conn12] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:43-512daf1b83c83aaea83d462b", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60822", time: new Date(1361948443221), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 1424.0 }, max: { num: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 62, step5 of 6: 516, step6 of 6: 0 } } m30000| Wed Feb 27 02:00:43.221 [conn12] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { num: 1424.0 }, max: { num: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_1424.0", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:66 w:26 reslen:37 582ms m30999| Wed Feb 27 02:00:43.221 [conn1] moveChunk result: { ok: 1.0 } m30999| Wed Feb 27 02:00:43.221 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 3|1||512daf180c9ae827b8ef2398 based on: 2|3||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:43.221 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 11 m30999| Wed Feb 27 02:00:43.221 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 } m30999| Wed Feb 27 02:00:43.221 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|0||000000000000000000000000min: { num: 1424.0 }max: { num: MaxKey } dataWritten: 4258349 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.221 [conn5] request split points lookup for chunk test.foo { : 1424.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.221 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:43.236 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|0||000000000000000000000000min: { num: 1424.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.236 [conn5] request split points lookup for chunk test.foo { : 1424.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.236 [conn1] chunk not full enough to trigger auto-split no split entry m30000| Wed Feb 27 02:00:43.252 [cleanupOldData-512daf1b83c83aaea83d462a] waiting to remove documents for test.foo from { num: 1424.0 } -> { num: MaxKey } m30000| Wed Feb 27 02:00:43.252 [cleanupOldData-512daf1b83c83aaea83d462a] moveChunk starting delete for: test.foo from { num: 1424.0 } -> { num: MaxKey } m30000| Wed Feb 27 02:00:43.252 [cleanupOldData-512daf1b83c83aaea83d462a] moveChunk deleted 1 documents for test.foo from { num: 1424.0 } -> { num: MaxKey } j:14 : 655 m30999| Wed Feb 27 02:00:43.268 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|0||000000000000000000000000min: { num: 1424.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.283 [conn5] request split points lookup for chunk test.foo { : 1424.0 } -->> { : MaxKey } m30999| Wed Feb 27 
02:00:43.283 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:43.299 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|0||000000000000000000000000min: { num: 1424.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.299 [conn5] request split points lookup for chunk test.foo { : 1424.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.299 [conn1] chunk not full enough to trigger auto-split { num: 1547.0 } j:15 : 47 m30999| Wed Feb 27 02:00:43.314 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|0||000000000000000000000000min: { num: 1424.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.314 [conn5] request split points lookup for chunk test.foo { : 1424.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.314 [conn1] chunk not full enough to trigger auto-split { num: 1547.0 } m30999| Wed Feb 27 02:00:43.346 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|0||000000000000000000000000min: { num: 1424.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.346 [conn5] request split points lookup for chunk test.foo { : 1424.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.346 [conn1] chunk not full enough to trigger auto-split { num: 1547.0 } j:16 : 47 m30999| Wed Feb 27 02:00:43.361 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|0||000000000000000000000000min: { num: 1424.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.361 [conn5] request split points lookup for chunk test.foo { : 1424.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:43.361 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 1424.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:43.361 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1424.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 1707.0 } ], shardId: "test.foo-num_1424.0", configdb: "localhost:30000" } m30001| Wed Feb 27 02:00:43.361 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf1b051f47eaec1d92ba m30001| Wed Feb 27 02:00:43.361 [conn5] splitChunk accepted at version 3|0||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:43.377 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:43-512daf1b051f47eaec1d92bb", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948443377), what: "split", ns: "test.foo", details: { before: { min: { num: 1424.0 }, max: { num: MaxKey }, lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 1424.0 }, max: { num: 1707.0 }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') }, right: { min: { num: 1707.0 }, max: { num: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') } } } m30001| Wed Feb 27 02:00:43.377 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. 
m30999| Wed Feb 27 02:00:43.377 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 3|3||512daf180c9ae827b8ef2398 based on: 3|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:43.377 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|0||000000000000000000000000min: { num: 1424.0 }max: { num: MaxKey } on: { num: 1707.0 } (splitThreshold 11796480) (migrate suggested) m30999| Wed Feb 27 02:00:43.377 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 288 writeLock: 0 version: 2.4.0-rc2-pre- m30999| Wed Feb 27 02:00:43.377 [conn1] recently split chunk: { min: { num: 1707.0 }, max: { num: MaxKey } } already in the best shard: shard0001:localhost:30001 m30999| Wed Feb 27 02:00:43.377 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|3, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 12 m30999| Wed Feb 27 02:00:43.377 [conn1] setShardVersion success: { oldVersion: Timestamp 3000|0, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 } m30999| Wed Feb 27 02:00:43.377 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|3||000000000000000000000000min: { num: 1707.0 }max: { num: MaxKey } dataWritten: 4258349 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.377 [conn5] request split points lookup for chunk test.foo { : 1707.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.377 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:43.424 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|3||000000000000000000000000min: { num: 1707.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.424 [conn5] request split points lookup for chunk test.foo { : 1707.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.424 [conn1] chunk not full enough to trigger auto-split no split entry j:17 : 94 m30999| Wed Feb 27 02:00:43.455 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|3||000000000000000000000000min: { num: 1707.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.455 [conn5] request split points lookup for chunk test.foo { : 1707.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.455 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:43.470 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|3||000000000000000000000000min: { num: 1707.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.470 [conn5] request split points lookup for chunk test.foo { : 1707.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.470 [conn1] chunk not full enough to trigger auto-split { num: 1830.0 } m30999| Wed Feb 27 02:00:43.502 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|3||000000000000000000000000min: { num: 1707.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.502 [conn5] request split points lookup for chunk test.foo { : 1707.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.502 [conn1] chunk not full enough to trigger auto-split { num: 1830.0 
} j:18 : 46 m30999| Wed Feb 27 02:00:43.517 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|3||000000000000000000000000min: { num: 1707.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.517 [conn5] request split points lookup for chunk test.foo { : 1707.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.517 [conn1] chunk not full enough to trigger auto-split { num: 1830.0 } m30999| Wed Feb 27 02:00:43.548 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|3||000000000000000000000000min: { num: 1707.0 }max: { num: MaxKey } dataWritten: 2408985 splitThreshold: 11796480 m30001| Wed Feb 27 02:00:43.548 [conn5] request split points lookup for chunk test.foo { : 1707.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:43.548 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 1707.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:43.548 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1707.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 1990.0 } ], shardId: "test.foo-num_1707.0", configdb: "localhost:30000" } m30001| Wed Feb 27 02:00:43.548 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf1b051f47eaec1d92bc m30001| Wed Feb 27 02:00:43.548 [conn5] splitChunk accepted at version 3|3||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:43.548 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:43-512daf1b051f47eaec1d92bd", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948443548), what: "split", ns: "test.foo", details: { before: { min: { num: 1707.0 }, max: { num: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 1707.0 }, max: { num: 1990.0 }, lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') }, right: { min: { num: 1990.0 }, max: { num: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') } } } m30001| Wed Feb 27 02:00:43.548 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. 
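The entries above show the mongos autosplitter asking shard0001 for split points and then committing a splitChunk of the top chunk at { num: 1990.0 }. For reference only, as a minimal sketch that is not part of the test script (which relies on the autosplitter), the same split could be issued by hand from a shell connected to the mongos on port 30999:

    // Illustrative sketch, not taken from the test: manual split at the key the autosplitter chose.
    db.adminCommand({ split: "test.foo", middle: { num: 1990 } });
    // Equivalent shell helper:
    sh.splitAt("test.foo", { num: 1990 });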
m30999| Wed Feb 27 02:00:43.548 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 3|5||512daf180c9ae827b8ef2398 based on: 3|3||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:43.548 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|3||000000000000000000000000min: { num: 1707.0 }max: { num: MaxKey } on: { num: 1990.0 } (splitThreshold 11796480) (migrate suggested) m30999| Wed Feb 27 02:00:43.548 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 288 writeLock: 0 version: 2.4.0-rc2-pre- m30999| Wed Feb 27 02:00:43.548 [conn1] recently split chunk: { min: { num: 1990.0 }, max: { num: MaxKey } } already in the best shard: shard0001:localhost:30001 m30999| Wed Feb 27 02:00:43.548 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|5, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 13 m30999| Wed Feb 27 02:00:43.548 [conn1] setShardVersion success: { oldVersion: Timestamp 3000|0, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 } j:19 : 47 m30999| Wed Feb 27 02:00:43.548 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|5||000000000000000000000000min: { num: 1990.0 }max: { num: MaxKey } dataWritten: 4719644 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:43.548 [conn5] request split points lookup for chunk test.foo { : 1990.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:43.548 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:44.984 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:00:44.984 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 ) m30999| Wed Feb 27 02:00:44.984 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41: m30999| { "state" : 1, m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:Balancer:41", m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41", m30999| "when" : { "$date" : "Wed Feb 27 02:00:44 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512daf1c0c9ae827b8ef2399" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512daf178fcf9d0e1dbd1e07" } } m30999| Wed Feb 27 02:00:44.984 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf1c0c9ae827b8ef2399 m30999| Wed Feb 27 02:00:44.984 [Balancer] *** start balancing round m30999| Wed Feb 27 02:00:44.984 [Balancer] waitForDelete: 0 m30999| Wed Feb 27 02:00:44.984 [Balancer] secondaryThrottle: 1 m30000| Wed Feb 27 02:00:44.984 [conn3] build index config.tags { _id: 1 } m30000| Wed Feb 27 02:00:44.984 [conn3] build index done. scanned 0 total records. 0.001 secs m30000| Wed Feb 27 02:00:44.984 [conn3] info: creating collection config.tags on add index m30000| Wed Feb 27 02:00:44.984 [conn3] build index config.tags { ns: 1, min: 1 } m30000| Wed Feb 27 02:00:44.984 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Wed Feb 27 02:00:44.984 [Balancer] shard0001 has more chunks me:9 best: shard0000:1 m30999| Wed Feb 27 02:00:44.984 [Balancer] collection : test.foo m30999| Wed Feb 27 02:00:44.984 [Balancer] donor : shard0001 chunks on 9 m30999| Wed Feb 27 02:00:44.984 [Balancer] receiver : shard0000 chunks on 1 m30999| Wed Feb 27 02:00:44.984 [Balancer] threshold : 2 m30999| Wed Feb 27 02:00:44.984 [Balancer] ns: test.foo going to move { _id: "test.foo-num_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398'), ns: "test.foo", min: { num: MinKey }, max: { num: 0.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Wed Feb 27 02:00:44.984 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { num: MinKey }max: { num: 0.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Wed Feb 27 02:00:44.984 [conn5] warning: secondaryThrottle selected but no replication m30001| Wed Feb 27 02:00:44.984 [conn5] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: MinKey }, max: { num: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Wed Feb 27 02:00:44.984 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf1c051f47eaec1d92be m30001| Wed Feb 27 02:00:44.984 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:44-512daf1c051f47eaec1d92bf", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948444984), what: "moveChunk.start", ns: "test.foo", details: { min: { num: MinKey }, max: { num: 0.0 }, from: "shard0001", to: "shard0000" } } m30001| Wed Feb 27 02:00:44.984 [conn5] moveChunk request accepted at version 3|5||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:44.984 [conn5] moveChunk number of documents: 0 m30000| Wed Feb 27 02:00:44.984 [migrateThread] starting receiving-end of migration of chunk { num: MinKey } -> { num: 0.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Wed Feb 27 02:00:44.984 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Wed Feb 27 02:00:44.984 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: MinKey } -> { num: 0.0 } m30001| Wed Feb 27 02:00:44.999 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 0.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:45.015 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 0.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:45.030 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 0.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:45.046 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"localhost:30001", min: { num: MinKey }, max: { num: 0.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:45.062 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: MinKey } -> { num: 0.0 } m30001| Wed Feb 27 02:00:45.077 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 0.0 }, shardKeyPattern: { num: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:45.077 [conn5] moveChunk setting version to: 4|0||512daf180c9ae827b8ef2398 m30000| Wed Feb 27 02:00:45.077 [conn13] Waiting for commit to finish m30000| Wed Feb 27 02:00:45.093 [conn13] Waiting for commit to finish m30000| Wed Feb 27 02:00:45.093 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: MinKey } -> { num: 0.0 } m30000| Wed Feb 27 02:00:45.093 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: MinKey } -> { num: 0.0 } m30000| Wed Feb 27 02:00:45.093 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:45-512daf1d83c83aaea83d462c", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1361948445093), what: "moveChunk.to", ns: "test.foo", details: { min: { num: MinKey }, max: { num: 0.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 97 } } m30001| Wed Feb 27 02:00:45.108 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: MinKey }, max: { num: 0.0 }, shardKeyPattern: { num: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Wed Feb 27 02:00:45.108 [conn5] moveChunk updating self version to: 4|1||512daf180c9ae827b8ef2398 through { num: 0.0 } -> { num: 9.0 } for collection 'test.foo' m30001| Wed Feb 27 02:00:45.140 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:45-512daf1d051f47eaec1d92c0", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948445140), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: MinKey }, max: { num: 0.0 }, from: "shard0001", to: "shard0000" } } m30001| Wed Feb 27 02:00:45.140 [conn5] MigrateFromStatus::done About to acquire global write lock to exit critical section m30998| Wed Feb 27 02:00:45.186 [Balancer] Refreshing MaxChunkSize: 50 m30001| Wed Feb 27 02:00:45.233 [conn5] MigrateFromStatus::done Global lock acquired m30001| Wed Feb 27 02:00:45.233 [conn4] insert test.foo ninserted:1 keyUpdates:0 locks(micros) w:247 1678ms m30001| Wed Feb 27 02:00:45.233 [conn5] forking for cleanup of chunk data m30998| Wed Feb 27 02:00:45.249 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30998:1361948421:41 ) m30001| Wed Feb 27 02:00:45.249 [conn5] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Wed Feb 27 02:00:45.249 [conn5] MigrateFromStatus::done Global lock acquired m30001| Wed Feb 27 02:00:45.249 [cleanupOldData-512daf1d051f47eaec1d92c1] (start) waiting to cleanup test.foo from { num: MinKey } -> { num: 0.0 }, # cursors remaining: 0 m30998| Wed Feb 27 02:00:45.249 [Balancer] checking last ping for lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' against 
process and ping Wed Dec 31 19:00:00 1969 m30998| Wed Feb 27 02:00:45.249 [Balancer] creating new connection to:localhost:30000 m30001| Wed Feb 27 02:00:45.249 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. m30001| Wed Feb 27 02:00:45.249 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:45-512daf1d051f47eaec1d92c2", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948445249), what: "moveChunk.from", ns: "test.foo", details: { min: { num: MinKey }, max: { num: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 83, step5 of 6: 170, step6 of 6: 0 } } m30001| Wed Feb 27 02:00:45.249 [conn5] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: MinKey }, max: { num: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:91 r:71 w:22 reslen:37 259ms m30999| Wed Feb 27 02:00:45.249 [Balancer] moveChunk result: { ok: 1.0 } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000000') }, ok: 1.0 } m30998| Wed Feb 27 02:00:45.249 BackgroundJob starting: ConnectBG m30000| Wed Feb 27 02:00:45.249 [initandlisten] connection accepted from 127.0.0.1:60836 #18 (18 connections now open) m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000000 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 3|5||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14ca'), num: 2004.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:00:45.249 [Balancer] connected connection! m30999| Wed Feb 27 02:00:45.249 BackgroundJob starting: ConnectBG m30000| Wed Feb 27 02:00:45.249 [initandlisten] connection accepted from 127.0.0.1:60837 #19 (19 connections now open) m30999| Wed Feb 27 02:00:45.249 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 4|1||512daf180c9ae827b8ef2398 based on: 3|5||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] connected connection! m30999| Wed Feb 27 02:00:45.249 [Balancer] *** end of balancing round m30999| Wed Feb 27 02:00:45.249 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|1, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 14 m30999| Wed Feb 27 02:00:45.249 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' unlocked. 
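The balancer round above migrated the chunk from { num: MinKey } up to { num: 0.0 } off shard0001 to shard0000 and then released the balancer lock. A minimal sketch of the equivalent manual migration, assuming a shell connected to the mongos on port 30999 (the balancer passed secondaryThrottle: true and waitForDelete: false internally, as logged above):

    // Illustrative sketch only; the migration in the log was initiated by the balancer, not by this call.
    db.adminCommand({
        moveChunk: "test.foo",
        find: { num: MinKey },   // any key falling inside the [MinKey, 0.0) chunk
        to: "shard0000"
    });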
m30000| Wed Feb 27 02:00:45.249 [conn18] authenticate db: local { authenticate: 1, nonce: "44f297472bf7bfab", user: "__system", key: "2e4afe54a4424a2507c33917dd394bba" } m30000| Wed Feb 27 02:00:45.249 [conn19] authenticate db: local { authenticate: 1, nonce: "8f86cd3bcf45d62a", user: "__system", key: "75dde620996e4b11bbe325c239a3dca5" } m30998| Wed Feb 27 02:00:45.249 [Balancer] could not force lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' because elapsed time 0 <= takeover time 900000 m30998| Wed Feb 27 02:00:45.249 [Balancer] skipping balancing round because another balancer is active m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 4|1||512daf180c9ae827b8ef2398 based on: 3|5||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:00:45.249 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] connected connection! m30000| Wed Feb 27 02:00:45.249 [initandlisten] connection accepted from 127.0.0.1:60838 #20 (20 connections now open) m30000| Wed Feb 27 02:00:45.249 [conn20] authenticate db: local { authenticate: 1, nonce: "679fc52c7ac39a89", user: "__system", key: "3b631f82a6fd4665f3f314f1c25b9eae" } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] initializing shard connection to localhost:30000 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0000", shardHost: "localhost:30000" } 000000000232E450 14 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001 m30999| Wed Feb 27 02:00:45.249 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:00:45.249 [conn1] setShardVersion success: { oldVersion: Timestamp 3000|0, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] connected connection! 
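The WriteBackListener entries before and after this point show mongos replaying inserts that shard0001 returned as writebacks because the connection's shard version was stale relative to 4|0; once the router has reloaded the chunk manager to 4|1, each later entry just notes that the config information was already refreshed. As a sketch only (the test does not do this), a stale mongos can also be told to drop its cached routing table by hand:

    // Illustrative only: make this mongos discard its cached routing metadata so the
    // next operation reloads chunk information from the config server.
    db.adminCommand({ flushRouterConfig: 1 });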
m30001| Wed Feb 27 02:00:45.249 [initandlisten] connection accepted from 127.0.0.1:60839 #8 (8 connections now open) m30001| Wed Feb 27 02:00:45.249 [conn8] authenticate db: local { authenticate: 1, nonce: "207f070351f99c34", user: "__system", key: "dcc3a832d6fa3c7b529f811a032ddc66" } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] initializing shard connection to localhost:30001 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|1, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 000000000232E550 14 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000001') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000001 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14cb'), num: 2005.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000002') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000002 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14cc'), num: 2006.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000003') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000003 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14cd'), num: 2007.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.249 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.249 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|5||000000000000000000000000min: { num: 1990.0 }max: { num: MaxKey } dataWritten: 4758557 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:45.264 [conn2] request split points lookup for chunk test.foo { : 1990.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:45.264 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000004') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000004 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14ce'), num: 2008.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000005') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000005 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14cf'), num: 2009.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000006') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000006 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14d0'), num: 2010.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000007') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000007 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14d1'), num: 2011.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000008') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000008 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14d2'), num: 2012.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000009') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000009 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14d3'), num: 2013.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000000a') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000000a needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14d4'), num: 2014.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000000b') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000000b needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14d5'), num: 2015.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.264 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:45.280 [cleanupOldData-512daf1d051f47eaec1d92c1] waiting to remove documents for test.foo from { num: MinKey } -> { num: 0.0 } m30001| Wed Feb 27 02:00:45.280 [cleanupOldData-512daf1d051f47eaec1d92c1] moveChunk starting delete for: test.foo from { num: MinKey } -> { num: 0.0 } m30001| Wed Feb 27 02:00:45.280 [cleanupOldData-512daf1d051f47eaec1d92c1] moveChunk deleted 0 documents for test.foo from { num: MinKey } -> { num: 0.0 } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000000c') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000000c needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14d6'), num: 2016.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000000d') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000000d needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14d7'), num: 2017.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000000e') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000000e needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14d8'), num: 2018.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000000f') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000000f needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14d9'), num: 2019.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000010') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000010 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14da'), num: 2020.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000011') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000011 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14db'), num: 2021.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000012') }, ok: 1.0 } j:20 : 1732 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000012 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14dc'), num: 2022.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000013') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000013 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14dd'), num: 2023.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.280 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000014') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000014 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14de'), num: 2024.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000015') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000015 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14df'), num: 2025.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000016') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000016 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14e0'), num: 2026.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000017') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000017 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14e1'), num: 2027.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000018') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000018 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14e2'), num: 2028.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000019') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000019 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14e3'), num: 2029.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000001a') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000001a needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14e4'), num: 2030.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000001b') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000001b needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14e5'), num: 2031.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000001c') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000001c needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14e6'), num: 2032.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000001d') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000001d needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14e7'), num: 2033.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.296 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.311 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000001e') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.311 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000001e needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.311 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14e8'), num: 2034.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:45.311 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.311 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d000000000000001f') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.311 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d000000000000001f needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.311 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14e9'), num: 2035.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:45.311 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.311 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000020') }, ok: 1.0 } m30999| Wed Feb 27 02:00:45.311 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000020 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.311 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14ea'), num: 2036.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30001| Wed Feb 27 02:00:45.374 [conn2] request split points lookup for chunk test.foo { : 1990.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:45.311 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|5||000000000000000000000000min: { num: 1990.0 }max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960 m30999| Wed Feb 27 02:00:45.374 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:45.374 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Wed Feb 27 02:00:46.263 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:00:46.263 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 ) m30999| Wed Feb 27 02:00:46.263 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41: m30999| { "state" : 1, m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:Balancer:41", m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41", m30999| "when" : { "$date" : "Wed Feb 27 02:00:46 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512daf1e0c9ae827b8ef239a" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512daf1c0c9ae827b8ef2399" } } m30999| Wed Feb 27 02:00:46.263 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf1e0c9ae827b8ef239a m30999| Wed Feb 27 02:00:46.263 [Balancer] *** start balancing round m30999| Wed Feb 27 02:00:46.263 [Balancer] waitForDelete: 0 m30999| Wed Feb 27 02:00:46.263 [Balancer] secondaryThrottle: 1 m30999| Wed Feb 27 02:00:46.263 [Balancer] shard0001 is unavailable m30999| Wed Feb 27 02:00:46.263 [Balancer] collection : test.foo m30999| Wed Feb 27 02:00:46.263 [Balancer] donor : shard0000 chunks on 2 m30999| Wed Feb 27 02:00:46.263 [Balancer] receiver : shard0000 chunks on 2 m30999| Wed Feb 27 02:00:46.263 [Balancer] threshold : 2 m30999| Wed Feb 27 02:00:46.263 [Balancer] no need to move any chunk m30999| Wed Feb 27 02:00:46.263 [Balancer] *** end of balancing round m30999| Wed Feb 27 02:00:46.263 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' unlocked. 
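The balancer entries above show each mongos taking the distributed lock named 'balancer' on the config server before a round and releasing it afterwards. A minimal shell sketch for inspecting that lock while this cluster is up, assuming the config server is still at localhost:30000; config.locks and config.lockpings are the metadata collections the lock document and the LockPinger heartbeats live in:

    // Inspect the balancer's distributed lock (state: 0 = unlocked, 1 = being acquired, 2 = held).
    var configDB = new Mongo("localhost:30000").getDB("config");
    printjson(configDB.locks.findOne({ _id: "balancer" }));
    // Heartbeats written by the LockPinger threads seen in the log:
    configDB.lockpings.find().forEach(printjson);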
m30001| Wed Feb 27 02:00:46.528 [conn8] insert test.foo ninserted:1 keyUpdates:0 locks(micros) w:376 1155ms m30001| Wed Feb 27 02:00:46.528 [conn4] insert test.foo ninserted:1 keyUpdates:0 locks(micros) w:221 1155ms m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000021') }, ok: 1.0 } m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000021 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14eb'), num: 2037.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000022') }, ok: 1.0 } m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000022 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1bbc2c1eb1ee5f14ec'), num: 2038.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 j:21 : 1248 m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000023') }, ok: 1.0 } m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000023 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1dbc2c1eb1ee5f14ed'), num: 2039.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:46.528 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000024') }, ok: 1.0 } m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000024 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1dbc2c1eb1ee5f14ee'), num: 2040.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000025') }, ok: 1.0 } m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000025 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1dbc2c1eb1ee5f14ef'), num: 2041.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." } m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 4, instanceIdent: "AMAZONA-DFVK11N:30001", version: Timestamp 4000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), yourVersion: Timestamp 3000|0, yourVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), msg: BinData, id: ObjectId('512daf1d0000000000000026') }, ok: 1.0 } m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] connectionId: AMAZONA-DFVK11N:30001:4 writebackId: 512daf1d0000000000000026 needVersion : 4|0||512daf180c9ae827b8ef2398 mine : 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] op: insert len: 51284 ns: test.foo{ _id: ObjectId('512daf1dbc2c1eb1ee5f14f0'), num: 2042.0, s: "asocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfasdfnsadofnsadlkfnsaldknfsadasocsancdnsjfnsdnfsjdhfasdfasdfa..." 
} m30999| Wed Feb 27 02:00:46.544 [WriteBackListener-localhost:30001] wbl already reloaded config information for version 4|0||512daf180c9ae827b8ef2398, at version 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:46.559 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|5||000000000000000000000000min: { num: 1990.0 }max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:46.559 [conn5] request split points lookup for chunk test.foo { : 1990.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:46.559 [conn1] chunk not full enough to trigger auto-split { num: 2236.0 } j:22 : 62 m30999| Wed Feb 27 02:00:46.606 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|5||000000000000000000000000min: { num: 1990.0 }max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:46.606 [conn5] request split points lookup for chunk test.foo { : 1990.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:46.606 [conn1] chunk not full enough to trigger auto-split { num: 2236.0 } j:23 : 47 m30999| Wed Feb 27 02:00:46.653 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|5||000000000000000000000000min: { num: 1990.0 }max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:46.653 [conn5] request split points lookup for chunk test.foo { : 1990.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:46.653 [conn1] chunk not full enough to trigger auto-split { num: 2236.0 } j:24 : 47 m30001| Wed Feb 27 02:00:47.948 [conn4] insert test.foo ninserted:1 keyUpdates:0 locks(micros) w:236 1282ms m30999| Wed Feb 27 02:00:47.979 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|5||000000000000000000000000min: { num: 1990.0 }max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:47.979 [conn5] request split points lookup for chunk test.foo { : 1990.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:47.979 [conn5] max number of requested split points reached (2) before the end of chunk test.foo { : 1990.0 } -->> { : MaxKey } m30001| Wed Feb 27 02:00:47.979 [conn5] received splitChunk request: { splitChunk: "test.foo", keyPattern: { num: 1.0 }, min: { num: 1990.0 }, max: { num: MaxKey }, from: "shard0001", splitKeys: [ { num: 2518.0 } ], shardId: "test.foo-num_1990.0", configdb: "localhost:30000" } m30001| Wed Feb 27 02:00:47.979 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf1f051f47eaec1d92c3 m30001| Wed Feb 27 02:00:47.979 [conn5] splitChunk accepted at version 4|1||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:47.979 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:47-512daf1f051f47eaec1d92c4", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948447979), what: "split", ns: "test.foo", details: { before: { min: { num: 1990.0 }, max: { num: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { num: 1990.0 }, max: { num: 2518.0 }, lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') }, right: { min: { num: 2518.0 }, max: { num: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398') } } } m30001| Wed Feb 27 02:00:47.979 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. 
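The splitChunk exchange above ends with shard 30001 splitting the { num: 1990 } -->> MaxKey chunk at { num: 2518 } and logging the 'split' metadata event. A minimal sketch of the equivalent manual operation issued through the mongos from this run (assumed still listening on localhost:30999); the split admin command with a 'middle' key, and the sh.splitAt() helper, are both part of the 2.4 shell:

    // Manually split test.foo at the same key the autosplitter chose above.
    var adminDB = new Mongo("localhost:30999").getDB("admin");
    printjson(adminDB.runCommand({ split: "test.foo", middle: { num: 2518 } }));
    // Equivalent shell helper when connected to the mongos: sh.splitAt("test.foo", { num: 2518 });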
m30999| Wed Feb 27 02:00:47.979 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 4|3||512daf180c9ae827b8ef2398 based on: 4|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:47.979 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 3|5||000000000000000000000000min: { num: 1990.0 }max: { num: MaxKey } on: { num: 2518.0 } (splitThreshold 23592960) (migrate suggested) m30999| Wed Feb 27 02:00:47.979 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 288 writeLock: 0 version: 2.4.0-rc2-pre- m30999| Wed Feb 27 02:00:47.979 [conn1] recently split chunk: { min: { num: 2518.0 }, max: { num: MaxKey } } already in the best shard: shard0001:localhost:30001 m30999| Wed Feb 27 02:00:47.979 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|3, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 16 m30999| Wed Feb 27 02:00:47.979 [conn1] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 } m30999| Wed Feb 27 02:00:47.994 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 4|3||000000000000000000000000min: { num: 2518.0 }max: { num: MaxKey } dataWritten: 4762671 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:47.994 [conn5] request split points lookup for chunk test.foo { : 2518.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:47.994 [conn1] chunk not full enough to trigger auto-split no split entry j:25 : 1341 m30999| Wed Feb 27 02:00:48.041 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 4|3||000000000000000000000000min: { num: 2518.0 }max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:48.041 [conn5] request split points lookup for chunk test.foo { : 2518.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:48.041 [conn1] chunk not full enough to trigger auto-split no split entry j:26 : 47 m30999| Wed Feb 27 02:00:48.072 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 4|3||000000000000000000000000min: { num: 2518.0 }max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:48.072 [conn5] request split points lookup for chunk test.foo { : 2518.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:48.072 [conn1] chunk not full enough to trigger auto-split no split entry m30001| Wed Feb 27 02:00:49.336 [conn4] insert test.foo ninserted:1 keyUpdates:0 locks(micros) w:297 1244ms j:27 : 1264 m30999| Wed Feb 27 02:00:49.367 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 4|3||000000000000000000000000min: { num: 2518.0 }max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:49.367 [conn5] request split points lookup for chunk test.foo { : 2518.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:49.367 [conn1] chunk not full enough to trigger auto-split { num: 2764.0 } m30001| Wed Feb 27 02:00:49.367 [FileAllocator] allocating new datafile /data/db/auto21\test.3, filling with zeroes... 
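After the split, the mongos reloads its ChunkManager to version 4|3 and pushes that version to the shard with setShardVersion. A minimal sketch for checking the versions involved from the shell, assuming the mongos at localhost:30999; getShardVersion and the config.chunks collection are where the version numbers printed in these log lines come from:

    // Routing version the mongos holds for test.foo:
    var mongosConn = new Mongo("localhost:30999");
    printjson(mongosConn.getDB("admin").runCommand({ getShardVersion: "test.foo" }));
    // Highest chunk version recorded in the config metadata (printed in the log as e.g. 4|3):
    mongosConn.getDB("config").chunks.find({ ns: "test.foo" })
              .sort({ lastmod: -1 }).limit(1).forEach(printjson);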
j:28 : 78 m30999| Wed Feb 27 02:00:49.414 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 4|3||000000000000000000000000min: { num: 2518.0 }max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:49.414 [conn5] request split points lookup for chunk test.foo { : 2518.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:49.414 [conn1] chunk not full enough to trigger auto-split { num: 2764.0 } m30999| Wed Feb 27 02:00:49.461 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 4|3||000000000000000000000000min: { num: 2518.0 }max: { num: MaxKey } dataWritten: 4766715 splitThreshold: 23592960 m30001| Wed Feb 27 02:00:49.461 [conn5] request split points lookup for chunk test.foo { : 2518.0 } -->> { : MaxKey } m30999| Wed Feb 27 02:00:49.461 [conn1] chunk not full enough to trigger auto-split { num: 2764.0 } j:29 : 47 done inserting data Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:00:49.461 [conn1] authenticate db: admin { authenticate: 1, nonce: "55e6f17407f48ad4", user: "admin", key: "9f75ae97885bb3b3c51b31c273ecb212" } m30001| Wed Feb 27 02:00:49.461 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:00:50.475 [conn1] authenticate db: admin { authenticate: 1, nonce: "c642dbb1132283ba", user: "admin", key: "9d79ee3200db208ac50deaf2ab9dadf2" } m30001| Wed Feb 27 02:00:50.475 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } m30999| Wed Feb 27 02:00:50.943 [LockPinger] cluster localhost:30000 pinged successfully at Wed Feb 27 02:00:50 2013 by distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30999:1361948420:41', sleeping for 30000ms m30001| Wed Feb 27 02:00:50.990 [FileAllocator] done allocating datafile /data/db/auto21\test.3, size: 512MB, took 1.619 secs m30998| Wed Feb 27 02:00:51.177 [LockPinger] cluster localhost:30000 pinged successfully at Wed Feb 27 02:00:51 2013 by distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30998:1361948421:41', sleeping for 30000ms m30998| Wed Feb 27 02:00:51.255 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:00:51.255 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30998:1361948421:41 ) m30998| Wed Feb 27 02:00:51.255 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41: m30998| { "state" : 1, m30998| "who" : "AMAZONA-DFVK11N:30998:1361948421:41:Balancer:18467", m30998| "process" : "AMAZONA-DFVK11N:30998:1361948421:41", m30998| "when" : { "$date" : "Wed Feb 27 02:00:51 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "512daf238fcf9d0e1dbd1e08" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "512daf1e0c9ae827b8ef239a" } } m30998| Wed Feb 27 02:00:51.255 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' acquired, ts : 512daf238fcf9d0e1dbd1e08 m30998| Wed Feb 27 02:00:51.255 [Balancer] *** start balancing round m30998| Wed Feb 27 02:00:51.255 [Balancer] waitForDelete: 0 m30998| Wed Feb 27 02:00:51.255 [Balancer] secondaryThrottle: 1 m30998| Wed Feb 27 02:00:51.255 
[Balancer] DBConfig unserialize: test { _id: "test", partitioned: true, primary: "shard0001" } m30998| Wed Feb 27 02:00:51.255 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 4|3||512daf180c9ae827b8ef2398 based on: (empty) m30998| Wed Feb 27 02:00:51.255 [Balancer] shard0001 has more chunks me:9 best: shard0000:2 m30998| Wed Feb 27 02:00:51.255 [Balancer] collection : test.foo m30998| Wed Feb 27 02:00:51.255 [Balancer] donor : shard0001 chunks on 9 m30998| Wed Feb 27 02:00:51.255 [Balancer] receiver : shard0000 chunks on 2 m30998| Wed Feb 27 02:00:51.255 [Balancer] threshold : 2 m30998| Wed Feb 27 02:00:51.255 [Balancer] ns: test.foo going to move { _id: "test.foo-num_0.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398'), ns: "test.foo", min: { num: 0.0 }, max: { num: 9.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30998| Wed Feb 27 02:00:51.255 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { num: 0.0 }max: { num: 9.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Wed Feb 27 02:00:51.255 [conn3] warning: secondaryThrottle selected but no replication m30001| Wed Feb 27 02:00:51.255 [conn3] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 0.0 }, max: { num: 9.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_0.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Wed Feb 27 02:00:51.255 [conn3] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf23051f47eaec1d92c5 m30001| Wed Feb 27 02:00:51.255 [conn3] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:51-512daf23051f47eaec1d92c6", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60814", time: new Date(1361948451255), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 0.0 }, max: { num: 9.0 }, from: "shard0001", to: "shard0000" } } m30001| Wed Feb 27 02:00:51.255 [conn3] moveChunk request accepted at version 4|3||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:51.255 [conn3] moveChunk number of documents: 9 m30000| Wed Feb 27 02:00:51.255 [migrateThread] starting receiving-end of migration of chunk { num: 0.0 } -> { num: 9.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Wed Feb 27 02:00:51.270 [conn3] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 0.0 }, max: { num: 9.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:51.270 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Wed Feb 27 02:00:51.270 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 0.0 } -> { num: 9.0 } m30001| Wed Feb 27 02:00:51.286 [conn3] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 0.0 }, max: { num: 9.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 9, clonedBytes: 461295, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:51.302 [conn3] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 0.0 }, max: { num: 9.0 }, shardKeyPattern: { num: 
1.0 }, state: "catchup", counts: { cloned: 9, clonedBytes: 461295, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:51.317 [conn3] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 0.0 }, max: { num: 9.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 9, clonedBytes: 461295, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:51.317 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: 0.0 } -> { num: 9.0 } m30001| Wed Feb 27 02:00:51.348 [conn3] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 0.0 }, max: { num: 9.0 }, shardKeyPattern: { num: 1.0 }, state: "steady", counts: { cloned: 9, clonedBytes: 461295, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:51.348 [conn3] moveChunk setting version to: 5|0||512daf180c9ae827b8ef2398 m30000| Wed Feb 27 02:00:51.348 [conn13] Waiting for commit to finish m30000| Wed Feb 27 02:00:51.364 [conn13] Waiting for commit to finish m30000| Wed Feb 27 02:00:51.364 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 0.0 } -> { num: 9.0 } m30000| Wed Feb 27 02:00:51.364 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: 0.0 } -> { num: 9.0 } m30000| Wed Feb 27 02:00:51.364 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:51-512daf2383c83aaea83d462d", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1361948451364), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 0.0 }, max: { num: 9.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 91 } } m30001| Wed Feb 27 02:00:51.380 [conn3] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: 0.0 }, max: { num: 9.0 }, shardKeyPattern: { num: 1.0 }, state: "done", counts: { cloned: 9, clonedBytes: 461295, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Wed Feb 27 02:00:51.380 [conn3] moveChunk updating self version to: 5|1||512daf180c9ae827b8ef2398 through { num: 9.0 } -> { num: 292.0 } for collection 'test.foo' Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:00:51.489 [conn1] authenticate db: admin { authenticate: 1, nonce: "10de25e6a1920bcd", user: "admin", key: "554de931dfeec80f31cf4b2885ab7e6b" } m30001| Wed Feb 27 02:00:51.489 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } m30000| Wed Feb 27 02:00:51.707 [conn15] command config.$cmd command: { applyOps: [ { op: "u", b: false, ns: "config.chunks", o: { _id: "test.foo-num_0.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398'), ns: "test.foo", min: { num: 0.0 }, max: { num: 9.0 }, shard: "shard0000" }, o2: { _id: "test.foo-num_0.0" } }, { op: "u", b: false, ns: "config.chunks", o: { _id: "test.foo-num_9.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398'), ns: "test.foo", min: { num: 9.0 }, max: { num: 292.0 }, shard: "shard0001" }, o2: { _id: "test.foo-num_9.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "test.foo" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 4000|3 } } ] } ntoreturn:1 keyUpdates:0 locks(micros) W:488 reslen:72 339ms m30001| Wed Feb 27 02:00:51.707 [conn3] about to log metadata event: { _id: 
"AMAZONA-DFVK11N-2013-02-27T07:00:51-512daf23051f47eaec1d92c7", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60814", time: new Date(1361948451707), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 0.0 }, max: { num: 9.0 }, from: "shard0001", to: "shard0000" } } m30001| Wed Feb 27 02:00:51.707 [conn3] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Wed Feb 27 02:00:51.707 [conn3] MigrateFromStatus::done Global lock acquired m30001| Wed Feb 27 02:00:51.707 [conn3] forking for cleanup of chunk data m30001| Wed Feb 27 02:00:51.707 [conn3] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Wed Feb 27 02:00:51.707 [conn3] MigrateFromStatus::done Global lock acquired m30001| Wed Feb 27 02:00:51.707 [cleanupOldData-512daf23051f47eaec1d92c8] (start) waiting to cleanup test.foo from { num: 0.0 } -> { num: 9.0 }, # cursors remaining: 0 m30001| Wed Feb 27 02:00:51.707 [conn3] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. m30001| Wed Feb 27 02:00:51.707 [conn3] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:51-512daf23051f47eaec1d92c9", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60814", time: new Date(1361948451707), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 0.0 }, max: { num: 9.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 84, step5 of 6: 371, step6 of 6: 0 } } m30001| Wed Feb 27 02:00:51.707 [conn3] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 0.0 }, max: { num: 9.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_0.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:24 r:114 w:24 reslen:37 460ms m30998| Wed Feb 27 02:00:51.707 [Balancer] moveChunk result: { ok: 1.0 } m30998| Wed Feb 27 02:00:51.707 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 5|1||512daf180c9ae827b8ef2398 based on: 4|3||512daf180c9ae827b8ef2398 m30998| Wed Feb 27 02:00:51.707 [Balancer] *** end of balancing round m30998| Wed Feb 27 02:00:51.723 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' unlocked. 
m30001| Wed Feb 27 02:00:51.738 [cleanupOldData-512daf23051f47eaec1d92c8] waiting to remove documents for test.foo from { num: 0.0 } -> { num: 9.0 } m30001| Wed Feb 27 02:00:51.738 [cleanupOldData-512daf23051f47eaec1d92c8] moveChunk starting delete for: test.foo from { num: 0.0 } -> { num: 9.0 } m30001| Wed Feb 27 02:00:51.738 [cleanupOldData-512daf23051f47eaec1d92c8] moveChunk deleted 9 documents for test.foo from { num: 0.0 } -> { num: 9.0 } m30999| Wed Feb 27 02:00:52.269 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:00:52.269 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 ) m30999| Wed Feb 27 02:00:52.269 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41: m30999| { "state" : 1, m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:Balancer:41", m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41", m30999| "when" : { "$date" : "Wed Feb 27 02:00:52 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512daf240c9ae827b8ef239b" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512daf238fcf9d0e1dbd1e08" } } m30999| Wed Feb 27 02:00:52.269 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf240c9ae827b8ef239b m30999| Wed Feb 27 02:00:52.269 [Balancer] *** start balancing round m30999| Wed Feb 27 02:00:52.269 [Balancer] waitForDelete: 0 m30999| Wed Feb 27 02:00:52.269 [Balancer] secondaryThrottle: 1 m30999| Wed Feb 27 02:00:52.269 [Balancer] shard0001 has more chunks me:8 best: shard0000:3 m30999| Wed Feb 27 02:00:52.269 [Balancer] collection : test.foo m30999| Wed Feb 27 02:00:52.269 [Balancer] donor : shard0001 chunks on 8 m30999| Wed Feb 27 02:00:52.269 [Balancer] receiver : shard0000 chunks on 3 m30999| Wed Feb 27 02:00:52.269 [Balancer] threshold : 2 m30999| Wed Feb 27 02:00:52.269 [Balancer] ns: test.foo going to move { _id: "test.foo-num_9.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398'), ns: "test.foo", min: { num: 9.0 }, max: { num: 292.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Wed Feb 27 02:00:52.269 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|5||000000000000000000000000min: { num: 9.0 }max: { num: 292.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Wed Feb 27 02:00:52.269 [conn5] warning: secondaryThrottle selected but no replication m30001| Wed Feb 27 02:00:52.269 [conn5] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 9.0 }, max: { num: 292.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_9.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Wed Feb 27 02:00:52.269 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf24051f47eaec1d92ca m30001| Wed Feb 27 02:00:52.269 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:52-512daf24051f47eaec1d92cb", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948452269), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 9.0 }, max: { num: 292.0 }, from: "shard0001", to: "shard0000" } } m30001| Wed Feb 27 02:00:52.269 [conn5] moveChunk request accepted at version 
5|1||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:52.269 [conn5] moveChunk number of documents: 283 m30000| Wed Feb 27 02:00:52.269 [migrateThread] starting receiving-end of migration of chunk { num: 9.0 } -> { num: 292.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Wed Feb 27 02:00:52.284 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:52.300 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:52.316 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:52.331 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:52.362 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:52.409 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:52.487 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:00:52.503 [conn1] authenticate db: admin { authenticate: 1, nonce: "1ae94075944c487d", user: "admin", key: "d14d97a825ad68e9eb18653cd276a650" } m30001| Wed Feb 27 02:00:52.503 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } m30000| Wed Feb 27 02:00:52.596 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Wed Feb 27 02:00:52.596 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 9.0 } -> { num: 292.0 } m30001| Wed Feb 27 02:00:52.628 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:52.893 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:53.408 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:00:53.517 [conn1] authenticate db: admin { authenticate: 1, nonce: "e4da67f7eef47db8", user: "admin", key: "b7a97aa072cc876f216c44463c6c65b1" } m30001| Wed Feb 27 02:00:53.517 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } m30001| Wed Feb 27 02:00:54.437 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:54.437 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: 9.0 } -> { num: 292.0 } m30998| Wed Feb 27 02:00:54.437 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:00:54.437 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30998:1361948421:41 ) m30998| Wed Feb 27 02:00:54.437 [Balancer] checking last ping for lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' against process and ping Wed Dec 31 19:00:00 1969 m30998| Wed Feb 27 02:00:54.437 [Balancer] could not force lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' because elapsed time 0 <= takeover time 900000 m30998| Wed Feb 27 02:00:54.437 [Balancer] skipping balancing round because another balancer is active Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:00:54.531 [conn1] authenticate db: admin { authenticate: 1, nonce: "1c0b33798c9944af", user: "admin", key: "184511fb4ab8d01b0437dc0099eee019" } m30001| Wed Feb 27 02:00:54.531 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } m30001| Wed Feb 27 02:00:55.467 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "steady", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:55.467 [conn5] moveChunk setting version to: 6|0||512daf180c9ae827b8ef2398 m30000| Wed Feb 27 02:00:55.467 [conn13] Waiting for commit to finish m30000| Wed Feb 27 02:00:55.482 [conn13] Waiting for commit to finish m30000| Wed Feb 27 02:00:55.482 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 9.0 } -> { num: 292.0 } m30000| Wed Feb 27 02:00:55.482 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: 9.0 } -> { num: 292.0 } m30000| Wed Feb 27 02:00:55.482 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:55-512daf2783c83aaea83d462e", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new 
Date(1361948455482), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 9.0 }, max: { num: 292.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 327, step4 of 5: 0, step5 of 5: 2875 } } m30001| Wed Feb 27 02:00:55.498 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: 9.0 }, max: { num: 292.0 }, shardKeyPattern: { num: 1.0 }, state: "done", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Wed Feb 27 02:00:55.498 [conn5] moveChunk updating self version to: 6|1||512daf180c9ae827b8ef2398 through { num: 292.0 } -> { num: 575.0 } for collection 'test.foo' m30001| Wed Feb 27 02:00:55.498 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:55-512daf27051f47eaec1d92cc", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948455498), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 9.0 }, max: { num: 292.0 }, from: "shard0001", to: "shard0000" } } m30001| Wed Feb 27 02:00:55.498 [conn5] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Wed Feb 27 02:00:55.498 [conn5] MigrateFromStatus::done Global lock acquired m30001| Wed Feb 27 02:00:55.498 [conn5] forking for cleanup of chunk data m30001| Wed Feb 27 02:00:55.498 [conn5] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Wed Feb 27 02:00:55.498 [conn5] MigrateFromStatus::done Global lock acquired m30001| Wed Feb 27 02:00:55.498 [cleanupOldData-512daf27051f47eaec1d92cd] (start) waiting to cleanup test.foo from { num: 9.0 } -> { num: 292.0 }, # cursors remaining: 0 m30001| Wed Feb 27 02:00:55.498 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. m30001| Wed Feb 27 02:00:55.498 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:55-512daf27051f47eaec1d92ce", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948455498), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 9.0 }, max: { num: 292.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 3189, step5 of 6: 31, step6 of 6: 0 } } m30001| Wed Feb 27 02:00:55.498 [conn5] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 9.0 }, max: { num: 292.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_9.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:25 r:612 w:23 reslen:37 3226ms m30999| Wed Feb 27 02:00:55.498 [Balancer] moveChunk result: { ok: 1.0 } m30999| Wed Feb 27 02:00:55.498 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 6|1||512daf180c9ae827b8ef2398 based on: 4|3||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:55.498 [Balancer] *** end of balancing round m30999| Wed Feb 27 02:00:55.498 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' unlocked. 
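This second migration walks the receiving side through clone -> catchup -> steady before the donor commits, and the moveChunk.from event records per-step timings (step1 of 6 through step6 of 6). A minimal sketch for watching an in-flight migration from the donor shard with currentOp; that migration operations expose a "step N of 6" progress message in op.msg is an assumption, not something this log confirms:

    // Watch an in-flight moveChunk from the donor shard (assumed: localhost:30001).
    var donorAdmin = new Mongo("localhost:30001").getDB("admin");
    donorAdmin.currentOp().inprog.forEach(function (op) {
        // Assumption: migration ops report progress like "step 4 of 6" in op.msg.
        if (op.msg && /step ?\d+ of 6/.test(op.msg)) {
            printjson({ opid: op.opid, msg: op.msg, secs_running: op.secs_running });
        }
    });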
m30001| Wed Feb 27 02:00:55.529 [cleanupOldData-512daf27051f47eaec1d92cd] waiting to remove documents for test.foo from { num: 9.0 } -> { num: 292.0 } m30001| Wed Feb 27 02:00:55.529 [cleanupOldData-512daf27051f47eaec1d92cd] moveChunk starting delete for: test.foo from { num: 9.0 } -> { num: 292.0 } Caught exception while authenticating connection: "[Authenticating connection: connection to localhost:30001] timed out after 5000ms ( 6 tries )" datasize: { "estimate" : false, "size" : 132707200, "numObjects" : 2493, "millis" : 8, "ok" : 1 } ShardingTest test.foo-num_MinKey 4000|0 { "num" : { "$minKey" : 1 } } -> { "num" : 0 } shard0000 test.foo test.foo-num_0.0 5000|0 { "num" : 0 } -> { "num" : 9 } shard0000 test.foo test.foo-num_9.0 6000|0 { "num" : 9 } -> { "num" : 292 } shard0000 test.foo test.foo-num_292.0 6000|1 { "num" : 292 } -> { "num" : 575 } shard0001 test.foo test.foo-num_575.0 1000|9 { "num" : 575 } -> { "num" : 858 } shard0001 test.foo test.foo-num_858.0 1000|11 { "num" : 858 } -> { "num" : 1141 } shard0001 test.foo test.foo-num_1141.0 3000|1 { "num" : 1141 } -> { "num" : 1424 } shard0000 test.foo test.foo-num_1424.0 3000|2 { "num" : 1424 } -> { "num" : 1707 } shard0001 test.foo test.foo-num_1707.0 3000|4 { "num" : 1707 } -> { "num" : 1990 } shard0001 test.foo test.foo-num_1990.0 4000|2 { "num" : 1990 } -> { "num" : 2518 } shard0001 test.foo test.foo-num_2518.0 4000|3 { "num" : 2518 } -> { "num" : { "$maxKey" : 1 } } shard0001 test.foo Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:00:55.545 [conn1] authenticate db: admin { authenticate: 1, nonce: "ca2758b412cc0545", user: "admin", key: "d8668c8a60e413346f63215038e79de9" } m30001| Wed Feb 27 02:00:55.545 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } m30001| Wed Feb 27 02:00:55.560 [cleanupOldData-512daf27051f47eaec1d92cd] moveChunk deleted 283 documents for test.foo from { num: 9.0 } -> { num: 292.0 } m30999| Wed Feb 27 02:00:56.512 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:00:56.512 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 ) m30999| Wed Feb 27 02:00:56.512 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41: m30999| { "state" : 1, m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:Balancer:41", m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41", m30999| "when" : { "$date" : "Wed Feb 27 02:00:56 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512daf280c9ae827b8ef239c" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512daf240c9ae827b8ef239b" } } m30999| Wed Feb 27 02:00:56.512 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf280c9ae827b8ef239c m30999| Wed Feb 27 02:00:56.512 [Balancer] *** start balancing round m30999| Wed Feb 27 02:00:56.512 [Balancer] waitForDelete: 0 m30999| Wed Feb 27 02:00:56.512 [Balancer] secondaryThrottle: 1 m30999| Wed Feb 27 02:00:56.512 [Balancer] shard0001 has more chunks me:7 best: shard0000:4 m30999| Wed Feb 27 02:00:56.512 [Balancer] collection : test.foo m30999| Wed Feb 27 02:00:56.512 [Balancer] donor : shard0001 chunks on 7 m30999| Wed Feb 27 02:00:56.512 [Balancer] receiver : shard0000 chunks on 4 m30999| Wed Feb 27 02:00:56.512 [Balancer] 
threshold : 2 m30999| Wed Feb 27 02:00:56.512 [Balancer] ns: test.foo going to move { _id: "test.foo-num_292.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('512daf180c9ae827b8ef2398'), ns: "test.foo", min: { num: 292.0 }, max: { num: 575.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Wed Feb 27 02:00:56.512 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 6|1||000000000000000000000000min: { num: 292.0 }max: { num: 575.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Wed Feb 27 02:00:56.512 [conn5] warning: secondaryThrottle selected but no replication m30001| Wed Feb 27 02:00:56.512 [conn5] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 292.0 }, max: { num: 575.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_292.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Wed Feb 27 02:00:56.512 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' acquired, ts : 512daf28051f47eaec1d92cf m30001| Wed Feb 27 02:00:56.512 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:56-512daf28051f47eaec1d92d0", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948456512), what: "moveChunk.start", ns: "test.foo", details: { min: { num: 292.0 }, max: { num: 575.0 }, from: "shard0001", to: "shard0000" } } m30001| Wed Feb 27 02:00:56.512 [conn5] moveChunk request accepted at version 6|1||512daf180c9ae827b8ef2398 m30001| Wed Feb 27 02:00:56.512 [conn5] moveChunk number of documents: 283 m30000| Wed Feb 27 02:00:56.512 [migrateThread] starting receiving-end of migration of chunk { num: 292.0 } -> { num: 575.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Wed Feb 27 02:00:56.528 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:56.543 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:00:56.559 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:56.574 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:56.590 [conn1] authenticate db: admin { authenticate: 1, nonce: "a0f812ef9f3d0218", user: "admin", key: "bb9b3c0cc231eb27bb936210089c75b4" } m30001| Wed Feb 27 02:00:56.590 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } 
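Interleaved with the migrations, the shell keeps retrying MONGODB-CR authentication against shard 30001 and failing with "couldn't find user admin@admin" because no admin user exists on that mongod yet. A minimal sketch of the same login from the shell; the commented addUser call is only an assumption about how such a user would be created in the 2.4 series, since the log never shows it being added on 30001:

    // Authenticate as admin directly against the shard (assumed: localhost:30001).
    var shardAdmin = new Mongo("localhost:30001").getDB("admin");
    // Assumption: the user would first have to be created, e.g. with the 2.4 helper:
    //   shardAdmin.addUser({ user: "admin", pwd: "password", roles: ["userAdminAnyDatabase"] });
    var ok = shardAdmin.auth("admin", "password");   // MONGODB-CR challenge-response under the hood
    print(ok ? "authenticated" : "auth failed");     // the log above shows the failing case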
m30001| Wed Feb 27 02:00:56.606 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:56.652 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:56.730 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:56.840 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Wed Feb 27 02:00:56.840 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 292.0 } -> { num: 575.0 } m30001| Wed Feb 27 02:00:56.871 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:57.136 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:00:57.604 [conn1] authenticate db: admin { authenticate: 1, nonce: "867dcf15f4def87f", user: "admin", key: "1ccff7fdabf35768652e14069e7ba558" } m30001| Wed Feb 27 02:00:57.604 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } m30001| Wed Feb 27 02:00:57.651 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:00:58.618 [conn1] authenticate db: admin { authenticate: 1, nonce: "b31b2709b425e0aa", user: "admin", key: "15fe395805944f74ac9cd1c5b1ac95dc" } m30001| Wed Feb 27 02:00:58.618 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } m30001| Wed Feb 27 02:00:58.680 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "catchup", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Wed Feb 27 02:00:59.195 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: 292.0 } -> { num: 575.0 } Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 
27 02:00:59.632 [conn1] authenticate db: admin { authenticate: 1, nonce: "848095750741aac4", user: "admin", key: "4d126daf69639eed45ad1d383b897e63" } m30001| Wed Feb 27 02:00:59.632 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } m30001| Wed Feb 27 02:00:59.710 [conn5] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "steady", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Wed Feb 27 02:00:59.710 [conn5] moveChunk setting version to: 7|0||512daf180c9ae827b8ef2398 m30000| Wed Feb 27 02:00:59.710 [conn13] Waiting for commit to finish m30000| Wed Feb 27 02:00:59.726 [conn13] Waiting for commit to finish m30000| Wed Feb 27 02:00:59.726 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { num: 292.0 } -> { num: 575.0 } m30000| Wed Feb 27 02:00:59.726 [migrateThread] migrate commit flushed to journal for 'test.foo' { num: 292.0 } -> { num: 575.0 } m30000| Wed Feb 27 02:00:59.726 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:59-512daf2b83c83aaea83d462f", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1361948459726), what: "moveChunk.to", ns: "test.foo", details: { min: { num: 292.0 }, max: { num: 575.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 327, step4 of 5: 0, step5 of 5: 2876 } } m30001| Wed Feb 27 02:00:59.741 [conn5] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { num: 292.0 }, max: { num: 575.0 }, shardKeyPattern: { num: 1.0 }, state: "done", counts: { cloned: 283, clonedBytes: 14505165, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Wed Feb 27 02:00:59.741 [conn5] moveChunk updating self version to: 7|1||512daf180c9ae827b8ef2398 through { num: 575.0 } -> { num: 858.0 } for collection 'test.foo' m30001| Wed Feb 27 02:00:59.741 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:59-512daf2b051f47eaec1d92d1", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948459741), what: "moveChunk.commit", ns: "test.foo", details: { min: { num: 292.0 }, max: { num: 575.0 }, from: "shard0001", to: "shard0000" } } m30001| Wed Feb 27 02:00:59.741 [conn5] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Wed Feb 27 02:00:59.741 [conn5] MigrateFromStatus::done Global lock acquired m30001| Wed Feb 27 02:00:59.741 [conn5] forking for cleanup of chunk data m30001| Wed Feb 27 02:00:59.741 [conn5] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Wed Feb 27 02:00:59.741 [conn5] MigrateFromStatus::done Global lock acquired m30001| Wed Feb 27 02:00:59.741 [cleanupOldData-512daf2b051f47eaec1d92d2] (start) waiting to cleanup test.foo from { num: 292.0 } -> { num: 575.0 }, # cursors remaining: 0 m30001| Wed Feb 27 02:00:59.741 [conn5] distributed lock 'test.foo/AMAZONA-DFVK11N:30001:1361948440:41' unlocked. 
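Every "about to log metadata event" line above corresponds to a document written to the config changelog (split, moveChunk.start, moveChunk.to, moveChunk.commit, moveChunk.from). A minimal sketch for reading that history back, assuming the config server at localhost:30000; config.changelog is the collection these events land in, with the same what/ns/details fields the log prints:

    // Dump the sharding event history for test.foo from the config server.
    var configDB = new Mongo("localhost:30000").getDB("config");
    configDB.changelog.find({ ns: "test.foo" }).sort({ time: 1 }).forEach(function (e) {
        print(e.time + "  " + e.what + "  " + tojson(e.details));
    });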
m30001| Wed Feb 27 02:00:59.741 [conn5] about to log metadata event: { _id: "AMAZONA-DFVK11N-2013-02-27T07:00:59-512daf2b051f47eaec1d92d3", server: "AMAZONA-DFVK11N", clientAddr: "127.0.0.1:60820", time: new Date(1361948459741), what: "moveChunk.from", ns: "test.foo", details: { min: { num: 292.0 }, max: { num: 575.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 3189, step5 of 6: 32, step6 of 6: 0 } } m30001| Wed Feb 27 02:00:59.741 [conn5] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { num: 292.0 }, max: { num: 575.0 }, maxChunkSizeBytes: 52428800, shardId: "test.foo-num_292.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:67 r:564 w:21 reslen:37 3226ms m30999| Wed Feb 27 02:00:59.741 [Balancer] moveChunk result: { ok: 1.0 } m30999| Wed Feb 27 02:00:59.741 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 7|1||512daf180c9ae827b8ef2398 based on: 6|1||512daf180c9ae827b8ef2398 m30999| Wed Feb 27 02:00:59.741 [Balancer] *** end of balancing round m30999| Wed Feb 27 02:00:59.741 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' unlocked. m30001| Wed Feb 27 02:00:59.772 [cleanupOldData-512daf2b051f47eaec1d92d2] waiting to remove documents for test.foo from { num: 292.0 } -> { num: 575.0 } m30001| Wed Feb 27 02:00:59.772 [cleanupOldData-512daf2b051f47eaec1d92d2] moveChunk starting delete for: test.foo from { num: 292.0 } -> { num: 575.0 } m30001| Wed Feb 27 02:00:59.788 [cleanupOldData-512daf2b051f47eaec1d92d2] moveChunk deleted 283 documents for test.foo from { num: 292.0 } -> { num: 575.0 } m30998| Wed Feb 27 02:01:00.443 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:01:00.443 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30998:1361948421:41 ) m30998| Wed Feb 27 02:01:00.443 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41: m30998| { "state" : 1, m30998| "who" : "AMAZONA-DFVK11N:30998:1361948421:41:Balancer:18467", m30998| "process" : "AMAZONA-DFVK11N:30998:1361948421:41", m30998| "when" : { "$date" : "Wed Feb 27 02:01:00 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "512daf2c8fcf9d0e1dbd1e09" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "512daf280c9ae827b8ef239c" } } m30998| Wed Feb 27 02:01:00.443 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' acquired, ts : 512daf2c8fcf9d0e1dbd1e09 m30998| Wed Feb 27 02:01:00.443 [Balancer] *** start balancing round m30998| Wed Feb 27 02:01:00.443 [Balancer] waitForDelete: 0 m30998| Wed Feb 27 02:01:00.443 [Balancer] secondaryThrottle: 1 m30998| Wed Feb 27 02:01:00.443 [Balancer] shard0001 has more chunks me:6 best: shard0000:5 m30998| Wed Feb 27 02:01:00.443 [Balancer] collection : test.foo m30998| Wed Feb 27 02:01:00.443 [Balancer] donor : shard0001 chunks on 6 m30998| Wed Feb 27 02:01:00.443 [Balancer] receiver : shard0000 chunks on 5 m30998| Wed Feb 27 02:01:00.443 [Balancer] threshold : 2 m30998| Wed Feb 27 02:01:00.443 [Balancer] no need to move any chunk m30998| Wed Feb 27 02:01:00.443 [Balancer] *** end of balancing round m30998| Wed Feb 27 02:01:00.443 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' unlocked. 
Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:01:00.646 [conn1] authenticate db: admin { authenticate: 1, nonce: "86aaa44a6cb6f3a8", user: "admin", key: "ddc608e97d0cb3014020b8b8d6714aa4" } m30001| Wed Feb 27 02:01:00.646 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } m30999| Wed Feb 27 02:01:00.755 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:01:00.755 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 ) m30999| Wed Feb 27 02:01:00.755 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41: m30999| { "state" : 1, m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:Balancer:41", m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41", m30999| "when" : { "$date" : "Wed Feb 27 02:01:00 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512daf2c0c9ae827b8ef239d" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512daf2c8fcf9d0e1dbd1e09" } } m30999| Wed Feb 27 02:01:00.755 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf2c0c9ae827b8ef239d m30999| Wed Feb 27 02:01:00.755 [Balancer] *** start balancing round m30999| Wed Feb 27 02:01:00.755 [Balancer] waitForDelete: 0 m30999| Wed Feb 27 02:01:00.755 [Balancer] secondaryThrottle: 1 m30999| Wed Feb 27 02:01:00.755 [Balancer] shard0001 has more chunks me:6 best: shard0000:5 m30999| Wed Feb 27 02:01:00.755 [Balancer] collection : test.foo m30999| Wed Feb 27 02:01:00.755 [Balancer] donor : shard0001 chunks on 6 m30999| Wed Feb 27 02:01:00.755 [Balancer] receiver : shard0000 chunks on 5 m30999| Wed Feb 27 02:01:00.755 [Balancer] threshold : 2 m30999| Wed Feb 27 02:01:00.755 [Balancer] no need to move any chunk m30999| Wed Feb 27 02:01:00.755 [Balancer] *** end of balancing round m30999| Wed Feb 27 02:01:00.755 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' unlocked. 
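The repeated "Authenticating to admin database as admin with mechanism MONGODB-CR" attempts above fail with error 18 because no admin user exists yet in admin.system.users on that shard. For reference, a minimal sketch of the legacy MONGODB-CR exchange that the "authenticate: 1, nonce, key" log lines correspond to; the user name and password here are hypothetical placeholders, and this illustrates the mechanism rather than the test's own code:

    // MONGODB-CR: the client proves knowledge of the password digest via an MD5 over a server nonce.
    var adminDB = db.getSiblingDB("admin");
    var nonce = adminDB.runCommand({ getnonce: 1 }).nonce;
    var pwdDigest = hex_md5("admin" + ":mongo:" + "password");   // stored form in admin.system.users (placeholders)
    var key = hex_md5(nonce + "admin" + pwdDigest);              // per-request proof, matches the "key" field logged above
    printjson(adminDB.runCommand({ authenticate: 1, user: "admin", nonce: nonce, key: key }));
    // When admin@admin is missing, the server answers { ok: 0, errmsg: "auth fails" } (code 18), as seen above.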
Caught exception while authenticating connection: "[Authenticating connection: connection to localhost:30001] timed out after 5000ms ( 6 tries )" Counts: 5752142 checkpoint B m30999| Wed Feb 27 02:01:01.660 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 7000|0, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0000", shardHost: "localhost:30000" } 000000000053BF90 18 m30999| Wed Feb 27 02:01:01.660 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 } m30999| Wed Feb 27 02:01:02.284 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 7000|1, versionEpoch: ObjectId('512daf180c9ae827b8ef2398'), serverID: ObjectId('512daf040c9ae827b8ef2393'), shard: "shard0001", shardHost: "localhost:30001" } 0000000000549430 18 m30999| Wed Feb 27 02:01:02.284 [conn1] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('512daf180c9ae827b8ef2398'), ok: 1.0 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:20 GMT-0500 (Eastern Standard Time) starting upgrade of config database config.version { "from" : 0, "to" : 4 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:20 GMT-0500 (Eastern Standard Time) finished upgrade of config database config.version { "from" : 0, "to" : 4 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:40 GMT-0500 (Eastern Standard Time) split test.foo { "num" : { "$minKey" : 1 } } -> { "num" : { "$maxKey" : 1 } } -->> ({ "num" : { "$minKey" : 1 } } -> { "num" : 0 }),({ "num" : 0 } -> { "num" : { "$maxKey" : 1 } }) ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:40 GMT-0500 (Eastern Standard Time) split test.foo { "num" : 0 } -> { "num" : { "$maxKey" : 1 } } -->> ({ "num" : 0 } -> { "num" : 9 }),({ "num" : 9 } -> { "num" : { "$maxKey" : 1 } }) ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:40 GMT-0500 (Eastern Standard Time) split test.foo { "num" : 9 } -> { "num" : { "$maxKey" : 1 } } -->> ({ "num" : 9 } -> { "num" : 292 }),({ "num" : 292 } -> { "num" : { "$maxKey" : 1 } }) ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:40 GMT-0500 (Eastern Standard Time) split test.foo { "num" : 292 } -> { "num" : { "$maxKey" : 1 } } -->> ({ "num" : 292 } -> { "num" : 575 }),({ "num" : 575 } -> { "num" : { "$maxKey" : 1 } }) ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:41 GMT-0500 (Eastern Standard Time) split test.foo { "num" : 575 } -> { "num" : { "$maxKey" : 1 } } -->> ({ "num" : 575 } -> { "num" : 858 }),({ "num" : 858 } -> { "num" : { "$maxKey" : 1 } }) ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:41 GMT-0500 (Eastern Standard Time) split test.foo { "num" : 858 } -> { "num" : { "$maxKey" : 1 } } -->> ({ "num" : 858 } -> { "num" : 1141 }),({ "num" : 1141 } -> { "num" : { "$maxKey" : 1 } }) ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:41 GMT-0500 (Eastern Standard Time) moveChunk.start test.foo { "min" : { "num" : 1141 }, "max" : { "num" : { "$maxKey" : 1 } }, "from" : "shard0001", "to" : "shard0000" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:41 GMT-0500 (Eastern Standard Time) moveChunk.to test.foo { "min" : { "num" : 1141 }, "max" : { "num" : { "$maxKey" : 1 } }, "step1 of 5" : 276, "step2 of 5" : 0, "step3 of 5" : 68, "step4 of 5" : 0, "step5 of 5" : 288 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:42 
GMT-0500 (Eastern Standard Time) moveChunk.commit test.foo { "min" : { "num" : 1141 }, "max" : { "num" : { "$maxKey" : 1 } }, "from" : "shard0001", "to" : "shard0000" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:42 GMT-0500 (Eastern Standard Time) moveChunk.from test.foo { "min" : { "num" : 1141 }, "max" : { "num" : { "$maxKey" : 1 } }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 618, "step5 of 6" : 33, "step6 of 6" : 0 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:42 GMT-0500 (Eastern Standard Time) split test.foo { "num" : 1141 } -> { "num" : { "$maxKey" : 1 } } -->> ({ "num" : 1141 } -> { "num" : 1424 }),({ "num" : 1424 } -> { "num" : { "$maxKey" : 1 } }) ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:42 GMT-0500 (Eastern Standard Time) moveChunk.start test.foo { "min" : { "num" : 1424 }, "max" : { "num" : { "$maxKey" : 1 } }, "from" : "shard0000", "to" : "shard0001" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:43 GMT-0500 (Eastern Standard Time) moveChunk.to test.foo { "min" : { "num" : 1424 }, "max" : { "num" : { "$maxKey" : 1 } }, "step1 of 5" : 0, "step2 of 5" : 0, "step3 of 5" : 0, "step4 of 5" : 0, "step5 of 5" : 559 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:43 GMT-0500 (Eastern Standard Time) moveChunk.commit test.foo { "min" : { "num" : 1424 }, "max" : { "num" : { "$maxKey" : 1 } }, "from" : "shard0000", "to" : "shard0001" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:43 GMT-0500 (Eastern Standard Time) moveChunk.from test.foo { "min" : { "num" : 1424 }, "max" : { "num" : { "$maxKey" : 1 } }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 62, "step5 of 6" : 516, "step6 of 6" : 0 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:43 GMT-0500 (Eastern Standard Time) split test.foo { "num" : 1424 } -> { "num" : { "$maxKey" : 1 } } -->> ({ "num" : 1424 } -> { "num" : 1707 }),({ "num" : 1707 } -> { "num" : { "$maxKey" : 1 } }) ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:43 GMT-0500 (Eastern Standard Time) split test.foo { "num" : 1707 } -> { "num" : { "$maxKey" : 1 } } -->> ({ "num" : 1707 } -> { "num" : 1990 }),({ "num" : 1990 } -> { "num" : { "$maxKey" : 1 } }) ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:44 GMT-0500 (Eastern Standard Time) moveChunk.start test.foo { "min" : { "num" : { "$minKey" : 1 } }, "max" : { "num" : 0 }, "from" : "shard0001", "to" : "shard0000" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:45 GMT-0500 (Eastern Standard Time) moveChunk.to test.foo { "min" : { "num" : { "$minKey" : 1 } }, "max" : { "num" : 0 }, "step1 of 5" : 0, "step2 of 5" : 0, "step3 of 5" : 0, "step4 of 5" : 0, "step5 of 5" : 97 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:45 GMT-0500 (Eastern Standard Time) moveChunk.commit test.foo { "min" : { "num" : { "$minKey" : 1 } }, "max" : { "num" : 0 }, "from" : "shard0001", "to" : "shard0000" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:45 GMT-0500 (Eastern Standard Time) moveChunk.from test.foo { "min" : { "num" : { "$minKey" : 1 } }, "max" : { "num" : 0 }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 0, "step4 of 6" : 83, "step5 of 6" : 170, "step6 of 6" : 0 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:47 GMT-0500 (Eastern Standard Time) split test.foo { "num" : 1990 } -> { "num" : { "$maxKey" : 1 } } -->> ({ "num" : 1990 } -> { "num" : 2518 }),({ "num" : 2518 } -> { "num" : { "$maxKey" : 1 } }) ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:51 GMT-0500 (Eastern Standard Time) moveChunk.start 
test.foo { "min" : { "num" : 0 }, "max" : { "num" : 9 }, "from" : "shard0001", "to" : "shard0000" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:51 GMT-0500 (Eastern Standard Time) moveChunk.to test.foo { "min" : { "num" : 0 }, "max" : { "num" : 9 }, "step1 of 5" : 0, "step2 of 5" : 0, "step3 of 5" : 7, "step4 of 5" : 0, "step5 of 5" : 91 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:51 GMT-0500 (Eastern Standard Time) moveChunk.commit test.foo { "min" : { "num" : 0 }, "max" : { "num" : 9 }, "from" : "shard0001", "to" : "shard0000" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:51 GMT-0500 (Eastern Standard Time) moveChunk.from test.foo { "min" : { "num" : 0 }, "max" : { "num" : 9 }, "step1 of 6" : 0, "step2 of 6" : 2, "step3 of 6" : 0, "step4 of 6" : 84, "step5 of 6" : 371, "step6 of 6" : 0 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:52 GMT-0500 (Eastern Standard Time) moveChunk.start test.foo { "min" : { "num" : 9 }, "max" : { "num" : 292 }, "from" : "shard0001", "to" : "shard0000" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:55 GMT-0500 (Eastern Standard Time) moveChunk.to test.foo { "min" : { "num" : 9 }, "max" : { "num" : 292 }, "step1 of 5" : 0, "step2 of 5" : 0, "step3 of 5" : 327, "step4 of 5" : 0, "step5 of 5" : 2875 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:55 GMT-0500 (Eastern Standard Time) moveChunk.commit test.foo { "min" : { "num" : 9 }, "max" : { "num" : 292 }, "from" : "shard0001", "to" : "shard0000" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:55 GMT-0500 (Eastern Standard Time) moveChunk.from test.foo { "min" : { "num" : 9 }, "max" : { "num" : 292 }, "step1 of 6" : 0, "step2 of 6" : 2, "step3 of 6" : 1, "step4 of 6" : 3189, "step5 of 6" : 31, "step6 of 6" : 0 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:56 GMT-0500 (Eastern Standard Time) moveChunk.start test.foo { "min" : { "num" : 292 }, "max" : { "num" : 575 }, "from" : "shard0001", "to" : "shard0000" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:59 GMT-0500 (Eastern Standard Time) moveChunk.to test.foo { "min" : { "num" : 292 }, "max" : { "num" : 575 }, "step1 of 5" : 0, "step2 of 5" : 0, "step3 of 5" : 327, "step4 of 5" : 0, "step5 of 5" : 2876 } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:59 GMT-0500 (Eastern Standard Time) moveChunk.commit test.foo { "min" : { "num" : 292 }, "max" : { "num" : 575 }, "from" : "shard0001", "to" : "shard0000" } ShardingTest AMAZONA-DFVK11N Wed Feb 27 2013 02:00:59 GMT-0500 (Eastern Standard Time) moveChunk.from test.foo { "min" : { "num" : 292 }, "max" : { "num" : 575 }, "step1 of 6" : 0, "step2 of 6" : 2, "step3 of 6" : 0, "step4 of 6" : 3189, "step5 of 6" : 32, "step6 of 6" : 0 } missing: [ ] Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:01:04.889 [conn1] authenticate db: admin { authenticate: 1, nonce: "2160044e01c717d2", user: "admin", key: "bad672e43d7e52b1934548e831f33f1b" } m30001| Wed Feb 27 02:01:04.889 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:01:05.903 [conn1] authenticate db: admin { authenticate: 1, nonce: "771b000c735f43ee", user: "admin", key: "64ac863d49614acf0c484f8934cd290c" } m30001| Wed Feb 27 02:01:05.903 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: 
"auth fails" } m30998| Wed Feb 27 02:01:06.449 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:01:06.449 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30998:1361948421:41 ) m30998| Wed Feb 27 02:01:06.449 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41: m30998| { "state" : 1, m30998| "who" : "AMAZONA-DFVK11N:30998:1361948421:41:Balancer:18467", m30998| "process" : "AMAZONA-DFVK11N:30998:1361948421:41", m30998| "when" : { "$date" : "Wed Feb 27 02:01:06 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "512daf328fcf9d0e1dbd1e0a" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "512daf2c0c9ae827b8ef239d" } } m30998| Wed Feb 27 02:01:06.449 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' acquired, ts : 512daf328fcf9d0e1dbd1e0a m30998| Wed Feb 27 02:01:06.449 [Balancer] *** start balancing round m30998| Wed Feb 27 02:01:06.449 [Balancer] waitForDelete: 0 m30998| Wed Feb 27 02:01:06.449 [Balancer] secondaryThrottle: 1 m30998| Wed Feb 27 02:01:06.449 [Balancer] shard0001 has more chunks me:6 best: shard0000:5 m30998| Wed Feb 27 02:01:06.449 [Balancer] collection : test.foo m30998| Wed Feb 27 02:01:06.449 [Balancer] donor : shard0001 chunks on 6 m30998| Wed Feb 27 02:01:06.449 [Balancer] receiver : shard0000 chunks on 5 m30998| Wed Feb 27 02:01:06.449 [Balancer] threshold : 2 m30998| Wed Feb 27 02:01:06.449 [Balancer] no need to move any chunk m30998| Wed Feb 27 02:01:06.449 [Balancer] *** end of balancing round m30998| Wed Feb 27 02:01:06.449 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' unlocked. 
m30999| Wed Feb 27 02:01:06.761 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:01:06.761 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 ) m30999| Wed Feb 27 02:01:06.761 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41: m30999| { "state" : 1, m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:Balancer:41", m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41", m30999| "when" : { "$date" : "Wed Feb 27 02:01:06 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512daf320c9ae827b8ef239e" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512daf328fcf9d0e1dbd1e0a" } } m30999| Wed Feb 27 02:01:06.761 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf320c9ae827b8ef239e m30999| Wed Feb 27 02:01:06.761 [Balancer] *** start balancing round m30999| Wed Feb 27 02:01:06.761 [Balancer] waitForDelete: 0 m30999| Wed Feb 27 02:01:06.761 [Balancer] secondaryThrottle: 1 m30999| Wed Feb 27 02:01:06.761 [Balancer] shard0001 has more chunks me:6 best: shard0000:5 m30999| Wed Feb 27 02:01:06.761 [Balancer] collection : test.foo m30999| Wed Feb 27 02:01:06.761 [Balancer] donor : shard0001 chunks on 6 m30999| Wed Feb 27 02:01:06.761 [Balancer] receiver : shard0000 chunks on 5 m30999| Wed Feb 27 02:01:06.761 [Balancer] threshold : 2 m30999| Wed Feb 27 02:01:06.761 [Balancer] no need to move any chunk m30999| Wed Feb 27 02:01:06.761 [Balancer] *** end of balancing round m30999| Wed Feb 27 02:01:06.761 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' unlocked. Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:01:06.917 [conn1] authenticate db: admin { authenticate: 1, nonce: "6765359c6ac76ad6", user: "admin", key: "235fac4ad829a1432717c2cd64f91f57" } m30001| Wed Feb 27 02:01:06.917 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:01:07.931 [conn1] authenticate db: admin { authenticate: 1, nonce: "386d41199cf4f4af", user: "admin", key: "78197b62835e3ae3f6696d296182904d" } m30001| Wed Feb 27 02:01:07.931 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:01:08.945 [conn1] authenticate db: admin { authenticate: 1, nonce: "ed315e637429790b", user: "admin", key: "434a65838dfe05aa774d866d29a24829" } m30001| Wed Feb 27 02:01:08.945 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } Authenticating to admin database as admin with mechanism MONGODB-CR on connection: connection to localhost:30001 m30001| Wed Feb 27 02:01:09.959 [conn1] authenticate db: admin { authenticate: 1, nonce: "d40462310deffa6b", user: "admin", key: "2aad45e3a6dd1c069996b77c37b9e2da" } m30001| Wed Feb 27 02:01:09.959 [conn1] auth: couldn't find user admin@admin, admin.system.users Error: 18 { ok: 0.0, errmsg: "auth fails" } Caught exception while authenticating connection: "[Authenticating connection: connection to 
localhost:30001] timed out after 5000ms ( 6 tries )" checkpoint B.a ShardingTest test.foo-num_MinKey 4000|0 { "num" : { "$minKey" : 1 } } -> { "num" : 0 } shard0000 test.foo test.foo-num_0.0 5000|0 { "num" : 0 } -> { "num" : 9 } shard0000 test.foo test.foo-num_9.0 6000|0 { "num" : 9 } -> { "num" : 292 } shard0000 test.foo test.foo-num_292.0 7000|0 { "num" : 292 } -> { "num" : 575 } shard0000 test.foo test.foo-num_575.0 7000|1 { "num" : 575 } -> { "num" : 858 } shard0001 test.foo test.foo-num_858.0 1000|11 { "num" : 858 } -> { "num" : 1141 } shard0001 test.foo test.foo-num_1141.0 3000|1 { "num" : 1141 } -> { "num" : 1424 } shard0000 test.foo test.foo-num_1424.0 3000|2 { "num" : 1424 } -> { "num" : 1707 } shard0001 test.foo test.foo-num_1707.0 3000|4 { "num" : 1707 } -> { "num" : 1990 } shard0001 test.foo test.foo-num_1990.0 4000|2 { "num" : 1990 } -> { "num" : 2518 } shard0001 test.foo test.foo-num_2518.0 4000|3 { "num" : 2518 } -> { "num" : { "$maxKey" : 1 } } shard0001 test.foo m30998| Wed Feb 27 02:01:12.455 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:01:12.455 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30998:1361948421:41 ) m30998| Wed Feb 27 02:01:12.455 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41: m30998| { "state" : 1, m30998| "who" : "AMAZONA-DFVK11N:30998:1361948421:41:Balancer:18467", m30998| "process" : "AMAZONA-DFVK11N:30998:1361948421:41", m30998| "when" : { "$date" : "Wed Feb 27 02:01:12 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "512daf388fcf9d0e1dbd1e0b" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "512daf320c9ae827b8ef239e" } } m30998| Wed Feb 27 02:01:12.455 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' acquired, ts : 512daf388fcf9d0e1dbd1e0b m30998| Wed Feb 27 02:01:12.455 [Balancer] *** start balancing round m30998| Wed Feb 27 02:01:12.455 [Balancer] waitForDelete: 0 m30998| Wed Feb 27 02:01:12.455 [Balancer] secondaryThrottle: 1 m30998| Wed Feb 27 02:01:12.455 [Balancer] shard0001 has more chunks me:6 best: shard0000:5 m30998| Wed Feb 27 02:01:12.455 [Balancer] collection : test.foo m30998| Wed Feb 27 02:01:12.455 [Balancer] donor : shard0001 chunks on 6 m30998| Wed Feb 27 02:01:12.455 [Balancer] receiver : shard0000 chunks on 5 m30998| Wed Feb 27 02:01:12.455 [Balancer] threshold : 2 m30998| Wed Feb 27 02:01:12.455 [Balancer] no need to move any chunk m30998| Wed Feb 27 02:01:12.455 [Balancer] *** end of balancing round m30998| Wed Feb 27 02:01:12.455 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30998:1361948421:41' unlocked. m30999| Wed Feb 27 02:01:12.767 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:01:12.767 [Balancer] creating new connection to:localhost:30001 m30999| Wed Feb 27 02:01:12.767 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:01:12.767 [Balancer] connected connection! 
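The per-chunk listing above (chunk id, version, bounds, owning shard) is the ShardingTest printout of config.chunks. A minimal shell sketch that produces an equivalent listing, assuming a mongos connection; the formatting is illustrative:

    db.getSiblingDB("config").chunks.find({ ns: "test.foo" }).sort({ min: 1 }).forEach(function(c) {
        // chunk id, version, bounds, and the shard that currently owns it
        print(c._id + "  " + tojson(c.lastmod) + "  " + tojson(c.min) + " -> " + tojson(c.max) + "  " + c.shard);
    });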
m30001| Wed Feb 27 02:01:12.767 [initandlisten] connection accepted from 127.0.0.1:60865 #9 (9 connections now open) m30001| Wed Feb 27 02:01:12.767 [conn9] authenticate db: local { authenticate: 1, nonce: "a65c68c30ed345b7", user: "__system", key: "a0dbcad25c14011007a50ad1d3790cde" } m30999| Wed Feb 27 02:01:12.767 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : AMAZONA-DFVK11N:30999:1361948420:41 ) m30999| Wed Feb 27 02:01:12.767 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41: m30999| { "state" : 1, m30999| "who" : "AMAZONA-DFVK11N:30999:1361948420:41:Balancer:41", m30999| "process" : "AMAZONA-DFVK11N:30999:1361948420:41", m30999| "when" : { "$date" : "Wed Feb 27 02:01:12 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512daf380c9ae827b8ef239f" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512daf388fcf9d0e1dbd1e0b" } } m30999| Wed Feb 27 02:01:12.767 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' acquired, ts : 512daf380c9ae827b8ef239f m30999| Wed Feb 27 02:01:12.767 [Balancer] *** start balancing round m30999| Wed Feb 27 02:01:12.767 [Balancer] waitForDelete: 0 m30999| Wed Feb 27 02:01:12.767 [Balancer] secondaryThrottle: 1 m30999| Wed Feb 27 02:01:12.783 [Balancer] shard0001 has more chunks me:6 best: shard0000:5 m30999| Wed Feb 27 02:01:12.783 [Balancer] collection : test.foo m30999| Wed Feb 27 02:01:12.783 [Balancer] donor : shard0001 chunks on 6 m30999| Wed Feb 27 02:01:12.783 [Balancer] receiver : shard0000 chunks on 5 m30999| Wed Feb 27 02:01:12.783 [Balancer] threshold : 2 m30999| Wed Feb 27 02:01:12.783 [Balancer] no need to move any chunk m30999| Wed Feb 27 02:01:12.783 [Balancer] *** end of balancing round m30999| Wed Feb 27 02:01:12.783 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1361948420:41' unlocked. checkpoint C m30001| Wed Feb 27 02:01:13.656 [conn4] info DFM::findAll(): extent 0:4000 was empty, skipping ahead. ns:test.foo m30001| Wed Feb 27 02:01:14.234 [conn4] info DFM::findAll(): extent 0:4000 was empty, skipping ahead. ns:test.foo m30001| Wed Feb 27 02:01:14.826 [conn4] info DFM::findAll(): extent 0:4000 was empty, skipping ahead. ns:test.foo checkpoint D m30999| Wed Feb 27 02:01:14.967 [conn1] couldn't find database [test2] in config db m30999| Wed Feb 27 02:01:14.967 [conn1] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 320 writeLock: 0 version: 2.4.0-rc2-pre- m30999| Wed Feb 27 02:01:14.967 [conn1] put [test2] on: shard0000:localhost:30000 m30000| Wed Feb 27 02:01:14.967 [FileAllocator] allocating new datafile /data/db/auto20\test2.ns, filling with zeroes... m30000| Wed Feb 27 02:01:15.014 [FileAllocator] done allocating datafile /data/db/auto20\test2.ns, size: 16MB, took 0.048 secs m30000| Wed Feb 27 02:01:15.014 [FileAllocator] allocating new datafile /data/db/auto20\test2.0, filling with zeroes... m30000| Wed Feb 27 02:01:15.216 [FileAllocator] done allocating datafile /data/db/auto20\test2.0, size: 64MB, took 0.194 secs m30000| Wed Feb 27 02:01:15.216 [FileAllocator] allocating new datafile /data/db/auto20\test2.1, filling with zeroes... m30000| Wed Feb 27 02:01:15.216 [conn9] build index test2.foobar { _id: 1 } m30000| Wed Feb 27 02:01:15.216 [conn9] build index done. scanned 0 total records. 
0.001 secs m30000| Wed Feb 27 02:01:15.216 [conn9] update test2.foobar query: { _id: 0.0 } update: { _id: 0.0 } nscanned:0 nupdated:1 upsert:1 keyUpdates:0 locks(micros) w:247126 246ms m30000| Wed Feb 27 02:01:15.606 [FileAllocator] done allocating datafile /data/db/auto20\test2.1, size: 128MB, took 0.395 secs m30999| Wed Feb 27 02:01:20.957 [LockPinger] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:01:20.957 BackgroundJob starting: ConnectBG m30000| Wed Feb 27 02:01:20.957 [initandlisten] connection accepted from 127.0.0.1:60868 #21 (21 connections now open) m30999| Wed Feb 27 02:01:20.973 [LockPinger] connected connection! m30998| Wed Feb 27 02:01:21.176 [CheckConfigServers] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:01:21.176 BackgroundJob starting: ConnectBG m30000| Wed Feb 27 02:01:21.176 [initandlisten] connection accepted from 127.0.0.1:60869 #22 (22 connections now open) m30998| Wed Feb 27 02:01:21.191 [CheckConfigServers] connected connection! m30998| Wed Feb 27 02:01:21.191 [LockPinger] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:01:21.191 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:01:21.191 [LockPinger] connected connection! m30000| Wed Feb 27 02:01:21.191 [initandlisten] connection accepted from 127.0.0.1:60870 #23 (23 connections now open) m30000| Wed Feb 27 02:01:25.809 [conn23] authenticate db: local { authenticate: 1, nonce: "468b517a3cfb4030", user: "__system", key: "fc4e22c2a5acd77db9e13988deec5fa5" } m30000| Wed Feb 27 02:01:25.809 [conn22] authenticate db: local { authenticate: 1, nonce: "cea88d7858cfc8f5", user: "__system", key: "a26d31d42f8bd988e1d67e3e98c7d3d2" } m30000| Wed Feb 27 02:01:25.809 [conn21] authenticate db: local { authenticate: 1, nonce: "a30b19dffbb5d8f5", user: "__system", key: "5c52733593bc177fdb6ce1ee80ab17ab" } Waiting for active hosts... Waiting for the balancer lock... Waiting again for active hosts after balancer is off... 
m30998| Wed Feb 27 02:01:25.809 [Balancer] Refreshing MaxChunkSize: 50 checkpoint E { "hosts" : { "localhost:30000::0" : { "available" : 1, "created" : 2 }, "localhost:30000::30" : { "available" : 0, "created" : 4 }, "localhost:30001::0" : { "available" : 2, "created" : 3 } }, "replicaSets" : { }, "createdByType" : { "master" : 9 }, "totalAvailable" : 3, "totalCreated" : 9, "numDBClientConnection" : 13, "numAScopedConnection" : 6, "ok" : 1 } assert: 0 is not less than 0 : pool: localhost:30000::30 Error: Printing Stack Trace at printStackTrace (src/mongo/shell/utils.js:37:7) at doassert (src/mongo/shell/assert.js:6:1) at Function.assert.lt (src/mongo/shell/assert.js:179:1) at D:\slave\Windows_64bit_2008+_Weekly_Slow_Tests\mongo\jstests\sharding\auto2.js:128:12 Wed Feb 27 02:01:25.809 JavaScript execution failed: 0 is not less than 0 : pool: localhost:30000::30 at src/mongo/shell/assert.js:L7 failed to load: D:\slave\Windows_64bit_2008+_Weekly_Slow_Tests\mongo\jstests\sharding\auto2.js m30998| Wed Feb 27 02:01:25.809 [Balancer] skipping balancing round because balancing is disabled m30999| Wed Feb 27 02:01:25.809 [LockPinger] cluster localhost:30000 pinged successfully at Wed Feb 27 02:01:25 2013 by distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30999:1361948420:41', sleeping for 30000ms m30998| Wed Feb 27 02:01:25.809 [LockPinger] cluster localhost:30000 pinged successfully at Wed Feb 27 02:01:25 2013 by distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30998:1361948421:41', sleeping for 30000ms m30999| Wed Feb 27 02:01:25.809 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:01:25.809 [Balancer] skipping balancing round because balancing is disabled m30000| Wed Feb 27 02:01:25.824 [initandlisten] connection accepted from 127.0.0.1:60873 #24 (24 connections now open) m30000| Wed Feb 27 02:01:25.840 [conn24] command denied: { shutdown: 1, force: 1 } m30000| Wed Feb 27 02:01:25.840 [conn24] end connection 127.0.0.1:60873 (23 connections now open) m30999| Wed Feb 27 02:01:31.815 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:01:31.815 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:01:31.815 [Balancer] skipping balancing round because balancing is disabled m30998| Wed Feb 27 02:01:31.815 [Balancer] skipping balancing round because balancing is disabled m30998| Wed Feb 27 02:01:37.821 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:01:37.821 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:01:37.821 [Balancer] skipping balancing round because balancing is disabled m30999| Wed Feb 27 02:01:37.821 [Balancer] skipping balancing round because balancing is disabled m30999| Wed Feb 27 02:01:43.827 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:01:43.827 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:01:43.827 [Balancer] skipping balancing round because balancing is disabled m30998| Wed Feb 27 02:01:43.827 [Balancer] skipping balancing round because balancing is disabled m30999| Wed Feb 27 02:01:49.833 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:01:49.833 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:01:49.833 [Balancer] skipping balancing round because balancing is disabled m30998| Wed Feb 27 02:01:49.833 [Balancer] skipping balancing round because balancing is disabled m30999| Wed Feb 27 02:01:55.823 [LockPinger] cluster localhost:30000 pinged successfully at Wed Feb 27 02:01:55 2013 by distributed lock pinger 
'localhost:30000/AMAZONA-DFVK11N:30999:1361948420:41', sleeping for 30000ms m30998| Wed Feb 27 02:01:55.823 [LockPinger] cluster localhost:30000 pinged successfully at Wed Feb 27 02:01:55 2013 by distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30998:1361948421:41', sleeping for 30000ms m30998| Wed Feb 27 02:01:55.839 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:01:55.839 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:01:55.839 [Balancer] skipping balancing round because balancing is disabled m30999| Wed Feb 27 02:01:55.839 [Balancer] skipping balancing round because balancing is disabled m30999| Wed Feb 27 02:02:01.845 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:02:01.845 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:02:01.845 [Balancer] skipping balancing round because balancing is disabled m30998| Wed Feb 27 02:02:01.845 [Balancer] skipping balancing round because balancing is disabled m30999| Wed Feb 27 02:02:07.851 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:02:07.851 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:02:07.851 [Balancer] skipping balancing round because balancing is disabled m30998| Wed Feb 27 02:02:07.851 [Balancer] skipping balancing round because balancing is disabled m30998| Wed Feb 27 02:02:13.857 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:02:13.857 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:02:13.857 [Balancer] skipping balancing round because balancing is disabled m30999| Wed Feb 27 02:02:13.857 [Balancer] skipping balancing round because balancing is disabled m30998| Wed Feb 27 02:02:19.863 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:02:19.863 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:02:19.863 [Balancer] skipping balancing round because balancing is disabled m30999| Wed Feb 27 02:02:19.863 [Balancer] skipping balancing round because balancing is disabled m30998| Wed Feb 27 02:02:25.838 [LockPinger] cluster localhost:30000 pinged successfully at Wed Feb 27 02:02:25 2013 by distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30998:1361948421:41', sleeping for 30000ms m30999| Wed Feb 27 02:02:25.838 [LockPinger] cluster localhost:30000 pinged successfully at Wed Feb 27 02:02:25 2013 by distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30999:1361948420:41', sleeping for 30000ms m30998| Wed Feb 27 02:02:25.869 [Balancer] Refreshing MaxChunkSize: 50 m30999| Wed Feb 27 02:02:25.869 [Balancer] Refreshing MaxChunkSize: 50 m30998| Wed Feb 27 02:02:25.869 [Balancer] skipping balancing round because balancing is disabled m30999| Wed Feb 27 02:02:25.869 [Balancer] skipping balancing round because balancing is disabled Wed Feb 27 02:02:26.680 Wed Feb 27 02:02:26 process on port 30000, with pid 8448 not terminated, sending sigkill m30999| Wed Feb 27 02:02:26.680 [WriteBackListener-localhost:30000] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 
127.0.0.1:30000 m30001| Wed Feb 27 02:02:26.680 [conn6] end connection 127.0.0.1:60825 (8 connections now open) m30001| Wed Feb 27 02:02:26.680 [conn7] end connection 127.0.0.1:60832 (8 connections now open) m30999| Wed Feb 27 02:02:26.680 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [1] server [127.0.0.1:30000] m30999| Wed Feb 27 02:02:26.680 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed m30999| Wed Feb 27 02:02:26.680 [WriteBackListener-localhost:30000] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('512daf040c9ae827b8ef2393') } m30999| Wed Feb 27 02:02:26.680 [WriteBackListener-localhost:30000] Detected bad connection created at 1361948418934203 microSec, clearing pool for localhost:30000 m30999| Wed Feb 27 02:02:26.680 [WriteBackListener-localhost:30000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('512daf040c9ae827b8ef2393') } m30999| Wed Feb 27 02:02:27.694 [WriteBackListener-localhost:30000] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:30000 m30999| Wed Feb 27 02:02:27.694 [WriteBackListener-localhost:30000] Detected bad connection created at 1361948480973219 microSec, clearing pool for localhost:30000 m30999| Wed Feb 27 02:02:27.694 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [SEND_ERROR] for 127.0.0.1:30000 m30999| Wed Feb 27 02:02:29.706 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:29.706 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:30.720 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30001| Wed Feb 27 02:02:31.703 [initandlisten] connection accepted from 127.0.0.1:60880 #10 (8 connections now open) m30001| Wed Feb 27 02:02:31.719 [conn10] terminating, shutdown command received m30001| Wed Feb 27 02:02:31.719 dbexit: shutdown called m30001| Wed Feb 27 02:02:31.719 [conn10] shutdown: going to close listening sockets... m30001| Wed Feb 27 02:02:31.719 [conn10] closing listening socket: 416 m30001| Wed Feb 27 02:02:31.719 [conn10] closing listening socket: 424 m30001| Wed Feb 27 02:02:31.719 [conn10] shutdown: going to flush diaglog... m30001| Wed Feb 27 02:02:31.719 [conn10] shutdown: going to close sockets... m30001| Wed Feb 27 02:02:31.719 [conn10] shutdown: waiting for fs preallocator... m30001| Wed Feb 27 02:02:31.719 [conn10] shutdown: lock for final commit... m30001| Wed Feb 27 02:02:31.719 [conn10] shutdown: final commit... 
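The "assert: 0 is not less than 0 : pool: localhost:30000::30" failure above is an assert.lt over the mongos connection-pool statistics printed at checkpoint E (pool "localhost:30000::30" reports 0 available connections against 4 created). A minimal sketch of that kind of check, run against a mongos; the loop and message format are illustrative rather than the literal auto2.js assertion:

    // connPoolStats on a mongos reports per-host pools plus totalAvailable/totalCreated.
    var stats = db.adminCommand({ connPoolStats: 1 });
    for (var host in stats.hosts) {
        // Expect every per-host pool to still have at least one available connection.
        assert.lt(0, stats.hosts[host].available, "pool: " + host);
    }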
m30001| Wed Feb 27 02:02:31.719 [conn1] end connection 127.0.0.1:60784 (7 connections now open) m30001| Wed Feb 27 02:02:31.719 [conn5] end connection 127.0.0.1:60820 (7 connections now open) m30001| Wed Feb 27 02:02:31.719 [conn4] end connection 127.0.0.1:60818 (7 connections now open) m30999| Wed Feb 27 02:02:31.719 [WriteBackListener-localhost:30001] SocketException: remote: 127.0.0.1:30001 error: 9001 socket exception [0] server [127.0.0.1:30001] m30999| Wed Feb 27 02:02:31.719 [WriteBackListener-localhost:30001] DBClientCursor::init call() failed m30999| Wed Feb 27 02:02:31.719 [WriteBackListener-localhost:30001] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30001 ns: admin.$cmd query: { writebacklisten: ObjectId('512daf040c9ae827b8ef2393') } m30999| Wed Feb 27 02:02:31.719 [WriteBackListener-localhost:30001] Detected bad connection created at 1361948433822084 microSec, clearing pool for localhost:30001 Wed Feb 27 02:02:31.719 DBClientCursor::init call() failed m30001| Wed Feb 27 02:02:31.719 [conn3] end connection 127.0.0.1:60814 (7 connections now open) m30001| Wed Feb 27 02:02:31.719 [conn8] end connection 127.0.0.1:60839 (4 connections now open) m30999| Wed Feb 27 02:02:31.719 [WriteBackListener-localhost:30001] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30001 ns: admin.$cmd query: { writebacklisten: ObjectId('512daf040c9ae827b8ef2393') } m30001| Wed Feb 27 02:02:31.719 [conn9] end connection 127.0.0.1:60865 (2 connections now open) m30001| Wed Feb 27 02:02:31.734 [conn10] shutdown: closing all files... m30001| Wed Feb 27 02:02:31.750 [conn10] closeAllFiles() finished m30001| Wed Feb 27 02:02:31.750 [conn10] journalCleanup... m30001| Wed Feb 27 02:02:31.750 [conn10] removeJournalFiles m30001| Wed Feb 27 02:02:31.750 [conn10] shutdown: removing fs lock... m30001| Wed Feb 27 02:02:31.750 dbexit: really exiting now m30999| Wed Feb 27 02:02:31.875 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:02:31.875 [Balancer] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 
127.0.0.1:30000 m30999| Wed Feb 27 02:02:31.875 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:02:31.875 [Balancer] Detected bad connection created at 1361948481191619 microSec, clearing pool for localhost:30000 m30998| Wed Feb 27 02:02:31.875 [Balancer] caught exception while doing balance: socket exception [SEND_ERROR] for 127.0.0.1:30000 m30998| Wed Feb 27 02:02:31.875 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:02:32.733 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:32.733 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:02:32.733 [mongosMain] connection accepted from 127.0.0.1:60884 #2 (2 connections now open) m30998| Wed Feb 27 02:02:32.733 [conn2] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:02:32.733 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:32.889 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:02:32.889 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:02:33.731 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:33.731 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:33.747 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:02:33.747 [conn2] SocketException handling request, closing client connection: 11002 socket exception [6] server [localhost:30000] mongos connectionpool error: couldn't connect to server localhost:30000 Wed Feb 27 02:02:33.747 Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:30998 Wed Feb 27 02:02:33.747 SocketException: remote: 127.0.0.1:30998 error: 9001 socket exception [1] server [127.0.0.1:30998] Wed Feb 27 02:02:33.747 DBClientCursor::init call() failed m30999| Wed Feb 27 02:02:34.761 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:02:35.759 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:35.759 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:36.789 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:02:37.881 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:02:37.881 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:38.770 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:38.770 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:38.895 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:02:38.895 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:02:38.895 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:02:38.895 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:39.800 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:39.800 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:02:39.800 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:39.909 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for 
localhost:30000 m30999| Wed Feb 27 02:02:39.909 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:02:40.829 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:02:44.807 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:44.807 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:44.838 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:44.838 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:02:44.901 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:02:44.901 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:45.806 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:02:45.868 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:02:45.915 [Balancer] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:45.915 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:02:45.930 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:02:45.930 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:02:46.944 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:02:46.944 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:02:50.876 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:50.876 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:51.812 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:51.812 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:51.874 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:02:51.936 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:02:51.936 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:52.841 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:02:52.950 [Balancer] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:52.950 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:02:52.966 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:02:52.966 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:02:53.980 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:02:53.980 [Balancer] *** End of balancing round m30998| Wed Feb 27 02:02:55.852 [LockPinger] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:55.852 [LockPinger] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:55.852 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:02:55.852 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:56.866 [LockPinger] warning: distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30999:1361948420:41' detected an exception while pinging. 
:: caused by :: socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:02:56.882 [LockPinger] warning: distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30998:1361948421:41' detected an exception while pinging. :: caused by :: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:02:57.880 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:57.880 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:58.910 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:02:58.972 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:02:58.972 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:59.846 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:02:59.846 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:02:59.986 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:02:59.986 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:02:59.986 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:02:59.986 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:00.860 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:01.000 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:01.000 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:03:05.914 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:05.914 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:03:05.992 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:03:05.992 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:06.928 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:07.006 [Balancer] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:07.006 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:03:07.022 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:03:07.022 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:03:08.020 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:08.020 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:03:08.862 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:08.862 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:09.876 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:03:13.028 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:03:13.028 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:14.026 [Balancer] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:14.026 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:03:14.057 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:03:14.057 [Balancer] *** End of balancing round 
m30999| Wed Feb 27 02:03:14.931 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:14.931 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:15.040 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:15.040 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:03:15.945 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:18.878 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:18.878 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:19.907 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:03:20.063 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:03:20.063 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:21.046 [Balancer] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:21.046 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:03:21.077 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:03:21.077 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:03:22.060 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:22.060 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:03:24.946 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:24.946 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:03:25.835 [CheckConfigServers] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:25.835 [CheckConfigServers] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:03:25.835 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:25.835 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:25.960 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:03:26.834 [CheckConfigServers] warning: couldn't check on config server:localhost:30000 ok for now : 11002 socket exception [6] server [localhost:30000] mongos connectionpool error: couldn't connect to server localhost:30000 m30999| Wed Feb 27 02:03:26.849 [CheckConfigServers] warning: couldn't check on config server:localhost:30000 ok for now : 11002 socket exception [6] server [localhost:30000] mongos connectionpool error: couldn't connect to server localhost:30000 m30999| Wed Feb 27 02:03:26.880 [LockPinger] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:26.880 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:03:26.896 [LockPinger] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:03:26.896 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:03:27.083 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:03:27.083 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:03:27.894 [LockPinger] warning: distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30998:1361948421:41' detected an exception while pinging. 
:: caused by :: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:27.910 [LockPinger] warning: distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30999:1361948420:41' detected an exception while pinging. :: caused by :: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:28.066 [Balancer] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:28.066 BackgroundJob starting: ConnectBG m30998| Wed Feb 27 02:03:28.082 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:03:28.082 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:03:29.096 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:29.096 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:03:29.922 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:29.922 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:30.936 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30998| Wed Feb 27 02:03:34.088 [Balancer] creating new connection to:localhost:30000 m30998| Wed Feb 27 02:03:34.088 BackgroundJob starting: ConnectBG Wed Feb 27 02:03:34.587 Wed Feb 27 02:03:34 process on port 30998, with pid 8668 not terminated, sending sigkill m30999| Wed Feb 27 02:03:35.102 [Balancer] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:35.102 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:35.975 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:35.975 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:36.131 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:36.131 [Balancer] *** End of balancing round m30999| Wed Feb 27 02:03:36.989 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000 m30999| Wed Feb 27 02:03:39.610 [mongosMain] connection accepted from 127.0.0.1:60939 #2 (2 connections now open) m30999| Wed Feb 27 02:03:39.610 [conn2] creating new connection to:localhost:30000 m30999| Wed Feb 27 02:03:39.610 BackgroundJob starting: ConnectBG m30999| Wed Feb 27 02:03:40.640 [conn2] SocketException handling request, closing client connection: 11002 socket exception [6] server [localhost:30000] mongos connectionpool error: couldn't connect to server localhost:30000 Wed Feb 27 02:03:40.640 Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 
127.0.0.1:30999
 Wed Feb 27 02:03:40.640 SocketException: remote: 127.0.0.1:30999 error: 9001 socket exception [1] server [127.0.0.1:30999] 
 Wed Feb 27 02:03:40.640 DBClientCursor::init call() failed
m30999| Wed Feb 27 02:03:41.950 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:41.950 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:42.137 [Balancer] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:42.137 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:42.964 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:43.151 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:43.151 [Balancer] *** End of balancing round
m30999| Wed Feb 27 02:03:43.978 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:43.978 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:45.008 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:47.020 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:47.020 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:48.003 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:48.003 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:48.050 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:49.017 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:49.157 [Balancer] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:49.157 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:50.031 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:50.031 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:50.171 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:50.171 [Balancer] *** End of balancing round
m30999| Wed Feb 27 02:03:51.060 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:51.060 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:51.060 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:52.090 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:53.073 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:53.073 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:54.087 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:56.099 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:56.099 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:56.177 [Balancer] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:56.177 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:57.098 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:57.098 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:57.113 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:57.191 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:57.191 [Balancer] *** End of balancing round
m30999| Wed Feb 27 02:03:57.924 [LockPinger] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:03:57.924 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:03:58.112 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:03:58.938 [LockPinger] warning: distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30999:1361948420:41' detected an exception while pinging. :: caused by :: socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:02.121 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:02.121 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:02.121 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:02.121 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:03.150 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:03.150 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:03.197 [Balancer] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:03.197 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:04.196 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:04.196 [Balancer] *** End of balancing round
m30999| Wed Feb 27 02:04:08.158 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:08.158 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:09.156 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:09.156 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:09.188 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:10.170 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:10.202 [Balancer] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:10.202 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:11.231 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:11.231 [Balancer] *** End of balancing round
m30999| Wed Feb 27 02:04:15.194 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:15.194 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:16.208 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:17.175 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:17.175 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:17.237 [Balancer] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:17.237 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:18.189 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:18.251 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:18.251 [Balancer] *** End of balancing round
m30999| Wed Feb 27 02:04:23.212 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:23.212 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:24.242 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:24.257 [Balancer] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:24.257 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:25.271 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:25.271 [Balancer] *** End of balancing round
m30999| Wed Feb 27 02:04:26.192 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:26.192 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:26.863 [CheckConfigServers] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:26.863 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:27.221 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:27.877 [CheckConfigServers] warning: couldn't check on config server:localhost:30000 ok for now : 11002 socket exception [6] server [localhost:30000] mongos connectionpool error: couldn't connect to server localhost:30000
m30999| Wed Feb 27 02:04:28.953 [LockPinger] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:28.953 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:29.967 [LockPinger] warning: distributed lock pinger 'localhost:30000/AMAZONA-DFVK11N:30999:1361948420:41' detected an exception while pinging. :: caused by :: socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:31.277 [Balancer] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:31.277 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:32.245 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:32.245 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:32.291 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:32.291 [Balancer] *** End of balancing round
m30999| Wed Feb 27 02:04:33.274 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:36.223 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:36.223 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:37.237 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:38.297 [Balancer] creating new connection to:localhost:30000
m30999| Wed Feb 27 02:04:38.297 BackgroundJob starting: ConnectBG
m30999| Wed Feb 27 02:04:39.311 [Balancer] caught exception while doing balance: socket exception [CONNECT_ERROR] for localhost:30000
m30999| Wed Feb 27 02:04:39.311 [Balancer] *** End of balancing round
Wed Feb 27 02:04:41.480 Wed Feb 27 02:04:41 process on port 30999, with pid 3804 not terminated, sending sigkill