MongoDB shell version: 2.3.2-pre-
null
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31100, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "remove2-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "remove2", "shard" : 0, "node" : 0, "set" : "remove2-rs0" }, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs0-0'
Wed Dec 12 22:23:36.666 shell: started program mongod.exe --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet remove2-rs0 --dbpath /data/db/remove2-rs0-0 --setParameter enableTestCommands=1
m31100| note: noprealloc may hurt performance in many applications
m31100| Wed Dec 12 22:23:36.697 [initandlisten] MongoDB starting : pid=4440 port=31100 dbpath=/data/db/remove2-rs0-0 64-bit host=AMAZONA-DFVK11N
m31100| Wed Dec 12 22:23:36.697 [initandlisten]
m31100| Wed Dec 12 22:23:36.697 [initandlisten] ** NOTE: This is a development version (2.3.2-pre-) of MongoDB.
m31100| Wed Dec 12 22:23:36.697 [initandlisten] ** Not recommended for production.
m31100| Wed Dec 12 22:23:36.697 [initandlisten]
m31100| Wed Dec 12 22:23:36.697 [initandlisten] db version v2.3.2-pre-, pdfile version 4.5
m31100| Wed Dec 12 22:23:36.697 [initandlisten] git version: 725f626aae2c2701ded3c0f97e7b5aa4c0b65979
m31100| Wed Dec 12 22:23:36.697 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
m31100| Wed Dec 12 22:23:36.697 [initandlisten] options: { dbpath: "/data/db/remove2-rs0-0", noprealloc: true, oplogSize: 40, port: 31100, replSet: "remove2-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31100| Wed Dec 12 22:23:36.712 [initandlisten] journal dir=/data/db/remove2-rs0-0\journal
m31100| Wed Dec 12 22:23:36.712 [initandlisten] recover : no journal files present, no recovery needed
m31100| Wed Dec 12 22:23:36.837 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0\local.ns, filling with zeroes...
m31100| Wed Dec 12 22:23:36.837 [FileAllocator] creating directory /data/db/remove2-rs0-0\_tmp
m31100| Wed Dec 12 22:23:36.884 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0\local.ns, size: 16MB, took 0.046 secs
m31100| Wed Dec 12 22:23:36.884 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0\local.0, filling with zeroes...
m31100| Wed Dec 12 22:23:36.931 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0\local.0, size: 16MB, took 0.048 secs
m31100| Wed Dec 12 22:23:36.931 [initandlisten] waiting for connections on port 31100
m31100| Wed Dec 12 22:23:36.931 [websvr] admin web console waiting for connections on port 32100
m31100| Wed Dec 12 22:23:36.946 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Wed Dec 12 22:23:36.946 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31100| Wed Dec 12 22:23:37.180 [initandlisten] connection accepted from 127.0.0.1:64454 #1 (1 connection now open)
[ connection to AMAZONA-DFVK11N:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31101, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "remove2-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "remove2", "shard" : 0, "node" : 1, "set" : "remove2-rs0" }, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs0-1'
Wed Dec 12 22:23:37.196 shell: started program mongod.exe --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet remove2-rs0 --dbpath /data/db/remove2-rs0-1 --setParameter enableTestCommands=1
m31101| note: noprealloc may hurt performance in many applications
m31101| Wed Dec 12 22:23:37.227 [initandlisten] MongoDB starting : pid=4864 port=31101 dbpath=/data/db/remove2-rs0-1 64-bit host=AMAZONA-DFVK11N
m31101| Wed Dec 12 22:23:37.227 [initandlisten]
m31101| Wed Dec 12 22:23:37.227 [initandlisten] ** NOTE: This is a development version (2.3.2-pre-) of MongoDB.
m31101| Wed Dec 12 22:23:37.227 [initandlisten] ** Not recommended for production.
m31101| Wed Dec 12 22:23:37.227 [initandlisten]
m31101| Wed Dec 12 22:23:37.227 [initandlisten] db version v2.3.2-pre-, pdfile version 4.5
m31101| Wed Dec 12 22:23:37.227 [initandlisten] git version: 725f626aae2c2701ded3c0f97e7b5aa4c0b65979
m31101| Wed Dec 12 22:23:37.227 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
m31101| Wed Dec 12 22:23:37.227 [initandlisten] options: { dbpath: "/data/db/remove2-rs0-1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "remove2-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31101| Wed Dec 12 22:23:37.243 [initandlisten] journal dir=/data/db/remove2-rs0-1\journal
m31101| Wed Dec 12 22:23:37.243 [initandlisten] recover : no journal files present, no recovery needed
m31101| Wed Dec 12 22:23:37.368 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1\local.ns, filling with zeroes...
m31101| Wed Dec 12 22:23:37.368 [FileAllocator] creating directory /data/db/remove2-rs0-1\_tmp
m31101| Wed Dec 12 22:23:37.414 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1\local.ns, size: 16MB, took 0.047 secs
m31101| Wed Dec 12 22:23:37.414 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1\local.0, filling with zeroes...
m31101| Wed Dec 12 22:23:37.461 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1\local.0, size: 16MB, took 0.048 secs
m31101| Wed Dec 12 22:23:37.477 [websvr] admin web console waiting for connections on port 32101
m31101| Wed Dec 12 22:23:37.477 [initandlisten] waiting for connections on port 31101
m31101| Wed Dec 12 22:23:37.477 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Wed Dec 12 22:23:37.477 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31101| Wed Dec 12 22:23:37.711 [initandlisten] connection accepted from 127.0.0.1:64455 #1 (1 connection now open)
[ connection to AMAZONA-DFVK11N:31100, connection to AMAZONA-DFVK11N:31101 ]
{ "replSetInitiate" : { "_id" : "remove2-rs0", "members" : [ { "_id" : 0, "host" : "AMAZONA-DFVK11N:31100" }, { "_id" : 1, "host" : "AMAZONA-DFVK11N:31101" } ] } }
m31100| Wed Dec 12 22:23:37.711 [conn1] replSet replSetInitiate admin command received from client
m31100| Wed Dec 12 22:23:37.711 [conn1] replSet replSetInitiate config object parses ok, 2 members specified
m31100| Wed Dec 12 22:23:37.711 [initandlisten] connection accepted from 10.28.45.224:64456 #2 (2 connections now open)
m31100| Wed Dec 12 22:23:37.711 [conn2] end connection 10.28.45.224:64456 (1 connection now open)
m31101| Wed Dec 12 22:23:37.711 [initandlisten] connection accepted from 10.28.45.224:64457 #2 (2 connections now open)
m31100| Wed Dec 12 22:23:37.711 [conn1] replSet replSetInitiate all members seem up
m31100| Wed Dec 12 22:23:37.711 [conn1] ******
m31100| Wed Dec 12 22:23:37.711 [conn1] creating replication oplog of size: 40MB...
m31100| Wed Dec 12 22:23:37.711 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0\local.1, filling with zeroes...
m31100| Wed Dec 12 22:23:37.898 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0\local.1, size: 64MB, took 0.188 secs
m31100| Wed Dec 12 22:23:41.299 [conn1] ******
m31100| Wed Dec 12 22:23:41.299 [conn1] replSet info saving a newer config version to local.system.replset
m31100| Wed Dec 12 22:23:41.314 [conn1] replSet saveConfigLocally done
m31100| Wed Dec 12 22:23:41.314 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31100| Wed Dec 12 22:23:41.314 [conn1] build index local.replset.minvalid { _id: 1 }
m31100| Wed Dec 12 22:23:41.314 [conn1] build index done. scanned 0 total records. 0 secs
m31100| Wed Dec 12 22:23:41.314 [conn1] command admin.$cmd command: { replSetInitiate: { _id: "remove2-rs0", members: [ { _id: 0.0, host: "AMAZONA-DFVK11N:31100" }, { _id: 1.0, host: "AMAZONA-DFVK11N:31101" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:3600880 reslen:112 3606ms
{ "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 }
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31200, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "remove2-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "remove2", "shard" : 1, "node" : 0, "set" : "remove2-rs1" }, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs1-0'
Wed Dec 12 22:23:41.346 shell: started program mongod.exe --oplogSize 40 --port 31200 --noprealloc --smallfiles --rest --replSet remove2-rs1 --dbpath /data/db/remove2-rs1-0 --setParameter enableTestCommands=1
m31200| note: noprealloc may hurt performance in many applications
m31200| Wed Dec 12 22:23:41.392 [initandlisten] MongoDB starting : pid=2784 port=31200 dbpath=/data/db/remove2-rs1-0 64-bit host=AMAZONA-DFVK11N
m31200| Wed Dec 12 22:23:41.392 [initandlisten]
m31200| Wed Dec 12 22:23:41.392 [initandlisten] ** NOTE: This is a development version (2.3.2-pre-) of MongoDB.
m31200| Wed Dec 12 22:23:41.392 [initandlisten] ** Not recommended for production.
m31200| Wed Dec 12 22:23:41.392 [initandlisten]
m31200| Wed Dec 12 22:23:41.392 [initandlisten] db version v2.3.2-pre-, pdfile version 4.5
m31200| Wed Dec 12 22:23:41.392 [initandlisten] git version: 725f626aae2c2701ded3c0f97e7b5aa4c0b65979
m31200| Wed Dec 12 22:23:41.392 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
m31200| Wed Dec 12 22:23:41.392 [initandlisten] options: { dbpath: "/data/db/remove2-rs1-0", noprealloc: true, oplogSize: 40, port: 31200, replSet: "remove2-rs1", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31200| Wed Dec 12 22:23:41.392 [initandlisten] journal dir=/data/db/remove2-rs1-0\journal
m31200| Wed Dec 12 22:23:41.392 [initandlisten] recover : no journal files present, no recovery needed
m31200| Wed Dec 12 22:23:41.517 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0\local.ns, filling with zeroes...
m31200| Wed Dec 12 22:23:41.517 [FileAllocator] creating directory /data/db/remove2-rs1-0\_tmp
m31200| Wed Dec 12 22:23:41.564 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0\local.ns, size: 16MB, took 0.047 secs
m31200| Wed Dec 12 22:23:41.564 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0\local.0, filling with zeroes...
m31200| Wed Dec 12 22:23:41.611 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0\local.0, size: 16MB, took 0.047 secs
m31200| Wed Dec 12 22:23:41.611 [initandlisten] waiting for connections on port 31200
m31200| Wed Dec 12 22:23:41.611 [websvr] admin web console waiting for connections on port 32200
m31200| Wed Dec 12 22:23:41.626 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31200| Wed Dec 12 22:23:41.626 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31200| Wed Dec 12 22:23:41.860 [initandlisten] connection accepted from 127.0.0.1:64460 #1 (1 connection now open)
[ connection to AMAZONA-DFVK11N:31200 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201 ] 31201 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31201, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "remove2-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "remove2", "shard" : 1, "node" : 1, "set" : "remove2-rs1" }, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs1-1'
Wed Dec 12 22:23:41.860 shell: started program mongod.exe --oplogSize 40 --port 31201 --noprealloc --smallfiles --rest --replSet remove2-rs1 --dbpath /data/db/remove2-rs1-1 --setParameter enableTestCommands=1
m31201| note: noprealloc may hurt performance in many applications
m31201| Wed Dec 12 22:23:41.892 [initandlisten] MongoDB starting : pid=1136 port=31201 dbpath=/data/db/remove2-rs1-1 64-bit host=AMAZONA-DFVK11N
m31201| Wed Dec 12 22:23:41.892 [initandlisten]
m31201| Wed Dec 12 22:23:41.892 [initandlisten] ** NOTE: This is a development version (2.3.2-pre-) of MongoDB.
m31201| Wed Dec 12 22:23:41.892 [initandlisten] ** Not recommended for production.
m31201| Wed Dec 12 22:23:41.892 [initandlisten]
m31201| Wed Dec 12 22:23:41.892 [initandlisten] db version v2.3.2-pre-, pdfile version 4.5
m31201| Wed Dec 12 22:23:41.892 [initandlisten] git version: 725f626aae2c2701ded3c0f97e7b5aa4c0b65979
m31201| Wed Dec 12 22:23:41.892 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
m31201| Wed Dec 12 22:23:41.892 [initandlisten] options: { dbpath: "/data/db/remove2-rs1-1", noprealloc: true, oplogSize: 40, port: 31201, replSet: "remove2-rs1", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31201| Wed Dec 12 22:23:41.907 [initandlisten] journal dir=/data/db/remove2-rs1-1\journal
m31201| Wed Dec 12 22:23:41.907 [initandlisten] recover : no journal files present, no recovery needed
m31201| Wed Dec 12 22:23:42.016 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1\local.ns, filling with zeroes...
m31201| Wed Dec 12 22:23:42.016 [FileAllocator] creating directory /data/db/remove2-rs1-1\_tmp
m31201| Wed Dec 12 22:23:42.079 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1\local.ns, size: 16MB, took 0.047 secs
m31201| Wed Dec 12 22:23:42.079 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1\local.0, filling with zeroes...
m31201| Wed Dec 12 22:23:42.126 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1\local.0, size: 16MB, took 0.047 secs
m31201| Wed Dec 12 22:23:42.126 [initandlisten] waiting for connections on port 31201
m31201| Wed Dec 12 22:23:42.126 [websvr] admin web console waiting for connections on port 32201
m31201| Wed Dec 12 22:23:42.126 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31201| Wed Dec 12 22:23:42.126 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31201| Wed Dec 12 22:23:42.375 [initandlisten] connection accepted from 127.0.0.1:64461 #1 (1 connection now open)
[ connection to AMAZONA-DFVK11N:31200, connection to AMAZONA-DFVK11N:31201 ]
{ "replSetInitiate" : { "_id" : "remove2-rs1", "members" : [ { "_id" : 0, "host" : "AMAZONA-DFVK11N:31200" }, { "_id" : 1, "host" : "AMAZONA-DFVK11N:31201" } ] } }
m31200| Wed Dec 12 22:23:42.375 [conn1] replSet replSetInitiate admin command received from client
m31200| Wed Dec 12 22:23:42.375 [conn1] replSet replSetInitiate config object parses ok, 2 members specified
m31200| Wed Dec 12 22:23:42.375 [initandlisten] connection accepted from 10.28.45.224:64462 #2 (2 connections now open)
m31200| Wed Dec 12 22:23:42.375 [conn2] end connection 10.28.45.224:64462 (1 connection now open)
m31201| Wed Dec 12 22:23:42.375 [initandlisten] connection accepted from 10.28.45.224:64463 #2 (2 connections now open)
m31200| Wed Dec 12 22:23:42.375 [conn1] replSet replSetInitiate all members seem up
m31200| Wed Dec 12 22:23:42.375 [conn1] ******
m31200| Wed Dec 12 22:23:42.375 [conn1] creating replication oplog of size: 40MB...
m31200| Wed Dec 12 22:23:42.375 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0\local.1, filling with zeroes...
m31200| Wed Dec 12 22:23:42.562 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0\local.1, size: 64MB, took 0.188 secs
m31200| Wed Dec 12 22:23:45.620 [conn1] ******
m31200| Wed Dec 12 22:23:45.620 [conn1] replSet info saving a newer config version to local.system.replset
m31200| Wed Dec 12 22:23:45.620 [conn1] replSet saveConfigLocally done
m31200| Wed Dec 12 22:23:45.620 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute.
m31200| Wed Dec 12 22:23:45.620 [conn1] build index local.replset.minvalid { _id: 1 }
m31200| Wed Dec 12 22:23:45.620 [conn1] build index done. scanned 0 total records. 0 secs
m31200| Wed Dec 12 22:23:45.620 [conn1] command admin.$cmd command: { replSetInitiate: { _id: "remove2-rs1", members: [ { _id: 0.0, host: "AMAZONA-DFVK11N:31200" }, { _id: 1.0, host: "AMAZONA-DFVK11N:31201" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:3253904 reslen:112 3259ms
{ "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 }
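(For reference: the two replSetInitiate commands above are what ReplSetTest sends to the first member of each set. A minimal hand-run equivalent from the mongo shell would look like the sketch below; rs.initiate() and rs.status() are the standard shell helpers and are not part of this test's output.)

    // connected to AMAZONA-DFVK11N:31100, the first member of remove2-rs0
    rs.initiate({
        _id: "remove2-rs0",
        members: [
            { _id: 0, host: "AMAZONA-DFVK11N:31100" },
            { _id: 1, host: "AMAZONA-DFVK11N:31101" }
        ]
    });
    // two voting members is an even vote count, hence the later warning
    // "total number of votes is even - add arbiter or give one member an extra vote"
    rs.status();   // poll until the members reach PRIMARY/SECONDARY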
m31100| Wed Dec 12 22:23:46.962 [rsStart] replSet I am AMAZONA-DFVK11N:31100
m31100| Wed Dec 12 22:23:46.962 [rsStart] replSet STARTUP2
m31100| Wed Dec 12 22:23:46.962 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31100| Wed Dec 12 22:23:46.962 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31101 is up
m31101| Wed Dec 12 22:23:47.492 [rsStart] trying to contact AMAZONA-DFVK11N:31100
m31100| Wed Dec 12 22:23:47.492 [initandlisten] connection accepted from 10.28.45.224:64466 #3 (2 connections now open)
m31101| Wed Dec 12 22:23:47.492 [initandlisten] connection accepted from 10.28.45.224:64467 #3 (3 connections now open)
m31101| Wed Dec 12 22:23:47.492 [rsStart] replSet I am AMAZONA-DFVK11N:31101
m31101| Wed Dec 12 22:23:47.492 [conn3] end connection 10.28.45.224:64467 (2 connections now open)
m31101| Wed Dec 12 22:23:47.492 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Wed Dec 12 22:23:47.492 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Wed Dec 12 22:23:47.523 [rsStart] replSet saveConfigLocally done
m31101| Wed Dec 12 22:23:47.523 [rsStart] replSet STARTUP2
m31101| Wed Dec 12 22:23:47.523 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31101| Wed Dec 12 22:23:47.523 [rsSync] ******
m31101| Wed Dec 12 22:23:47.523 [rsSync] creating replication oplog of size: 40MB...
m31101| Wed Dec 12 22:23:47.523 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1\local.1, filling with zeroes...
m31101| Wed Dec 12 22:23:47.710 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1\local.1, size: 64MB, took 0.188 secs
m31100| Wed Dec 12 22:23:47.976 [rsSync] replSet SECONDARY
m31100| Wed Dec 12 22:23:48.974 [rsHealthPoll] replset info AMAZONA-DFVK11N:31101 thinks that we are down
m31100| Wed Dec 12 22:23:48.974 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31101 is now in state STARTUP2
m31100| Wed Dec 12 22:23:48.974 [rsMgr] not electing self, AMAZONA-DFVK11N:31101 would veto with 'I don't think AMAZONA-DFVK11N:31100 is electable'
m31101| Wed Dec 12 22:23:49.068 [rsSync] ******
m31101| Wed Dec 12 22:23:49.068 [rsSync] replSet initial sync pending
m31101| Wed Dec 12 22:23:49.068 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31101| Wed Dec 12 22:23:49.504 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31100 is up
m31101| Wed Dec 12 22:23:49.504 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31100 is now in state SECONDARY
m31200| Wed Dec 12 22:23:51.642 [rsStart] replSet I am AMAZONA-DFVK11N:31200
m31200| Wed Dec 12 22:23:51.642 [rsStart] replSet STARTUP2
m31200| Wed Dec 12 22:23:51.642 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31200| Wed Dec 12 22:23:51.642 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31201 is up
m31201| Wed Dec 12 22:23:52.141 [rsStart] trying to contact AMAZONA-DFVK11N:31200
m31200| Wed Dec 12 22:23:52.141 [initandlisten] connection accepted from 10.28.45.224:64468 #3 (2 connections now open)
m31201| Wed Dec 12 22:23:52.141 [initandlisten] connection accepted from 10.28.45.224:64469 #3 (3 connections now open)
m31201| Wed Dec 12 22:23:52.141 [rsStart] replSet I am AMAZONA-DFVK11N:31201
m31201| Wed Dec 12 22:23:52.141 [conn3] end connection 10.28.45.224:64469 (2 connections now open)
m31201| Wed Dec 12 22:23:52.141 [rsStart] replSet got config version 1 from a remote, saving locally
m31201| Wed Dec 12 22:23:52.141 [rsStart] replSet info saving a newer config version to local.system.replset
m31201| Wed Dec 12 22:23:52.141 [rsStart] replSet saveConfigLocally done
m31201| Wed Dec 12 22:23:52.141 [rsStart] replSet STARTUP2
m31201| Wed Dec 12 22:23:52.141 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31201| Wed Dec 12 22:23:52.141 [rsSync] ******
m31201| Wed Dec 12 22:23:52.141 [rsSync] creating replication oplog of size: 40MB...
m31201| Wed Dec 12 22:23:52.141 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1\local.1, filling with zeroes...
m31201| Wed Dec 12 22:23:52.344 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1\local.1, size: 64MB, took 0.19 secs
m31200| Wed Dec 12 22:23:52.656 [rsSync] replSet SECONDARY
m31200| Wed Dec 12 22:23:53.654 [rsHealthPoll] replset info AMAZONA-DFVK11N:31201 thinks that we are down
m31200| Wed Dec 12 22:23:53.654 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31201 is now in state STARTUP2
m31200| Wed Dec 12 22:23:53.654 [rsMgr] not electing self, AMAZONA-DFVK11N:31201 would veto with 'I don't think AMAZONA-DFVK11N:31200 is electable'
m31201| Wed Dec 12 22:23:53.701 [rsSync] ******
m31201| Wed Dec 12 22:23:53.701 [rsSync] replSet initial sync pending
m31201| Wed Dec 12 22:23:53.701 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31201| Wed Dec 12 22:23:54.153 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31200 is up
m31201| Wed Dec 12 22:23:54.153 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31200 is now in state SECONDARY
m31100| Wed Dec 12 22:23:55.011 [rsMgr] replSet info electSelf 0
m31101| Wed Dec 12 22:23:55.011 [conn2] replSet RECOVERING
m31101| Wed Dec 12 22:23:55.011 [conn2] replSet info voting yea for AMAZONA-DFVK11N:31100 (0)
m31100| Wed Dec 12 22:23:55.011 [rsMgr] replSet PRIMARY
m31101| Wed Dec 12 22:23:55.542 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31100 is now in state PRIMARY
m31100| Wed Dec 12 22:23:55.698 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0\admin.ns, filling with zeroes...
m31100| Wed Dec 12 22:23:55.744 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0\admin.ns, size: 16MB, took 0.047 secs
m31100| Wed Dec 12 22:23:55.744 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0\admin.0, filling with zeroes...
m31100| Wed Dec 12 22:23:55.791 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0\admin.0, size: 16MB, took 0.047 secs
m31100| Wed Dec 12 22:23:55.791 [conn1] build index admin.foo { _id: 1 }
m31100| Wed Dec 12 22:23:55.791 [conn1] build index done. scanned 0 total records. 0 secs
ReplSetTest [object Object]
ReplSetTest waiting for connection to AMAZONA-DFVK11N:31101 to have an oplog built.
m31100| Wed Dec 12 22:23:57.024 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31101 is now in state RECOVERING
ReplSetTest waiting for connection to AMAZONA-DFVK11N:31101 to have an oplog built.
m31200| Wed Dec 12 22:23:59.691 [rsMgr] replSet info electSelf 0
m31201| Wed Dec 12 22:23:59.691 [conn2] replSet RECOVERING
m31201| Wed Dec 12 22:23:59.691 [conn2] replSet info voting yea for AMAZONA-DFVK11N:31200 (0)
m31200| Wed Dec 12 22:23:59.691 [rsMgr] replSet PRIMARY
ReplSetTest waiting for connection to AMAZONA-DFVK11N:31101 to have an oplog built.
m31201| Wed Dec 12 22:24:00.191 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31200 is now in state PRIMARY
m31200| Wed Dec 12 22:24:01.704 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31201 is now in state RECOVERING
ReplSetTest waiting for connection to AMAZONA-DFVK11N:31101 to have an oplog built.
ReplSetTest waiting for connection to AMAZONA-DFVK11N:31101 to have an oplog built.
m31101| Wed Dec 12 22:24:05.089 [rsSync] replSet initial sync pending
m31101| Wed Dec 12 22:24:05.089 [rsSync] replSet syncing to: AMAZONA-DFVK11N:31100
m31100| Wed Dec 12 22:24:05.089 [initandlisten] connection accepted from 10.28.45.224:64472 #4 (3 connections now open)
m31101| Wed Dec 12 22:24:05.089 [rsSync] build index local.me { _id: 1 }
m31100| Wed Dec 12 22:24:05.089 [initandlisten] connection accepted from 10.28.45.224:64474 #5 (4 connections now open)
m31101| Wed Dec 12 22:24:05.089 [rsSync] build index done. scanned 0 total records. 0.001 secs
m31101| Wed Dec 12 22:24:05.089 [rsSync] replSet initial sync drop all databases
m31101| Wed Dec 12 22:24:05.089 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Wed Dec 12 22:24:05.089 [rsSync] replSet initial sync clone all databases
m31101| Wed Dec 12 22:24:05.089 [rsSync] replSet initial sync cloning db: admin
m31101| Wed Dec 12 22:24:05.089 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1\admin.ns, filling with zeroes...
m31101| Wed Dec 12 22:24:05.136 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1\admin.ns, size: 16MB, took 0.047 secs
m31101| Wed Dec 12 22:24:05.136 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1\admin.0, filling with zeroes...
m31101| Wed Dec 12 22:24:05.183 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1\admin.0, size: 16MB, took 0.047 secs
m31101| Wed Dec 12 22:24:05.214 [rsSync] build index admin.foo { _id: 1 }
m31101| Wed Dec 12 22:24:05.214 [rsSync] fastBuildIndex dupsToDrop:0
m31101| Wed Dec 12 22:24:05.214 [rsSync] build index done. scanned 1 total records. 0.001 secs
m31101| Wed Dec 12 22:24:05.214 [rsSync] replSet initial sync data copy, starting syncup
m31101| Wed Dec 12 22:24:05.214 [rsSync] oplog sync 1 of 3
m31101| Wed Dec 12 22:24:05.276 [rsSync] oplog sync 2 of 3
m31101| Wed Dec 12 22:24:05.276 [rsSync] replSet initial sync building indexes
m31101| Wed Dec 12 22:24:05.276 [rsSync] replSet initial sync cloning indexes for : admin
m31101| Wed Dec 12 22:24:05.323 [rsSync] oplog sync 3 of 3
m31100| Wed Dec 12 22:24:05.323 [conn5] end connection 10.28.45.224:64474 (3 connections now open)
m31101| Wed Dec 12 22:24:05.323 [rsSync] replSet initial sync finishing up
m31101| Wed Dec 12 22:24:05.510 [rsSync] replSet set minValid=50c94a4b:1
m31101| Wed Dec 12 22:24:05.510 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Wed Dec 12 22:24:05.510 [rsSync] build index done. scanned 0 total records. 0 secs
m31101| Wed Dec 12 22:24:05.510 [rsSync] replSet initial sync done
m31100| Wed Dec 12 22:24:05.510 [conn4] end connection 10.28.45.224:64472 (2 connections now open)
m31101| Wed Dec 12 22:24:05.775 [rsBackgroundSync] replSet syncing to: AMAZONA-DFVK11N:31100
m31100| Wed Dec 12 22:24:05.775 [initandlisten] connection accepted from 10.28.45.224:64476 #6 (3 connections now open)
{ "ts" : { "t" : 1355369035000, "i" : 1 }, "h" : NumberLong("6727796180165591057"), "v" : 2, "op" : "i", "ns" : "admin.foo", "o" : { "_id" : ObjectId("50c94a4bc11d959a0216b0d0"), "x" : 1 } }
ReplSetTest await TS for connection to AMAZONA-DFVK11N:31101 is 1355369035000:1 and latest is 1355369035000:1
ReplSetTest await oplog size for connection to AMAZONA-DFVK11N:31101 is 1
ReplSetTest await synced=true
m31101| Wed Dec 12 22:24:06.290 [rsSyncNotifier] replset setting oplog notifier to AMAZONA-DFVK11N:31100
m31100| Wed Dec 12 22:24:06.290 [initandlisten] connection accepted from 10.28.45.224:64477 #7 (4 connections now open)
m31100| Wed Dec 12 22:24:07.304 [slaveTracking] build index local.slaves { _id: 1 }
m31100| Wed Dec 12 22:24:07.304 [slaveTracking] build index done. scanned 0 total records. 0.001 secs
m31101| Wed Dec 12 22:24:07.538 [rsSync] replSet SECONDARY
Wed Dec 12 22:24:07.866 starting new replica set monitor for replica set remove2-rs0 with seed of AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101
Wed Dec 12 22:24:07.866 successfully connected to seed AMAZONA-DFVK11N:31100 for replica set remove2-rs0
m31100| Wed Dec 12 22:24:07.866 [initandlisten] connection accepted from 10.28.45.224:64479 #8 (5 connections now open)
Wed Dec 12 22:24:07.866 changing hosts to { 0: "AMAZONA-DFVK11N:31100", 1: "AMAZONA-DFVK11N:31101" } from remove2-rs0/
Wed Dec 12 22:24:07.866 trying to add new host AMAZONA-DFVK11N:31100 to replica set remove2-rs0
Wed Dec 12 22:24:07.913 cannot connect to new host AMAZONA-DFVK11N:31100 to replica set remove2-rs0, err:
Wed Dec 12 22:24:07.913 trying to add new host AMAZONA-DFVK11N:31101 to replica set remove2-rs0
m31100| Wed Dec 12 22:24:07.913 [initandlisten] connection accepted from 10.28.45.224:64481 #9 (6 connections now open)
Wed Dec 12 22:24:07.913 cannot connect to new host AMAZONA-DFVK11N:31101 to replica set remove2-rs0, err:
m31101| Wed Dec 12 22:24:07.913 [initandlisten] connection accepted from 10.28.45.224:64482 #4 (3 connections now open)
m31100| Wed Dec 12 22:24:07.913 [initandlisten] connection accepted from 10.28.45.224:64483 #10 (7 connections now open)
m31100| Wed Dec 12 22:24:07.913 [conn8] end connection 10.28.45.224:64479 (6 connections now open)
Wed Dec 12 22:24:07.913 Primary for replica set remove2-rs0 changed to AMAZONA-DFVK11N:31100
m31101| Wed Dec 12 22:24:07.913 [initandlisten] connection accepted from 10.28.45.224:64484 #5 (4 connections now open)
Wed Dec 12 22:24:07.913 replica set monitor for replica set remove2-rs0 started, address is remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101
Wed Dec 12 22:24:07.913 [ReplicaSetMonitorWatcher] starting
m31200| Wed Dec 12 22:24:07.913 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0\admin.ns, filling with zeroes...
m31200| Wed Dec 12 22:24:07.959 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0\admin.ns, size: 16MB, took 0.047 secs
m31200| Wed Dec 12 22:24:07.959 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0\admin.0, filling with zeroes...
m31200| Wed Dec 12 22:24:08.006 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0\admin.0, size: 16MB, took 0.047 secs
m31200| Wed Dec 12 22:24:08.006 [conn1] build index admin.foo { _id: 1 }
m31200| Wed Dec 12 22:24:08.006 [conn1] build index done. scanned 0 total records. 0 secs
ReplSetTest [object Object]
ReplSetTest waiting for connection to AMAZONA-DFVK11N:31201 to have an oplog built.
m31100| Wed Dec 12 22:24:09.098 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31101 is now in state SECONDARY
m31201| Wed Dec 12 22:24:09.722 [rsSync] replSet initial sync pending
m31201| Wed Dec 12 22:24:09.722 [rsSync] replSet syncing to: AMAZONA-DFVK11N:31200
m31200| Wed Dec 12 22:24:09.722 [initandlisten] connection accepted from 10.28.45.224:64485 #4 (3 connections now open)
m31201| Wed Dec 12 22:24:09.722 [rsSync] build index local.me { _id: 1 }
m31201| Wed Dec 12 22:24:09.722 [rsSync] build index done. scanned 0 total records. 0.001 secs
m31201| Wed Dec 12 22:24:09.722 [rsSync] replSet initial sync drop all databases
m31201| Wed Dec 12 22:24:09.722 [rsSync] dropAllDatabasesExceptLocal 1
m31201| Wed Dec 12 22:24:09.722 [rsSync] replSet initial sync clone all databases
m31201| Wed Dec 12 22:24:09.722 [rsSync] replSet initial sync cloning db: admin
m31200| Wed Dec 12 22:24:09.722 [initandlisten] connection accepted from 10.28.45.224:64486 #5 (4 connections now open)
m31201| Wed Dec 12 22:24:09.722 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1\admin.ns, filling with zeroes...
m31201| Wed Dec 12 22:24:09.769 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1\admin.ns, size: 16MB, took 0.047 secs
m31201| Wed Dec 12 22:24:09.769 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1\admin.0, filling with zeroes...
m31201| Wed Dec 12 22:24:09.816 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1\admin.0, size: 16MB, took 0.047 secs
m31201| Wed Dec 12 22:24:09.847 [rsSync] build index admin.foo { _id: 1 }
m31201| Wed Dec 12 22:24:09.847 [rsSync] fastBuildIndex dupsToDrop:0
m31201| Wed Dec 12 22:24:09.847 [rsSync] build index done. scanned 1 total records. 0.001 secs
m31201| Wed Dec 12 22:24:09.847 [rsSync] replSet initial sync data copy, starting syncup
m31201| Wed Dec 12 22:24:09.847 [rsSync] oplog sync 1 of 3
m31201| Wed Dec 12 22:24:09.847 [rsSync] oplog sync 2 of 3
m31201| Wed Dec 12 22:24:09.847 [rsSync] replSet initial sync building indexes
m31201| Wed Dec 12 22:24:09.847 [rsSync] replSet initial sync cloning indexes for : admin
m31201| Wed Dec 12 22:24:09.863 [rsSync] oplog sync 3 of 3
m31200| Wed Dec 12 22:24:09.863 [conn5] end connection 10.28.45.224:64486 (3 connections now open)
m31201| Wed Dec 12 22:24:09.863 [rsSync] replSet initial sync finishing up
m31201| Wed Dec 12 22:24:09.863 [rsSync] replSet set minValid=50c94a58:1
m31201| Wed Dec 12 22:24:09.863 [rsSync] build index local.replset.minvalid { _id: 1 }
m31201| Wed Dec 12 22:24:09.863 [rsSync] build index done. scanned 0 total records. 0 secs
m31201| Wed Dec 12 22:24:09.863 [rsSync] replSet initial sync done
m31200| Wed Dec 12 22:24:09.863 [conn4] end connection 10.28.45.224:64485 (2 connections now open)
{ "ts" : { "t" : 1355369048000, "i" : 1 }, "h" : NumberLong("6727797129353363473"), "v" : 2, "op" : "i", "ns" : "admin.foo", "o" : { "_id" : ObjectId("50c94a57c11d959a0216b0d1"), "x" : 1 } }
ReplSetTest await TS for connection to AMAZONA-DFVK11N:31201 is 1355369048000:1 and latest is 1355369048000:1
ReplSetTest await oplog size for connection to AMAZONA-DFVK11N:31201 is 1
ReplSetTest await synced=true
m31201| Wed Dec 12 22:24:10.393 [rsBackgroundSync] replSet syncing to: AMAZONA-DFVK11N:31200
m31200| Wed Dec 12 22:24:10.393 [initandlisten] connection accepted from 10.28.45.224:64487 #6 (3 connections now open)
m31201| Wed Dec 12 22:24:10.861 [rsSyncNotifier] replset setting oplog notifier to AMAZONA-DFVK11N:31200
m31200| Wed Dec 12 22:24:10.861 [initandlisten] connection accepted from 10.28.45.224:64490 #7 (4 connections now open)
m31200| Wed Dec 12 22:24:11.875 [slaveTracking] build index local.slaves { _id: 1 }
m31200| Wed Dec 12 22:24:11.875 [slaveTracking] build index done. scanned 0 total records. 0.001 secs
m31201| Wed Dec 12 22:24:11.891 [rsSync] replSet SECONDARY
Wed Dec 12 22:24:12.031 starting new replica set monitor for replica set remove2-rs1 with seed of AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201
Wed Dec 12 22:24:12.031 successfully connected to seed AMAZONA-DFVK11N:31200 for replica set remove2-rs1
m31200| Wed Dec 12 22:24:12.031 [initandlisten] connection accepted from 10.28.45.224:64491 #8 (5 connections now open)
Wed Dec 12 22:24:12.031 changing hosts to { 0: "AMAZONA-DFVK11N:31200", 1: "AMAZONA-DFVK11N:31201" } from remove2-rs1/
Wed Dec 12 22:24:12.031 trying to add new host AMAZONA-DFVK11N:31200 to replica set remove2-rs1
Wed Dec 12 22:24:12.031 cannot connect to new host AMAZONA-DFVK11N:31200 to replica set remove2-rs1, err:
Wed Dec 12 22:24:12.031 trying to add new host AMAZONA-DFVK11N:31201 to replica set remove2-rs1
m31200| Wed Dec 12 22:24:12.031 [initandlisten] connection accepted from 10.28.45.224:64492 #9 (6 connections now open)
Wed Dec 12 22:24:12.031 cannot connect to new host AMAZONA-DFVK11N:31201 to replica set remove2-rs1, err:
m31201| Wed Dec 12 22:24:12.031 [initandlisten] connection accepted from 10.28.45.224:64493 #4 (3 connections now open)
m31200| Wed Dec 12 22:24:12.031 [initandlisten] connection accepted from 10.28.45.224:64494 #10 (7 connections now open)
m31200| Wed Dec 12 22:24:12.031 [conn8] end connection 10.28.45.224:64491 (6 connections now open)
Wed Dec 12 22:24:12.031 Primary for replica set remove2-rs1 changed to AMAZONA-DFVK11N:31200
m31201| Wed Dec 12 22:24:12.031 [initandlisten] connection accepted from 10.28.45.224:64495 #5 (4 connections now open)
Wed Dec 12 22:24:12.031 replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201
Resetting db path '/data/db/remove2-config0'
Wed Dec 12 22:24:12.031 shell: started program mongod.exe --port 29000 --dbpath /data/db/remove2-config0 --configsvr --setParameter enableTestCommands=1
m29000| Wed Dec 12 22:24:12.078 [initandlisten] MongoDB starting : pid=2224 port=29000 dbpath=/data/db/remove2-config0 64-bit host=AMAZONA-DFVK11N
m29000| Wed Dec 12 22:24:12.078 [initandlisten]
m29000| Wed Dec 12 22:24:12.078 [initandlisten] ** NOTE: This is a development version (2.3.2-pre-) of MongoDB.
m29000| Wed Dec 12 22:24:12.078 [initandlisten] ** Not recommended for production.
m29000| Wed Dec 12 22:24:12.078 [initandlisten]
m29000| Wed Dec 12 22:24:12.078 [initandlisten] db version v2.3.2-pre-, pdfile version 4.5
m29000| Wed Dec 12 22:24:12.078 [initandlisten] git version: 725f626aae2c2701ded3c0f97e7b5aa4c0b65979
m29000| Wed Dec 12 22:24:12.078 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
m29000| Wed Dec 12 22:24:12.078 [initandlisten] options: { configsvr: true, dbpath: "/data/db/remove2-config0", port: 29000, setParameter: [ "enableTestCommands=1" ] }
m29000| Wed Dec 12 22:24:12.078 [initandlisten] journal dir=/data/db/remove2-config0\journal
m29000| Wed Dec 12 22:24:12.078 [initandlisten] recover : no journal files present, no recovery needed
m29000| Wed Dec 12 22:24:12.203 [websvr] admin web console waiting for connections on port 30000
m29000| Wed Dec 12 22:24:12.203 [initandlisten] waiting for connections on port 29000
m29000| Wed Dec 12 22:24:12.546 [initandlisten] connection accepted from 127.0.0.1:64496 #1 (1 connection now open)
"AMAZONA-DFVK11N:29000"
m29000| Wed Dec 12 22:24:12.546 [initandlisten] connection accepted from 10.28.45.224:64497 #2 (2 connections now open)
ShardingTest remove2 : { "config" : "AMAZONA-DFVK11N:29000", "shards" : [ connection to remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101, connection to remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 ] }
Wed Dec 12 22:24:12.546 shell: started program mongos.exe --port 30999 --configdb AMAZONA-DFVK11N:29000 --chunkSize 1 --setParameter enableTestCommands=1
m30999| Wed Dec 12 22:24:12.593 running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Wed Dec 12 22:24:12.593 [mongosMain] MongoS version 2.3.2-pre- starting: pid=3984 port=30999 64-bit host=AMAZONA-DFVK11N (--help for usage)
m30999| Wed Dec 12 22:24:12.593 [mongosMain] git version: 725f626aae2c2701ded3c0f97e7b5aa4c0b65979
m30999| Wed Dec 12 22:24:12.593 [mongosMain] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
m30999| Wed Dec 12 22:24:12.593 [mongosMain] options: { chunkSize: 1, configdb: "AMAZONA-DFVK11N:29000", port: 30999, setParameter: [ "enableTestCommands=1" ] }
m29000| Wed Dec 12 22:24:12.593 [initandlisten] connection accepted from 10.28.45.224:64501 #3 (3 connections now open)
m29000| Wed Dec 12 22:24:12.608 [initandlisten] connection accepted from 10.28.45.224:64502 #4 (4 connections now open)
m29000| Wed Dec 12 22:24:12.608 [FileAllocator] allocating new datafile /data/db/remove2-config0\config.ns, filling with zeroes...
m29000| Wed Dec 12 22:24:12.608 [FileAllocator] creating directory /data/db/remove2-config0\_tmp
m29000| Wed Dec 12 22:24:12.655 [FileAllocator] done allocating datafile /data/db/remove2-config0\config.ns, size: 16MB, took 0.047 secs
m29000| Wed Dec 12 22:24:12.655 [FileAllocator] allocating new datafile /data/db/remove2-config0\config.0, filling with zeroes...
m29000| Wed Dec 12 22:24:12.702 [FileAllocator] done allocating datafile /data/db/remove2-config0\config.0, size: 16MB, took 0.047 secs
m29000| Wed Dec 12 22:24:12.702 [FileAllocator] allocating new datafile /data/db/remove2-config0\config.1, filling with zeroes...
m29000| Wed Dec 12 22:24:12.702 [conn3] build index config.version { _id: 1 }
m29000| Wed Dec 12 22:24:12.702 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Wed Dec 12 22:24:12.702 [websvr] admin web console waiting for connections on port 31999
m30999| Wed Dec 12 22:24:12.702 [mongosMain] waiting for connections on port 30999
m30999| Wed Dec 12 22:24:12.702 [Balancer] about to contact config servers and shards
m29000| Wed Dec 12 22:24:12.717 [conn3] build index config.settings { _id: 1 }
m29000| Wed Dec 12 22:24:12.717 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Dec 12 22:24:12.717 [conn3] build index config.chunks { _id: 1 }
m29000| Wed Dec 12 22:24:12.717 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Dec 12 22:24:12.717 [conn3] info: creating collection config.chunks on add index
m29000| Wed Dec 12 22:24:12.717 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Wed Dec 12 22:24:12.717 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Dec 12 22:24:12.717 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Wed Dec 12 22:24:12.717 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Dec 12 22:24:12.717 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Wed Dec 12 22:24:12.717 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Dec 12 22:24:12.717 [conn3] build index config.shards { _id: 1 }
m29000| Wed Dec 12 22:24:12.717 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Dec 12 22:24:12.717 [conn3] info: creating collection config.shards on add index
m29000| Wed Dec 12 22:24:12.717 [conn3] build index config.shards { host: 1 }
m29000| Wed Dec 12 22:24:12.717 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Wed Dec 12 22:24:12.717 [Balancer] config servers and shards contacted successfully
m30999| Wed Dec 12 22:24:12.717 [Balancer] balancer id: AMAZONA-DFVK11N:30999 started at Dec 12 22:24:12
m29000| Wed Dec 12 22:24:12.717 [conn3] build index config.mongos { _id: 1 }
m29000| Wed Dec 12 22:24:12.717 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Wed Dec 12 22:24:12.717 [initandlisten] connection accepted from 10.28.45.224:64504 #5 (5 connections now open)
m30999| Wed Dec 12 22:24:12.717 [LockPinger] creating distributed lock ping thread for AMAZONA-DFVK11N:29000 and process AMAZONA-DFVK11N:30999:1355369052:41 (sleeping for 30000ms)
m29000| Wed Dec 12 22:24:12.717 [conn4] build index config.lockpings { _id: 1 }
m29000| Wed Dec 12 22:24:12.717 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Wed Dec 12 22:24:12.717 [conn5] build index config.locks { _id: 1 }
m29000| Wed Dec 12 22:24:12.717 [conn5] build index done. scanned 0 total records. 0 secs
m29000| Wed Dec 12 22:24:12.717 [conn4] build index config.lockpings { ping: new Date(1) }
m29000| Wed Dec 12 22:24:12.717 [conn4] build index done. scanned 1 total records. 0 secs
m30999| Wed Dec 12 22:24:12.717 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' acquired, ts : 50c94a5c4a44fbeaa68cfa6e
m30999| Wed Dec 12 22:24:12.717 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' unlocked.
m29000| Wed Dec 12 22:24:12.795 [FileAllocator] done allocating datafile /data/db/remove2-config0\config.1, size: 32MB, took 0.095 secs
m30999| Wed Dec 12 22:24:13.061 [mongosMain] connection accepted from 127.0.0.1:64499 #1 (1 connection now open)
ShardingTest undefined going to add shard : remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101
m30999| Wed Dec 12 22:24:13.061 [conn1] couldn't find database [admin] in config db
m29000| Wed Dec 12 22:24:13.061 [conn3] build index config.databases { _id: 1 }
m29000| Wed Dec 12 22:24:13.061 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Wed Dec 12 22:24:13.061 [conn1] put [admin] on: config:AMAZONA-DFVK11N:29000
m30999| Wed Dec 12 22:24:13.061 [conn1] starting new replica set monitor for replica set remove2-rs0 with seed of AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101
m30999| Wed Dec 12 22:24:13.061 [conn1] successfully connected to seed AMAZONA-DFVK11N:31100 for replica set remove2-rs0
m31100| Wed Dec 12 22:24:13.061 [initandlisten] connection accepted from 10.28.45.224:64505 #11 (7 connections now open)
m30999| Wed Dec 12 22:24:13.061 [conn1] changing hosts to { 0: "AMAZONA-DFVK11N:31100", 1: "AMAZONA-DFVK11N:31101" } from remove2-rs0/
m30999| Wed Dec 12 22:24:13.061 [conn1] trying to add new host AMAZONA-DFVK11N:31100 to replica set remove2-rs0
m30999| Wed Dec 12 22:24:13.061 [conn1] cannot connect to new host AMAZONA-DFVK11N:31100 to replica set remove2-rs0, err:
m30999| Wed Dec 12 22:24:13.061 [conn1] trying to add new host AMAZONA-DFVK11N:31101 to replica set remove2-rs0
m31100| Wed Dec 12 22:24:13.061 [initandlisten] connection accepted from 10.28.45.224:64506 #12 (8 connections now open)
m30999| Wed Dec 12 22:24:13.061 [conn1] cannot connect to new host AMAZONA-DFVK11N:31101 to replica set remove2-rs0, err:
m31101| Wed Dec 12 22:24:13.061 [initandlisten] connection accepted from 10.28.45.224:64507 #6 (5 connections now open)
m31100| Wed Dec 12 22:24:13.061 [initandlisten] connection accepted from 10.28.45.224:64508 #13 (9 connections now open)
m31100| Wed Dec 12 22:24:13.061 [conn11] end connection 10.28.45.224:64505 (8 connections now open)
m30999| Wed Dec 12 22:24:13.061 [conn1] Primary for replica set remove2-rs0 changed to AMAZONA-DFVK11N:31100
m31101| Wed Dec 12 22:24:13.061 [initandlisten] connection accepted from 10.28.45.224:64509 #7 (6 connections now open)
m30999| Wed Dec 12 22:24:13.061 [conn1] replica set monitor for replica set remove2-rs0 started, address is remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101
m30999| Wed Dec 12 22:24:13.061 [ReplicaSetMonitorWatcher] starting
m31100| Wed Dec 12 22:24:13.061 [initandlisten] connection accepted from 10.28.45.224:64510 #14 (9 connections now open)
m30999| Wed Dec 12 22:24:13.061 [conn1] going to add shard: { _id: "remove2-rs0", host: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101" }
{ "shardAdded" : "remove2-rs0", "ok" : 1 }
ShardingTest undefined going to add shard : remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201
m30999| Wed Dec 12 22:24:13.061 [conn1] starting new replica set monitor for replica set remove2-rs1 with seed of AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201
m30999| Wed Dec 12 22:24:13.061 [conn1] successfully connected to seed AMAZONA-DFVK11N:31200 for replica set remove2-rs1
m31200| Wed Dec 12 22:24:13.061 [initandlisten] connection accepted from 10.28.45.224:64511 #11 (7 connections now open)
m30999| Wed Dec 12 22:24:13.061 [conn1] changing hosts to { 0: "AMAZONA-DFVK11N:31200", 1: "AMAZONA-DFVK11N:31201" } from remove2-rs1/
m30999| Wed Dec 12 22:24:13.061 [conn1] trying to add new host AMAZONA-DFVK11N:31200 to replica set remove2-rs1
m30999| Wed Dec 12 22:24:13.061 [conn1] cannot connect to new host AMAZONA-DFVK11N:31200 to replica set remove2-rs1, err:
m30999| Wed Dec 12 22:24:13.061 [conn1] trying to add new host AMAZONA-DFVK11N:31201 to replica set remove2-rs1
m31200| Wed Dec 12 22:24:13.061 [initandlisten] connection accepted from 10.28.45.224:64512 #12 (8 connections now open)
m30999| Wed Dec 12 22:24:13.061 [conn1] cannot connect to new host AMAZONA-DFVK11N:31201 to replica set remove2-rs1, err:
m31201| Wed Dec 12 22:24:13.061 [initandlisten] connection accepted from 10.28.45.224:64513 #6 (5 connections now open)
m31200| Wed Dec 12 22:24:13.061 [initandlisten] connection accepted from 10.28.45.224:64514 #13 (9 connections now open)
m31200| Wed Dec 12 22:24:13.076 [conn11] end connection 10.28.45.224:64511 (8 connections now open)
m30999| Wed Dec 12 22:24:13.076 [conn1] Primary for replica set remove2-rs1 changed to AMAZONA-DFVK11N:31200
m31201| Wed Dec 12 22:24:13.076 [initandlisten] connection accepted from 10.28.45.224:64515 #7 (6 connections now open)
m30999| Wed Dec 12 22:24:13.076 [conn1] replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201
m31200| Wed Dec 12 22:24:13.076 [initandlisten] connection accepted from 10.28.45.224:64516 #14 (9 connections now open)
m30999| Wed Dec 12 22:24:13.076 [conn1] going to add shard: { _id: "remove2-rs1", host: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201" }
{ "shardAdded" : "remove2-rs1", "ok" : 1 }
m30999| Wed Dec 12 22:24:13.076 [mongosMain] connection accepted from 10.28.45.224:64517 #2 (2 connections now open)
m30999| Wed Dec 12 22:24:13.076 [conn2] couldn't find database [test] in config db
m30999| Wed Dec 12 22:24:13.076 [conn2] put [test] on: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101
m30999| Wed Dec 12 22:24:13.076 [conn2] DROP: test.remove2
m30999| Wed Dec 12 22:24:13.076 [conn2] creating WriteBackListener for: AMAZONA-DFVK11N:31100 serverID: 50c94a5c4a44fbeaa68cfa6d
m30999| Wed Dec 12 22:24:13.076 [conn2] creating WriteBackListener for: AMAZONA-DFVK11N:31101 serverID: 50c94a5c4a44fbeaa68cfa6d
m31100| Wed Dec 12 22:24:13.076 [initandlisten] connection accepted from 10.28.45.224:64518 #15 (10 connections now open)
m31100| Wed Dec 12 22:24:13.076 [conn15] CMD: drop test.remove2
{ "was" : 30, "ok" : 1 }
{ "was" : 30, "ok" : 1 }
m30999| Wed Dec 12 22:24:13.076 [conn1] enabling sharding on: test
m31100| Wed Dec 12 22:24:13.076 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0\test.ns, filling with zeroes...
m31100| Wed Dec 12 22:24:13.123 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0\test.ns, size: 16MB, took 0.047 secs
m31100| Wed Dec 12 22:24:13.123 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0\test.0, filling with zeroes...
m31100| Wed Dec 12 22:24:13.185 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0\test.0, size: 16MB, took 0.047 secs
m31100| Wed Dec 12 22:24:13.185 [conn14] build index test.remove2 { _id: 1 }
m31100| Wed Dec 12 22:24:13.185 [conn14] build index done. scanned 0 total records. 0 secs
m31100| Wed Dec 12 22:24:13.185 [conn14] info: creating collection test.remove2 on add index
m31100| Wed Dec 12 22:24:13.185 [conn14] build index test.remove2 { i: 1.0 }
m31100| Wed Dec 12 22:24:13.185 [conn14] build index done. scanned 0 total records. 0 secs
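(For reference: the addShard and enableSharding steps driven by ShardingTest above correspond to the standard sh.* shell helpers run against the mongos on port 30999. This is a sketch only, not part of the test's output; the shardCollection call is the one logged next as "CMD: shardcollection".)

    // connected to the mongos at AMAZONA-DFVK11N:30999
    sh.addShard("remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101");   // { "shardAdded" : "remove2-rs0", "ok" : 1 }
    sh.addShard("remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201");   // { "shardAdded" : "remove2-rs1", "ok" : 1 }
    sh.enableSharding("test");                                                // "enabling sharding on: test"
    sh.shardCollection("test.remove2", { i: 1 });                             // shard key { i: 1 }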
m30999| Wed Dec 12 22:24:13.185 [conn1] CMD: shardcollection: { shardCollection: "test.remove2", key: { i: 1.0 } }
m30999| Wed Dec 12 22:24:13.185 [conn1] enable sharding on: test.remove2 with shard key: { i: 1.0 }
m30999| Wed Dec 12 22:24:13.185 [conn1] going to create 1 chunk(s) for: test.remove2 using new epoch 50c94a5d4a44fbeaa68cfa6f
m31101| Wed Dec 12 22:24:13.185 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1\test.ns, filling with zeroes...
m30999| Wed Dec 12 22:24:13.185 [conn1] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 2 version: 1|0||50c94a5d4a44fbeaa68cfa6f based on: (empty)
m29000| Wed Dec 12 22:24:13.185 [conn3] build index config.collections { _id: 1 }
m29000| Wed Dec 12 22:24:13.185 [conn3] build index done. scanned 0 total records. 0 secs
m31100| Wed Dec 12 22:24:13.185 [initandlisten] connection accepted from 10.28.45.224:64519 #16 (11 connections now open)
m31100| Wed Dec 12 22:24:13.185 [conn16] no current chunk manager found for this shard, will initialize
m29000| Wed Dec 12 22:24:13.185 [initandlisten] connection accepted from 10.28.45.224:64520 #6 (6 connections now open)
m30999| Wed Dec 12 22:24:13.185 [conn1] creating WriteBackListener for: AMAZONA-DFVK11N:31200 serverID: 50c94a5c4a44fbeaa68cfa6d
m30999| Wed Dec 12 22:24:13.185 [conn1] creating WriteBackListener for: AMAZONA-DFVK11N:31201 serverID: 50c94a5c4a44fbeaa68cfa6d
m31200| Wed Dec 12 22:24:13.185 [initandlisten] connection accepted from 10.28.45.224:64521 #15 (10 connections now open)
m30999| Wed Dec 12 22:24:13.185 [conn1] resetting shard version of test.remove2 on AMAZONA-DFVK11N:31200, version is zero
m31200| Wed Dec 12 22:24:13.201 [initandlisten] connection accepted from 10.28.45.224:64522 #16 (11 connections now open)
m30999| Wed Dec 12 22:24:13.201 [conn2] resetting shard version of test.remove2 on AMAZONA-DFVK11N:31200, version is zero
m31100| Wed Dec 12 22:24:13.201 [conn14] request split points lookup for chunk test.remove2 { : MinKey } -->> { : MaxKey }
m31100| Wed Dec 12 22:24:13.201 [conn14] chunk is larger than 1024 bytes because of key { i: 0.0 }
m31100| Wed Dec 12 22:24:13.201 [conn14] request split points lookup for chunk test.remove2 { : MinKey } -->> { : MaxKey }
m31100| Wed Dec 12 22:24:13.201 [conn14] chunk is larger than 1024 bytes because of key { i: 0.0 }
m31100| Wed Dec 12 22:24:13.201 [conn14] request split points lookup for chunk test.remove2 { : MinKey } -->> { : MaxKey }
m31100| Wed Dec 12 22:24:13.201 [conn14] max number of requested split points reached (2) before the end of chunk test.remove2 { : MinKey } -->> { : MaxKey }
m31100| Wed Dec 12 22:24:13.201 [conn14] chunk is larger than 1024 bytes because of key { i: 0.0 }
m31100| Wed Dec 12 22:24:13.201 [conn14] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: MinKey }, max: { i: MaxKey }, from: "remove2-rs0", splitKeys: [ { i: 0.0 } ], shardId: "test.remove2-i_MinKey", configdb: "AMAZONA-DFVK11N:29000" }
m29000| Wed Dec 12 22:24:13.201 [initandlisten] connection accepted from 10.28.45.224:64523 #7 (7 connections now open)
m31100| Wed Dec 12 22:24:13.201 [LockPinger] creating distributed lock ping thread for AMAZONA-DFVK11N:29000 and process AMAZONA-DFVK11N:31100:1355369053:41 (sleeping for 30000ms)
m31100| Wed Dec 12 22:24:13.201 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' acquired, ts : 50c94a5df61562284ec141a2
m31100| Wed Dec 12 22:24:13.201 [conn14] splitChunk accepted at version 1|0||50c94a5d4a44fbeaa68cfa6f
m31100| Wed Dec 12 22:24:13.201 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:13-0", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369053201), what: "split", ns: "test.remove2", details: { before: { min: { i: MinKey }, max: { i: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: MinKey }, max: { i: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') }, right: { min: { i: 0.0 }, max: { i: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') } } }
m29000| Wed Dec 12 22:24:13.201 [conn6] build index config.changelog { _id: 1 }
m29000| Wed Dec 12 22:24:13.201 [conn6] build index done. scanned 0 total records. 0 secs
m31100| Wed Dec 12 22:24:13.201 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' unlocked.
m30999| Wed Dec 12 22:24:13.201 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 3 version: 1|2||50c94a5d4a44fbeaa68cfa6f based on: 1|0||50c94a5d4a44fbeaa68cfa6f
m30999| Wed Dec 12 22:24:13.201 [conn2] autosplitted test.remove2 shard: ns:test.remove2shard: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101lastmod: 1|0||000000000000000000000000min: { i: MinKey }max: { i: MaxKey } on: { i: 0.0 } (splitThreshold 921)
m31100| Wed Dec 12 22:24:13.217 [conn14] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : MaxKey }
m31100| Wed Dec 12 22:24:13.217 [conn14] max number of requested split points reached (2) before the end of chunk test.remove2 { : 0.0 } -->> { : MaxKey }
m31100| Wed Dec 12 22:24:13.217 [conn14] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 0.0 }, max: { i: MaxKey }, from: "remove2-rs0", splitKeys: [ { i: 9.0 } ], shardId: "test.remove2-i_0.0", configdb: "AMAZONA-DFVK11N:29000" }
m31100| Wed Dec 12 22:24:13.217 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' acquired, ts : 50c94a5df61562284ec141a3
m31100| Wed Dec 12 22:24:13.217 [conn14] splitChunk accepted at version 1|2||50c94a5d4a44fbeaa68cfa6f
m31100| Wed Dec 12 22:24:13.232 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:13-1", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369053232), what: "split", ns: "test.remove2", details: { before: { min: { i: 0.0 }, max: { i: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 0.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') }, right: { min: { i: 9.0 }, max: { i: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') } } }
m31100| Wed Dec 12 22:24:13.232 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' unlocked.
m30999| Wed Dec 12 22:24:13.232 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 4 version: 1|4||50c94a5d4a44fbeaa68cfa6f based on: 1|2||50c94a5d4a44fbeaa68cfa6f m30999| Wed Dec 12 22:24:13.232 [conn2] autosplitted test.remove2 shard: ns:test.remove2shard: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101lastmod: 1|2||000000000000000000000000min: { i: 0.0 }max: { i: MaxKey } on: { i: 9.0 } (splitThreshold 471859) m31101| Wed Dec 12 22:24:13.232 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1\test.ns, size: 16MB, took 0.051 secs m31101| Wed Dec 12 22:24:13.232 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1\test.0, filling with zeroes... m31100| Wed Dec 12 22:24:13.248 [conn14] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : 9.0 } m31100| Wed Dec 12 22:24:13.248 [conn14] max number of requested split points reached (2) before the end of chunk test.remove2 { : 0.0 } -->> { : 9.0 } m31100| Wed Dec 12 22:24:13.248 [conn14] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 0.0 }, max: { i: 9.0 }, from: "remove2-rs0", splitKeys: [ { i: 4.0 } ], shardId: "test.remove2-i_0.0", configdb: "AMAZONA-DFVK11N:29000" } m31100| Wed Dec 12 22:24:13.248 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' acquired, ts : 50c94a5df61562284ec141a4 m31100| Wed Dec 12 22:24:13.248 [conn14] splitChunk accepted at version 1|4||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:13.248 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:13-2", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369053248), what: "split", ns: "test.remove2", details: { before: { min: { i: 0.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 0.0 }, max: { i: 4.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') }, right: { min: { i: 4.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') } } } m31100| Wed Dec 12 22:24:13.248 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' unlocked. 
m30999| Wed Dec 12 22:24:13.248 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 5 version: 1|6||50c94a5d4a44fbeaa68cfa6f based on: 1|4||50c94a5d4a44fbeaa68cfa6f m30999| Wed Dec 12 22:24:13.248 [conn2] autosplitted test.remove2 shard: ns:test.remove2shard: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101lastmod: 1|3||000000000000000000000000min: { i: 0.0 }max: { i: 9.0 } on: { i: 4.0 } (splitThreshold 1048576) m31100| Wed Dec 12 22:24:13.248 [conn14] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : 4.0 } m31100| Wed Dec 12 22:24:13.248 [conn14] request split points lookup for chunk test.remove2 { : 4.0 } -->> { : 9.0 } m31100| Wed Dec 12 22:24:13.263 [conn14] request split points lookup for chunk test.remove2 { : 4.0 } -->> { : 9.0 } m31100| Wed Dec 12 22:24:13.263 [conn14] request split points lookup for chunk test.remove2 { : 9.0 } -->> { : MaxKey } m31100| Wed Dec 12 22:24:13.263 [conn14] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : 4.0 } m31100| Wed Dec 12 22:24:13.279 [conn14] request split points lookup for chunk test.remove2 { : 4.0 } -->> { : 9.0 } m31100| Wed Dec 12 22:24:13.279 [conn14] max number of requested split points reached (2) before the end of chunk test.remove2 { : 4.0 } -->> { : 9.0 } m31100| Wed Dec 12 22:24:13.279 [conn14] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 4.0 }, max: { i: 9.0 }, from: "remove2-rs0", splitKeys: [ { i: 6.0 } ], shardId: "test.remove2-i_4.0", configdb: "AMAZONA-DFVK11N:29000" } m31101| Wed Dec 12 22:24:13.295 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1\test.0, size: 16MB, took 0.053 secs m31101| Wed Dec 12 22:24:13.295 [repl writer worker 1] build index test.remove2 { _id: 1 } m31101| Wed Dec 12 22:24:13.295 [repl writer worker 1] build index done. scanned 0 total records. 0 secs m31101| Wed Dec 12 22:24:13.295 [repl writer worker 1] info: creating collection test.remove2 on add index m31101| Wed Dec 12 22:24:13.295 [repl writer worker 1] build index test.remove2 { i: 1.0 } m31101| Wed Dec 12 22:24:13.295 [repl writer worker 1] build index done. scanned 0 total records. 0 secs m31100| Wed Dec 12 22:24:13.326 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' acquired, ts : 50c94a5df61562284ec141a5 m31100| Wed Dec 12 22:24:13.326 [conn14] splitChunk accepted at version 1|6||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:13.326 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:13-3", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369053326), what: "split", ns: "test.remove2", details: { before: { min: { i: 4.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 4.0 }, max: { i: 6.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') }, right: { min: { i: 6.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') } } } m31100| Wed Dec 12 22:24:13.326 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' unlocked. 
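The splitChunk requests above are autosplits triggered as the test inserts data, but the same boundaries can be created by hand. A sketch, assuming a mongos connection; the split point { i: 4 } mirrors one the autosplitter chose here:

    // Split the chunk containing { i: 4 } exactly at i = 4 (run on a mongos).
    sh.splitAt("test.remove2", { i: 4 });
    // Or let the server pick a median split point for that chunk:
    sh.splitFind("test.remove2", { i: 4 });
    // Underlying admin-command form:
    // db.adminCommand({ split: "test.remove2", middle: { i: 4 } })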
m30999| Wed Dec 12 22:24:13.326 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 6 version: 1|8||50c94a5d4a44fbeaa68cfa6f based on: 1|6||50c94a5d4a44fbeaa68cfa6f m30999| Wed Dec 12 22:24:13.326 [conn2] autosplitted test.remove2 shard: ns:test.remove2shard: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101lastmod: 1|6||000000000000000000000000min: { i: 4.0 }max: { i: 9.0 } on: { i: 6.0 } (splitThreshold 1048576) m31200| Wed Dec 12 22:24:13.778 [rsHealthPoll] replSet member AMAZONA-DFVK11N:31201 is now in state SECONDARY m31100| Wed Dec 12 22:24:13.856 [conn15] insert test.remove2 keyUpdates:0 locks(micros) w:1061652 530ms m31100| Wed Dec 12 22:24:13.856 [conn14] request split points lookup for chunk test.remove2 { : 6.0 } -->> { : 9.0 } m31100| Wed Dec 12 22:24:13.856 [conn14] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : 4.0 } m31100| Wed Dec 12 22:24:13.856 [conn14] max number of requested split points reached (2) before the end of chunk test.remove2 { : 0.0 } -->> { : 4.0 } m31100| Wed Dec 12 22:24:13.856 [conn14] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 0.0 }, max: { i: 4.0 }, from: "remove2-rs0", splitKeys: [ { i: 1.0 } ], shardId: "test.remove2-i_0.0", configdb: "AMAZONA-DFVK11N:29000" } m31100| Wed Dec 12 22:24:13.872 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' acquired, ts : 50c94a5df61562284ec141a6 m31100| Wed Dec 12 22:24:13.872 [conn14] splitChunk accepted at version 1|8||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:13.872 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:13-4", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369053872), what: "split", ns: "test.remove2", details: { before: { min: { i: 0.0 }, max: { i: 4.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 0.0 }, max: { i: 1.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') }, right: { min: { i: 1.0 }, max: { i: 4.0 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') } } } m31100| Wed Dec 12 22:24:13.872 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' unlocked. 
m30999| Wed Dec 12 22:24:13.872 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 7 version: 1|10||50c94a5d4a44fbeaa68cfa6f based on: 1|8||50c94a5d4a44fbeaa68cfa6f m30999| Wed Dec 12 22:24:13.872 [conn2] autosplitted test.remove2 shard: ns:test.remove2shard: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101lastmod: 1|5||000000000000000000000000min: { i: 0.0 }max: { i: 4.0 } on: { i: 1.0 } (splitThreshold 1048576) m31100| Wed Dec 12 22:24:13.872 [conn14] request split points lookup for chunk test.remove2 { : 6.0 } -->> { : 9.0 } m31100| Wed Dec 12 22:24:13.872 [conn14] request split points lookup for chunk test.remove2 { : 1.0 } -->> { : 4.0 } m31100| Wed Dec 12 22:24:13.872 [conn14] request split points lookup for chunk test.remove2 { : 4.0 } -->> { : 6.0 } m31100| Wed Dec 12 22:24:13.887 [conn14] request split points lookup for chunk test.remove2 { : 9.0 } -->> { : MaxKey } m31100| Wed Dec 12 22:24:13.887 [conn14] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : 1.0 } m31100| Wed Dec 12 22:24:13.887 [conn14] request split points lookup for chunk test.remove2 { : 1.0 } -->> { : 4.0 } m31100| Wed Dec 12 22:24:13.887 [conn14] max number of requested split points reached (2) before the end of chunk test.remove2 { : 1.0 } -->> { : 4.0 } m31100| Wed Dec 12 22:24:13.887 [conn14] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 1.0 }, max: { i: 4.0 }, from: "remove2-rs0", splitKeys: [ { i: 2.0 } ], shardId: "test.remove2-i_1.0", configdb: "AMAZONA-DFVK11N:29000" } m31100| Wed Dec 12 22:24:13.887 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' acquired, ts : 50c94a5df61562284ec141a7 m31100| Wed Dec 12 22:24:13.887 [conn14] splitChunk accepted at version 1|10||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:13.903 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:13-5", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369053887), what: "split", ns: "test.remove2", details: { before: { min: { i: 1.0 }, max: { i: 4.0 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 1.0 }, max: { i: 2.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') }, right: { min: { i: 2.0 }, max: { i: 4.0 }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') } } } m31100| Wed Dec 12 22:24:13.903 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' unlocked. 
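The 1048576-byte figures in the splitThreshold and maxChunkSizeBytes fields suggest this test runs with a 1 MB chunk size rather than the 64 MB default. A sketch of how a small chunk size is normally configured from the shell, assuming a mongos connection (the test itself may set this through its ShardingTest options instead):

    // Chunk size lives in the config database and is expressed in megabytes.
    var conf = db.getSiblingDB("config");
    conf.settings.save({ _id: "chunksize", value: 1 });   // 1 MB chunks
    conf.settings.findOne({ _id: "chunksize" });          // verify the setting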
m30999| Wed Dec 12 22:24:13.903 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 8 version: 1|12||50c94a5d4a44fbeaa68cfa6f based on: 1|10||50c94a5d4a44fbeaa68cfa6f m30999| Wed Dec 12 22:24:13.903 [conn2] autosplitted test.remove2 shard: ns:test.remove2shard: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101lastmod: 1|10||000000000000000000000000min: { i: 1.0 }max: { i: 4.0 } on: { i: 2.0 } (splitThreshold 1048576) m31100| Wed Dec 12 22:24:13.903 [conn14] request split points lookup for chunk test.remove2 { : 6.0 } -->> { : 9.0 } m31100| Wed Dec 12 22:24:13.903 [conn14] max number of requested split points reached (2) before the end of chunk test.remove2 { : 6.0 } -->> { : 9.0 } m31100| Wed Dec 12 22:24:13.903 [conn14] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 6.0 }, max: { i: 9.0 }, from: "remove2-rs0", splitKeys: [ { i: 7.0 } ], shardId: "test.remove2-i_6.0", configdb: "AMAZONA-DFVK11N:29000" } m31100| Wed Dec 12 22:24:13.903 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' acquired, ts : 50c94a5df61562284ec141a8 m31100| Wed Dec 12 22:24:13.903 [conn14] splitChunk accepted at version 1|12||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:13.903 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:13-6", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369053903), what: "split", ns: "test.remove2", details: { before: { min: { i: 6.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 6.0 }, max: { i: 7.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') }, right: { min: { i: 7.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f') } } } m31100| Wed Dec 12 22:24:13.903 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' unlocked. 
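The repeated 'ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } ... chunk diff: 8' lines below are the test polling how many chunks each shard owns while it waits for the balancer to even them out. A sketch of how such counts can be read from the config metadata, assuming a mongos connection:

    // Count chunks of test.remove2 per shard.
    var chunks = db.getSiblingDB("config").chunks;
    var rs0 = chunks.count({ ns: "test.remove2", shard: "remove2-rs0" });
    var rs1 = chunks.count({ ns: "test.remove2", shard: "remove2-rs1" });
    print("remove2-rs0: " + rs0 + ", remove2-rs1: " + rs1 +
          ", diff: " + Math.abs(rs0 - rs1));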
m30999| Wed Dec 12 22:24:13.903 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 9 version: 1|14||50c94a5d4a44fbeaa68cfa6f based on: 1|12||50c94a5d4a44fbeaa68cfa6f m30999| Wed Dec 12 22:24:13.903 [conn2] autosplitted test.remove2 shard: ns:test.remove2shard: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101lastmod: 1|8||000000000000000000000000min: { i: 6.0 }max: { i: 9.0 } on: { i: 7.0 } (splitThreshold 1048576) m31100| Wed Dec 12 22:24:13.919 [conn14] request split points lookup for chunk test.remove2 { : 2.0 } -->> { : 4.0 } m31100| Wed Dec 12 22:24:13.919 [conn14] request split points lookup for chunk test.remove2 { : 4.0 } -->> { : 6.0 } m31100| Wed Dec 12 22:24:13.919 [conn14] request split points lookup for chunk test.remove2 { : 7.0 } -->> { : 9.0 } m31100| Wed Dec 12 22:24:13.919 [conn14] request split points lookup for chunk test.remove2 { : 9.0 } -->> { : MaxKey } m31100| Wed Dec 12 22:24:13.934 [conn14] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : 1.0 } m31100| Wed Dec 12 22:24:13.934 [conn14] chunk is larger than 1048576 bytes because of key { i: 0.0 } m31100| Wed Dec 12 22:24:13.934 [conn14] request split points lookup for chunk test.remove2 { : 1.0 } -->> { : 2.0 } m31100| Wed Dec 12 22:24:13.934 [conn14] chunk is larger than 1048576 bytes because of key { i: 1.0 } m31100| Wed Dec 12 22:24:13.934 [conn14] request split points lookup for chunk test.remove2 { : 6.0 } -->> { : 7.0 } m31100| Wed Dec 12 22:24:13.934 [conn14] chunk is larger than 1048576 bytes because of key { i: 6.0 } m30999| Wed Dec 12 22:24:13.934 [conn1] creating WriteBackListener for: AMAZONA-DFVK11N:29000 serverID: 50c94a5c4a44fbeaa68cfa6d m29000| Wed Dec 12 22:24:13.934 [initandlisten] connection accepted from 10.28.45.224:64526 #8 (8 connections now open) ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 m31101| Wed Dec 12 22:24:15.135 [conn2] end connection 10.28.45.224:64457 (5 connections now open) m31101| Wed Dec 12 22:24:15.135 [initandlisten] connection accepted from 10.28.45.224:64527 #8 (6 connections now open) ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 
max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 m31100| Wed Dec 12 22:24:17.678 [conn3] end connection 10.28.45.224:64466 (10 connections now open) m31100| Wed Dec 12 22:24:17.678 [initandlisten] connection accepted from 10.28.45.224:64528 #17 (11 connections now open) ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 m30999| Wed Dec 12 22:24:18.723 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' acquired, ts : 50c94a624a44fbeaa68cfa70 m29000| Wed Dec 12 22:24:18.723 [conn3] build index config.tags { _id: 1 } m29000| Wed Dec 12 22:24:18.723 [conn3] build index done. scanned 0 total records. 0.001 secs m29000| Wed Dec 12 22:24:18.723 [conn3] info: creating collection config.tags on add index m29000| Wed Dec 12 22:24:18.723 [conn3] build index config.tags { ns: 1, min: 1 } m29000| Wed Dec 12 22:24:18.723 [conn3] build index done. scanned 0 total records. 0 secs m30999| Wed Dec 12 22:24:18.723 [Balancer] ns: test.remove2 going to move { _id: "test.remove2-i_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs0" } from: remove2-rs0 to: remove2-rs1 tag [] m30999| Wed Dec 12 22:24:18.723 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2shard: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101lastmod: 1|1||000000000000000000000000min: { i: MinKey }max: { i: 0.0 }) remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 -> remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 m31100| Wed Dec 12 22:24:18.723 [conn14] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", to: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "AMAZONA-DFVK11N:29000", secondaryThrottle: false, waitForDelete: false } m31200| Wed Dec 12 22:24:18.723 [initandlisten] connection accepted from 10.28.45.224:64530 #17 (12 connections now open) m31100| Wed Dec 12 22:24:18.723 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' acquired, ts : 50c94a62f61562284ec141a9 m31100| Wed Dec 12 22:24:18.723 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:18-7", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369058723), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } } m31100| Wed Dec 12 22:24:18.723 [conn14] moveChunk request accepted at version 1|14||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:18.723 [conn14] moveChunk number of documents: 0 m31201| Wed Dec 12 22:24:18.723 [initandlisten] connection accepted from 10.28.45.224:64532 #8 (7 connections now open) m31101| Wed Dec 12 22:24:18.739 
[initandlisten] connection accepted from 10.28.45.224:64538 #9 (7 connections now open) m31200| Wed Dec 12 22:24:18.723 [initandlisten] connection accepted from 10.28.45.224:64531 #18 (13 connections now open) m31200| Wed Dec 12 22:24:18.723 [initandlisten] connection accepted from 10.28.45.224:64533 #19 (14 connections now open) m31100| Wed Dec 12 22:24:18.723 [conn14] starting new replica set monitor for replica set remove2-rs1 with seed of AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 m31100| Wed Dec 12 22:24:18.723 [conn14] successfully connected to seed AMAZONA-DFVK11N:31200 for replica set remove2-rs1 m31100| Wed Dec 12 22:24:18.723 [conn14] changing hosts to { 0: "AMAZONA-DFVK11N:31200", 1: "AMAZONA-DFVK11N:31201" } from remove2-rs1/ m31100| Wed Dec 12 22:24:18.723 [conn14] trying to add new host AMAZONA-DFVK11N:31200 to replica set remove2-rs1 m31100| Wed Dec 12 22:24:18.723 [conn14] cannot connect to new host AMAZONA-DFVK11N:31200 to replica set remove2-rs1, err: m31100| Wed Dec 12 22:24:18.723 [conn14] trying to add new host AMAZONA-DFVK11N:31201 to replica set remove2-rs1 m31100| Wed Dec 12 22:24:18.723 [conn14] cannot connect to new host AMAZONA-DFVK11N:31201 to replica set remove2-rs1, err: m31100| Wed Dec 12 22:24:18.723 [conn14] Primary for replica set remove2-rs1 changed to AMAZONA-DFVK11N:31200 m31100| Wed Dec 12 22:24:18.739 [conn14] replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 m31100| Wed Dec 12 22:24:18.739 [ReplicaSetMonitorWatcher] starting m31100| Wed Dec 12 22:24:18.739 [initandlisten] connection accepted from 10.28.45.224:64536 #18 (12 connections now open) m31100| Wed Dec 12 22:24:18.739 [initandlisten] connection accepted from 10.28.45.224:64537 #19 (13 connections now open) m31100| Wed Dec 12 22:24:18.739 [initandlisten] connection accepted from 10.28.45.224:64539 #20 (14 connections now open) m31100| Wed Dec 12 22:24:18.739 [conn18] end connection 10.28.45.224:64536 (13 connections now open) m31100| Wed Dec 12 22:24:18.739 [initandlisten] connection accepted from 10.28.45.224:64541 #21 (14 connections now open) m31100| Wed Dec 12 22:24:18.755 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Wed Dec 12 22:24:18.770 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Wed Dec 12 22:24:18.786 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31201| Wed Dec 12 22:24:18.723 [initandlisten] connection accepted from 10.28.45.224:64534 #9 (8 connections now open) m31101| Wed Dec 12 22:24:18.739 [initandlisten] connection accepted from 10.28.45.224:64540 #10 (8 connections now open) m31200| Wed Dec 12 22:24:18.723 [conn17] end connection 10.28.45.224:64530 (13 connections now open) m31200| Wed Dec 12 
22:24:18.739 [initandlisten] connection accepted from 10.28.45.224:64535 #20 (14 connections now open) m31200| Wed Dec 12 22:24:18.739 [migrateThread] starting new replica set monitor for replica set remove2-rs0 with seed of AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 m31200| Wed Dec 12 22:24:18.739 [migrateThread] successfully connected to seed AMAZONA-DFVK11N:31100 for replica set remove2-rs0 m31200| Wed Dec 12 22:24:18.739 [migrateThread] changing hosts to { 0: "AMAZONA-DFVK11N:31100", 1: "AMAZONA-DFVK11N:31101" } from remove2-rs0/ m31200| Wed Dec 12 22:24:18.739 [migrateThread] trying to add new host AMAZONA-DFVK11N:31100 to replica set remove2-rs0 m31200| Wed Dec 12 22:24:18.739 [migrateThread] cannot connect to new host AMAZONA-DFVK11N:31100 to replica set remove2-rs0, err: m31200| Wed Dec 12 22:24:18.739 [migrateThread] trying to add new host AMAZONA-DFVK11N:31101 to replica set remove2-rs0 m31200| Wed Dec 12 22:24:18.739 [migrateThread] cannot connect to new host AMAZONA-DFVK11N:31101 to replica set remove2-rs0, err: m31200| Wed Dec 12 22:24:18.739 [migrateThread] Primary for replica set remove2-rs0 changed to AMAZONA-DFVK11N:31100 m31200| Wed Dec 12 22:24:18.739 [migrateThread] replica set monitor for replica set remove2-rs0 started, address is remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 m31200| Wed Dec 12 22:24:18.739 [ReplicaSetMonitorWatcher] starting m31200| Wed Dec 12 22:24:18.739 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0\test.ns, filling with zeroes... m31200| Wed Dec 12 22:24:18.786 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0\test.ns, size: 16MB, took 0.047 secs m31200| Wed Dec 12 22:24:18.786 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0\test.0, filling with zeroes... m31100| Wed Dec 12 22:24:18.801 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8 chunk diff: 8 m31100| Wed Dec 12 22:24:18.833 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:18.833 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0\test.0, size: 16MB, took 0.047 secs m31200| Wed Dec 12 22:24:18.833 [migrateThread] build index test.remove2 { _id: 1 } m31200| Wed Dec 12 22:24:18.833 [migrateThread] build index done. scanned 0 total records. 0 secs m31200| Wed Dec 12 22:24:18.833 [migrateThread] info: creating collection test.remove2 on add index m31200| Wed Dec 12 22:24:18.833 [migrateThread] build index test.remove2 { i: 1.0 } m31200| Wed Dec 12 22:24:18.833 [migrateThread] build index done. scanned 0 total records. 0 secs m31201| Wed Dec 12 22:24:18.848 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1\test.ns, filling with zeroes... 
m31200| Wed Dec 12 22:24:18.848 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Wed Dec 12 22:24:18.848 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 } m31200| Wed Dec 12 22:24:18.848 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: MinKey } -> { i: 0.0 } m31100| Wed Dec 12 22:24:18.879 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Wed Dec 12 22:24:18.879 [conn14] moveChunk setting version to: 2|0||50c94a5d4a44fbeaa68cfa6f m31200| Wed Dec 12 22:24:18.879 [initandlisten] connection accepted from 10.28.45.224:64543 #21 (15 connections now open) m31200| Wed Dec 12 22:24:18.879 [conn21] Waiting for commit to finish m31200| Wed Dec 12 22:24:18.895 [conn21] Waiting for commit to finish m31200| Wed Dec 12 22:24:18.895 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 } m31200| Wed Dec 12 22:24:18.895 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: MinKey } -> { i: 0.0 } m31200| Wed Dec 12 22:24:18.895 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:18-0", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1355369058895), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 5: 107, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 46 } } m29000| Wed Dec 12 22:24:18.895 [initandlisten] connection accepted from 10.28.45.224:64544 #9 (9 connections now open) m31201| Wed Dec 12 22:24:18.895 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1\test.ns, size: 16MB, took 0.047 secs m31201| Wed Dec 12 22:24:18.895 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1\test.0, filling with zeroes... 
m31100| Wed Dec 12 22:24:18.911 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Wed Dec 12 22:24:18.911 [conn14] moveChunk updating self version to: 2|1||50c94a5d4a44fbeaa68cfa6f through { i: 0.0 } -> { i: 1.0 } for collection 'test.remove2' m29000| Wed Dec 12 22:24:18.911 [initandlisten] connection accepted from 10.28.45.224:64545 #10 (10 connections now open) m31100| Wed Dec 12 22:24:18.911 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:18-8", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369058911), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } } m31100| Wed Dec 12 22:24:18.911 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Wed Dec 12 22:24:18.911 [conn14] MigrateFromStatus::done Global lock acquired m31100| Wed Dec 12 22:24:18.911 [conn14] forking for cleanup of chunk data m31100| Wed Dec 12 22:24:18.911 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Wed Dec 12 22:24:18.911 [conn14] MigrateFromStatus::done Global lock acquired m31100| Wed Dec 12 22:24:18.911 [cleanupOldData-50c94a62f61562284ec141aa] (start) waiting to cleanup test.remove2 from { i: MinKey } -> { i: 0.0 }, # cursors remaining: 0 m31100| Wed Dec 12 22:24:18.911 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' unlocked. m31100| Wed Dec 12 22:24:18.911 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:18-9", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369058911), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 7, step4 of 6: 138, step5 of 6: 32, step6 of 6: 0 } } m31100| Wed Dec 12 22:24:18.911 [conn14] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", to: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "AMAZONA-DFVK11N:29000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:21 r:76 w:36 reslen:37 182ms m30999| Wed Dec 12 22:24:18.911 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 10 version: 2|1||50c94a5d4a44fbeaa68cfa6f based on: 1|14||50c94a5d4a44fbeaa68cfa6f m30999| Wed Dec 12 22:24:18.911 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' unlocked. 
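The migration just completed (the MinKey -> 0 chunk moving from remove2-rs0 to remove2-rs1) was initiated by the balancer, but the same move can be requested manually. A sketch, assuming a mongos connection; the find document only needs to fall inside the chunk being moved:

    // Move the chunk that contains { i: -1 } (the MinKey -> 0 chunk) to remove2-rs1.
    sh.moveChunk("test.remove2", { i: -1 }, "remove2-rs1");
    // Equivalent admin-command form:
    // db.adminCommand({ moveChunk: "test.remove2", find: { i: -1 }, to: "remove2-rs1" })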
m31100| Wed Dec 12 22:24:18.942 [cleanupOldData-50c94a62f61562284ec141aa] waiting to remove documents for test.remove2 from { i: MinKey } -> { i: 0.0 } m31100| Wed Dec 12 22:24:18.942 [cleanupOldData-50c94a62f61562284ec141aa] moveChunk starting delete for: test.remove2 from { i: MinKey } -> { i: 0.0 } m31100| Wed Dec 12 22:24:18.942 [cleanupOldData-50c94a62f61562284ec141aa] moveChunk deleted 0 documents for test.remove2 from { i: MinKey } -> { i: 0.0 } m31201| Wed Dec 12 22:24:18.942 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1\test.0, size: 16MB, took 0.047 secs m31201| Wed Dec 12 22:24:18.942 [repl writer worker 1] build index test.remove2 { _id: 1 } m31201| Wed Dec 12 22:24:18.942 [repl writer worker 1] build index done. scanned 0 total records. 0 secs m31201| Wed Dec 12 22:24:18.942 [repl writer worker 1] info: creating collection test.remove2 on add index m31201| Wed Dec 12 22:24:18.942 [repl writer worker 1] build index test.remove2 { i: 1.0 } m31201| Wed Dec 12 22:24:18.942 [repl writer worker 1] build index done. scanned 0 total records. 0 secs ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7 chunk diff: 6 ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7 chunk diff: 6 ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7 chunk diff: 6 ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7 chunk diff: 6 m31201| Wed Dec 12 22:24:19.815 [conn2] end connection 10.28.45.224:64463 (7 connections now open) m31201| Wed Dec 12 22:24:19.815 [initandlisten] connection accepted from 10.28.45.224:64546 #10 (8 connections now open) ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7 chunk diff: 6 m30999| Wed Dec 12 22:24:19.925 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' acquired, ts : 50c94a634a44fbeaa68cfa71 m30999| Wed Dec 12 22:24:19.925 [Balancer] ns: test.remove2 going to move { _id: "test.remove2-i_0.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f'), ns: "test.remove2", min: { i: 0.0 }, max: { i: 1.0 }, shard: "remove2-rs0" } from: remove2-rs0 to: remove2-rs1 tag [] m30999| Wed Dec 12 22:24:19.925 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2shard: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101lastmod: 2|1||000000000000000000000000min: { i: 0.0 }max: { i: 1.0 }) remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 -> remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 m31100| Wed Dec 12 22:24:19.925 [conn14] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", to: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "AMAZONA-DFVK11N:29000", secondaryThrottle: false, waitForDelete: false } m31100| Wed Dec 12 22:24:19.925 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' acquired, ts : 50c94a63f61562284ec141ab m31100| Wed Dec 12 22:24:19.925 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:19-10", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369059925), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs0", to: "remove2-rs1" } } m31100| Wed Dec 12 
22:24:19.925 [conn14] moveChunk request accepted at version 2|1||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:19.925 [conn14] moveChunk number of documents: 30 m31100| Wed Dec 12 22:24:19.940 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 262832, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:19.940 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Wed Dec 12 22:24:19.940 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 } m31200| Wed Dec 12 22:24:19.940 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 0.0 } -> { i: 1.0 } m31100| Wed Dec 12 22:24:19.956 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1.0 }, state: "steady", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Wed Dec 12 22:24:19.956 [conn14] moveChunk setting version to: 3|0||50c94a5d4a44fbeaa68cfa6f m31200| Wed Dec 12 22:24:19.956 [conn21] Waiting for commit to finish m31200| Wed Dec 12 22:24:19.971 [conn21] Waiting for commit to finish m31200| Wed Dec 12 22:24:19.971 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 } m31200| Wed Dec 12 22:24:19.971 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 0.0 } -> { i: 1.0 } m31200| Wed Dec 12 22:24:19.971 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:19-1", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1355369059971), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 27 } } m31100| Wed Dec 12 22:24:19.987 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1.0 }, state: "done", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Wed Dec 12 22:24:19.987 [conn14] moveChunk updating self version to: 3|1||50c94a5d4a44fbeaa68cfa6f through { i: 1.0 } -> { i: 2.0 } for collection 'test.remove2' m31100| Wed Dec 12 22:24:19.987 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:19-11", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369059987), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs0", to: "remove2-rs1" } } m31100| Wed Dec 12 22:24:19.987 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Wed Dec 12 22:24:19.987 [conn14] MigrateFromStatus::done Global lock acquired m31100| Wed Dec 12 22:24:19.987 [conn14] forking for cleanup of chunk data m31100| Wed Dec 12 22:24:19.987 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Wed Dec 12 22:24:19.987 [conn14] MigrateFromStatus::done Global lock acquired m31100| Wed Dec 12 22:24:19.987 
[cleanupOldData-50c94a63f61562284ec141ac] (start) waiting to cleanup test.remove2 from { i: 0.0 } -> { i: 1.0 }, # cursors remaining: 0 m31100| Wed Dec 12 22:24:19.987 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' unlocked. m31100| Wed Dec 12 22:24:19.987 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:19-12", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369059987), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 23, step5 of 6: 31, step6 of 6: 0 } } m30999| Wed Dec 12 22:24:19.987 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 11 version: 3|1||50c94a5d4a44fbeaa68cfa6f based on: 2|1||50c94a5d4a44fbeaa68cfa6f m30999| Wed Dec 12 22:24:19.987 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' unlocked. m31100| Wed Dec 12 22:24:20.018 [cleanupOldData-50c94a63f61562284ec141ac] waiting to remove documents for test.remove2 from { i: 0.0 } -> { i: 1.0 } m31100| Wed Dec 12 22:24:20.018 [cleanupOldData-50c94a63f61562284ec141ac] moveChunk starting delete for: test.remove2 from { i: 0.0 } -> { i: 1.0 } ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6 chunk diff: 4 m31100| Wed Dec 12 22:24:20.018 [cleanupOldData-50c94a63f61562284ec141ac] moveChunk deleted 30 documents for test.remove2 from { i: 0.0 } -> { i: 1.0 } ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6 chunk diff: 4 ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6 chunk diff: 4 ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6 chunk diff: 4 ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6 chunk diff: 4 m30999| Wed Dec 12 22:24:21.001 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' acquired, ts : 50c94a654a44fbeaa68cfa72 m30999| Wed Dec 12 22:24:21.001 [Balancer] ns: test.remove2 going to move { _id: "test.remove2-i_1.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f'), ns: "test.remove2", min: { i: 1.0 }, max: { i: 2.0 }, shard: "remove2-rs0" } from: remove2-rs0 to: remove2-rs1 tag [] m30999| Wed Dec 12 22:24:21.001 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2shard: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101lastmod: 3|1||000000000000000000000000min: { i: 1.0 }max: { i: 2.0 }) remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 -> remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 m31100| Wed Dec 12 22:24:21.001 [conn14] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", to: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 1.0 }, max: { i: 2.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "AMAZONA-DFVK11N:29000", secondaryThrottle: false, waitForDelete: false } m31100| Wed Dec 12 22:24:21.001 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' acquired, ts : 50c94a65f61562284ec141ad m31100| Wed Dec 12 22:24:21.001 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:21-13", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369061001), what: "moveChunk.start", ns: "test.remove2", details: { 
min: { i: 1.0 }, max: { i: 2.0 }, from: "remove2-rs0", to: "remove2-rs1" } } m31100| Wed Dec 12 22:24:21.001 [conn14] moveChunk request accepted at version 3|1||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:21.001 [conn14] moveChunk number of documents: 30 ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6 chunk diff: 4 m31200| Wed Dec 12 22:24:21.017 [migrateThread] Waiting for replication to catch up before entering critical section m31100| Wed Dec 12 22:24:21.017 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: 1.0 }, max: { i: 2.0 }, shardKeyPattern: { i: 1.0 }, state: "clone", counts: { cloned: 20, clonedBytes: 328540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Wed Dec 12 22:24:21.032 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: 1.0 }, max: { i: 2.0 }, shardKeyPattern: { i: 1.0 }, state: "steady", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Wed Dec 12 22:24:21.032 [conn14] moveChunk setting version to: 4|0||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:21.063 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: 1.0 }, max: { i: 2.0 }, shardKeyPattern: { i: 1.0 }, state: "done", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Wed Dec 12 22:24:21.063 [conn14] moveChunk updating self version to: 4|1||50c94a5d4a44fbeaa68cfa6f through { i: 2.0 } -> { i: 4.0 } for collection 'test.remove2' m31100| Wed Dec 12 22:24:21.063 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:21-14", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369061063), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 2.0 }, from: "remove2-rs0", to: "remove2-rs1" } } m31100| Wed Dec 12 22:24:21.063 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Wed Dec 12 22:24:21.063 [conn14] MigrateFromStatus::done Global lock acquired m31100| Wed Dec 12 22:24:21.063 [conn14] forking for cleanup of chunk data m31100| Wed Dec 12 22:24:21.063 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Wed Dec 12 22:24:21.063 [conn14] MigrateFromStatus::done Global lock acquired m31100| Wed Dec 12 22:24:21.063 [cleanupOldData-50c94a65f61562284ec141ae] (start) waiting to cleanup test.remove2 from { i: 1.0 } -> { i: 2.0 }, # cursors remaining: 0 m31100| Wed Dec 12 22:24:21.063 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' unlocked. 
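Each "about to log metadata event" entry above ends up as a document in the config servers' changelog collection, which is often the quickest way to review split and migration history after a run like this. A sketch, assuming a mongos connection:

    // Recent split/migration events for the collection, newest first.
    db.getSiblingDB("config").changelog
      .find({ ns: "test.remove2", what: /^(split|moveChunk)/ })
      .sort({ time: -1 })
      .limit(10);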
m30999| Wed Dec 12 22:24:21.063 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 12 version: 4|1||50c94a5d4a44fbeaa68cfa6f based on: 3|1||50c94a5d4a44fbeaa68cfa6f m31200| Wed Dec 12 22:24:21.017 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 2.0 } m31200| Wed Dec 12 22:24:21.017 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 1.0 } -> { i: 2.0 } m31200| Wed Dec 12 22:24:21.032 [conn21] Waiting for commit to finish m31200| Wed Dec 12 22:24:21.048 [conn21] Waiting for commit to finish m31200| Wed Dec 12 22:24:21.048 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 2.0 } m31200| Wed Dec 12 22:24:21.048 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 1.0 } -> { i: 2.0 } m31100| Wed Dec 12 22:24:21.063 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:21-15", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369061063), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 2.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 23, step5 of 6: 31, step6 of 6: 0 } } m31100| Wed Dec 12 22:24:21.095 [cleanupOldData-50c94a65f61562284ec141ae] waiting to remove documents for test.remove2 from { i: 1.0 } -> { i: 2.0 } m31100| Wed Dec 12 22:24:21.095 [cleanupOldData-50c94a65f61562284ec141ae] moveChunk starting delete for: test.remove2 from { i: 1.0 } -> { i: 2.0 } m31100| Wed Dec 12 22:24:21.095 [cleanupOldData-50c94a65f61562284ec141ae] moveChunk deleted 30 documents for test.remove2 from { i: 1.0 } -> { i: 2.0 } m31200| Wed Dec 12 22:24:21.048 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:21-2", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1355369061048), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 2.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 28 } } m30999| Wed Dec 12 22:24:21.063 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' unlocked. 
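Each "[Balancer] distributed lock ... acquired / unlocked" pair above is one balancing round run by the mongos. The balancer can be inspected or paused from the shell; a sketch, assuming a mongos connection:

    sh.getBalancerState();        // true if the balancer is enabled
    sh.setBalancerState(false);   // pause balancing (e.g. before maintenance)
    sh.setBalancerState(true);    // resume it
    // The flag is stored in config.settings:
    // db.getSiblingDB("config").settings.findOne({ _id: "balancer" })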
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5 chunk diff: 2 ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5 chunk diff: 2 ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5 chunk diff: 2 ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5 chunk diff: 2 m30999| Wed Dec 12 22:24:22.077 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' acquired, ts : 50c94a664a44fbeaa68cfa73 m30999| Wed Dec 12 22:24:22.077 [Balancer] ns: test.remove2 going to move { _id: "test.remove2-i_2.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f'), ns: "test.remove2", min: { i: 2.0 }, max: { i: 4.0 }, shard: "remove2-rs0" } from: remove2-rs0 to: remove2-rs1 tag [] m30999| Wed Dec 12 22:24:22.077 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2shard: remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101lastmod: 4|1||000000000000000000000000min: { i: 2.0 }max: { i: 4.0 }) remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 -> remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 m31100| Wed Dec 12 22:24:22.077 [conn14] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", to: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 2.0 }, max: { i: 4.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_2.0", configdb: "AMAZONA-DFVK11N:29000", secondaryThrottle: false, waitForDelete: false } m31100| Wed Dec 12 22:24:22.077 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' acquired, ts : 50c94a66f61562284ec141af m31100| Wed Dec 12 22:24:22.077 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:22-16", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369062077), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 2.0 }, max: { i: 4.0 }, from: "remove2-rs0", to: "remove2-rs1" } } m31100| Wed Dec 12 22:24:22.077 [conn14] moveChunk request accepted at version 4|1||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:22.077 [conn14] moveChunk number of documents: 60 m31100| Wed Dec 12 22:24:22.093 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: 2.0 }, max: { i: 4.0 }, shardKeyPattern: { i: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:22.093 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Wed Dec 12 22:24:22.093 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 2.0 } -> { i: 4.0 } m31100| Wed Dec 12 22:24:22.109 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: 2.0 }, max: { i: 4.0 }, shardKeyPattern: { i: 1.0 }, state: "catchup", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:22.109 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 2.0 } -> { i: 4.0 } m31100| Wed Dec 12 22:24:22.124 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: 
"remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: 2.0 }, max: { i: 4.0 }, shardKeyPattern: { i: 1.0 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Wed Dec 12 22:24:22.124 [conn14] moveChunk setting version to: 5|0||50c94a5d4a44fbeaa68cfa6f m31200| Wed Dec 12 22:24:22.124 [conn21] Waiting for commit to finish m31200| Wed Dec 12 22:24:22.140 [conn21] Waiting for commit to finish m31200| Wed Dec 12 22:24:22.140 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 2.0 } -> { i: 4.0 } m31200| Wed Dec 12 22:24:22.140 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 2.0 } -> { i: 4.0 } m31200| Wed Dec 12 22:24:22.140 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:22-3", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1355369062140), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 2.0 }, max: { i: 4.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 32 } } m31100| Wed Dec 12 22:24:22.155 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", min: { i: 2.0 }, max: { i: 4.0 }, shardKeyPattern: { i: 1.0 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Wed Dec 12 22:24:22.155 [conn14] moveChunk updating self version to: 5|1||50c94a5d4a44fbeaa68cfa6f through { i: 4.0 } -> { i: 6.0 } for collection 'test.remove2' m31100| Wed Dec 12 22:24:22.155 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:22-17", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369062155), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 2.0 }, max: { i: 4.0 }, from: "remove2-rs0", to: "remove2-rs1" } } m31100| Wed Dec 12 22:24:22.155 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Wed Dec 12 22:24:22.155 [conn14] MigrateFromStatus::done Global lock acquired m31100| Wed Dec 12 22:24:22.155 [conn14] forking for cleanup of chunk data m31100| Wed Dec 12 22:24:22.155 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Wed Dec 12 22:24:22.155 [conn14] MigrateFromStatus::done Global lock acquired m31100| Wed Dec 12 22:24:22.155 [cleanupOldData-50c94a66f61562284ec141b0] (start) waiting to cleanup test.remove2 from { i: 2.0 } -> { i: 4.0 }, # cursors remaining: 0 m31100| Wed Dec 12 22:24:22.155 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31100:1355369053:41' unlocked. 
m31100| Wed Dec 12 22:24:22.155 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:22-18", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64510", time: new Date(1355369062155), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 2.0 }, max: { i: 4.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 39, step5 of 6: 31, step6 of 6: 0 } }
ShardingTest input: { "remove2-rs0" : 4, "remove2-rs1" : 4 } min: 4 max: 4 chunk diff: 0
{ "was" : 30, "ok" : 1 }
m30999| Wed Dec 12 22:24:22.155 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 13 version: 5|1||50c94a5d4a44fbeaa68cfa6f based on: 4|1||50c94a5d4a44fbeaa68cfa6f
m30999| Wed Dec 12 22:24:22.155 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' unlocked.
m31200| Wed Dec 12 22:24:22.155 [conn16] no current chunk manager found for this shard, will initialize
m31200| Wed Dec 12 22:24:22.171 [initandlisten] connection accepted from 10.28.45.224:64549 #22 (16 connections now open)
m31100| Wed Dec 12 22:24:22.187 [cleanupOldData-50c94a66f61562284ec141b0] waiting to remove documents for test.remove2 from { i: 2.0 } -> { i: 4.0 }
m31100| Wed Dec 12 22:24:22.187 [cleanupOldData-50c94a66f61562284ec141b0] moveChunk starting delete for: test.remove2 from { i: 2.0 } -> { i: 4.0 }
m31100| Wed Dec 12 22:24:22.187 [initandlisten] connection accepted from 10.28.45.224:64550 #22 (15 connections now open)
m31100| Wed Dec 12 22:24:22.187 [cleanupOldData-50c94a66f61562284ec141b0] moveChunk deleted 60 documents for test.remove2 from { i: 2.0 } -> { i: 4.0 }
m29000| Wed Dec 12 22:24:22.280 [conn8] timeoutMs not support for v8 yet code: $reduce = function ( doc , out ){ out.nChunks++; }
m29000| in gc
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
    { "_id" : "remove2-rs0", "host" : "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101" }
    { "_id" : "remove2-rs1", "host" : "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "remove2-rs0" }
      test.remove2
        shard key: { "i" : 1 }
        chunks:
          remove2-rs1  4
          remove2-rs0  4
        { "i" : { "$MinKey" : true } } -->> { "i" : 0 } on : remove2-rs1 { "t" : 2000, "i" : 0 }
        { "i" : 0 } -->> { "i" : 1 } on : remove2-rs1 { "t" : 3000, "i" : 0 }
        { "i" : 1 } -->> { "i" : 2 } on : remove2-rs1 { "t" : 4000, "i" : 0 }
        { "i" : 2 } -->> { "i" : 4 } on : remove2-rs1 { "t" : 5000, "i" : 0 }
        { "i" : 4 } -->> { "i" : 6 } on : remove2-rs0 { "t" : 5000, "i" : 1 }
        { "i" : 6 } -->> { "i" : 7 } on : remove2-rs0 { "t" : 1000, "i" : 13 }
        { "i" : 7 } -->> { "i" : 9 } on : remove2-rs0 { "t" : 1000, "i" : 14 }
        { "i" : 9 } -->> { "i" : { "$MaxKey" : true } } on : remove2-rs0 { "t" : 1000, "i" : 4 }
---- Attempting to remove shard and add it back in ----
Removing shard with name: remove2-rs1
m30999| Wed Dec 12 22:24:22.280 [conn1] going to start draining shard: remove2-rs1
m30999| primaryLocalDoc: { _id: "local", primary: "remove2-rs1" }
{ "msg" : "draining started successfully", "state" : "started", "shard" : "remove2-rs1", "ok" : 1 }
{ "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(4), "dbs" : NumberLong(0) }, "ok" : 1 }
m31200| Wed Dec 12 22:24:22.327 [conn3] end connection 10.28.45.224:64468 (15 connections now open)
m31200| Wed Dec 12 22:24:22.327 [initandlisten] connection accepted from 10.28.45.224:64551 #23 (16 connections now open)
{ "msg" :
"draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(4), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(4), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(4), "dbs" : NumberLong(0) }, "ok" : 1 } m31101| Wed Dec 12 22:24:23.076 [initandlisten] connection accepted from 10.28.45.224:64552 #11 (9 connections now open) m31201| Wed Dec 12 22:24:23.076 [initandlisten] connection accepted from 10.28.45.224:64553 #11 (9 connections now open) { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(4), "dbs" : NumberLong(0) }, "ok" : 1 } m30999| Wed Dec 12 22:24:23.169 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' acquired, ts : 50c94a674a44fbeaa68cfa74 m30999| Wed Dec 12 22:24:23.169 [Balancer] going to move { _id: "test.remove2-i_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs1" } from remove2-rs1() to remove2-rs0 m30999| Wed Dec 12 22:24:23.169 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2shard: remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201lastmod: 2|0||000000000000000000000000min: { i: MinKey }max: { i: 0.0 }) remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 -> remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 m31200| Wed Dec 12 22:24:23.169 [conn14] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", to: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "AMAZONA-DFVK11N:29000", secondaryThrottle: false, waitForDelete: false } m29000| Wed Dec 12 22:24:23.169 [initandlisten] connection accepted from 10.28.45.224:64554 #11 (11 connections now open) m31200| Wed Dec 12 22:24:23.169 [LockPinger] creating distributed lock ping thread for AMAZONA-DFVK11N:29000 and process AMAZONA-DFVK11N:31200:1355369063:41 (sleeping for 30000ms) m31200| Wed Dec 12 22:24:23.169 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31200:1355369063:41' acquired, ts : 50c94a670b1c0dbac1e1e67a m31200| Wed Dec 12 22:24:23.169 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:23-4", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369063169), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs1", to: "remove2-rs0" } } m31200| Wed Dec 12 22:24:23.169 [conn14] moveChunk request accepted at version 5|0||50c94a5d4a44fbeaa68cfa6f m31200| Wed Dec 12 22:24:23.169 [conn14] moveChunk number of documents: 0 m31100| Wed Dec 12 22:24:23.169 [migrateThread] Waiting for replication to catch up before entering critical section m31100| Wed Dec 12 22:24:23.169 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 } m31100| Wed Dec 12 22:24:23.169 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: MinKey } -> { i: 0.0 } m31200| Wed Dec 12 22:24:23.185 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: 
"remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:23.185 [conn14] moveChunk setting version to: 6|0||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:23.185 [initandlisten] connection accepted from 10.28.45.224:64556 #23 (16 connections now open) m31100| Wed Dec 12 22:24:23.185 [conn23] Waiting for commit to finish m31100| Wed Dec 12 22:24:23.201 [conn23] Waiting for commit to finish m31100| Wed Dec 12 22:24:23.201 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 } m31100| Wed Dec 12 22:24:23.201 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: MinKey } -> { i: 0.0 } m31100| Wed Dec 12 22:24:23.201 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:23-19", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1355369063201), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 20 } } m31200| Wed Dec 12 22:24:23.216 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m31200| Wed Dec 12 22:24:23.216 [conn14] moveChunk updating self version to: 6|1||50c94a5d4a44fbeaa68cfa6f through { i: 0.0 } -> { i: 1.0 } for collection 'test.remove2' m29000| Wed Dec 12 22:24:23.216 [initandlisten] connection accepted from 10.28.45.224:64558 #12 (12 connections now open) m31200| Wed Dec 12 22:24:23.216 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:23-5", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369063216), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs1", to: "remove2-rs0" } } m30999| Wed Dec 12 22:24:23.216 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 14 version: 6|1||50c94a5d4a44fbeaa68cfa6f based on: 5|1||50c94a5d4a44fbeaa68cfa6f m31200| Wed Dec 12 22:24:23.216 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31200| Wed Dec 12 22:24:23.216 [conn14] MigrateFromStatus::done Global lock acquired m30999| Wed Dec 12 22:24:23.216 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' unlocked. m31200| Wed Dec 12 22:24:23.216 [conn14] forking for cleanup of chunk data m31200| Wed Dec 12 22:24:23.216 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31200| Wed Dec 12 22:24:23.216 [conn14] MigrateFromStatus::done Global lock acquired m31200| Wed Dec 12 22:24:23.216 [cleanupOldData-50c94a670b1c0dbac1e1e67b] (start) waiting to cleanup test.remove2 from { i: MinKey } -> { i: 0.0 }, # cursors remaining: 0 m31200| Wed Dec 12 22:24:23.216 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31200:1355369063:41' unlocked. 
m31200| Wed Dec 12 22:24:23.216 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:23-6", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369063216), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 6: 0, step2 of 6: 4, step3 of 6: 0, step4 of 6: 6, step5 of 6: 32, step6 of 6: 0 } } m31200| Wed Dec 12 22:24:23.247 [cleanupOldData-50c94a670b1c0dbac1e1e67b] waiting to remove documents for test.remove2 from { i: MinKey } -> { i: 0.0 } m31200| Wed Dec 12 22:24:23.247 [cleanupOldData-50c94a670b1c0dbac1e1e67b] moveChunk starting delete for: test.remove2 from { i: MinKey } -> { i: 0.0 } m31200| Wed Dec 12 22:24:23.247 [cleanupOldData-50c94a670b1c0dbac1e1e67b] moveChunk deleted 0 documents for test.remove2 from { i: MinKey } -> { i: 0.0 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(3), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(3), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(3), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(3), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(3), "dbs" : NumberLong(0) }, "ok" : 1 } m30999| Wed Dec 12 22:24:24.230 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' acquired, ts : 50c94a684a44fbeaa68cfa75 m30999| Wed Dec 12 22:24:24.230 [Balancer] going to move { _id: "test.remove2-i_0.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f'), ns: "test.remove2", min: { i: 0.0 }, max: { i: 1.0 }, shard: "remove2-rs1" } from remove2-rs1() to remove2-rs0 m30999| Wed Dec 12 22:24:24.230 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2shard: remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201lastmod: 6|1||000000000000000000000000min: { i: 0.0 }max: { i: 1.0 }) remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 -> remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 m31200| Wed Dec 12 22:24:24.230 [conn14] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", to: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "AMAZONA-DFVK11N:29000", secondaryThrottle: false, waitForDelete: false } m31200| Wed Dec 12 22:24:24.230 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31200:1355369063:41' acquired, ts : 50c94a680b1c0dbac1e1e67c m31200| Wed Dec 12 22:24:24.230 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:24-7", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369064230), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs1", to: "remove2-rs0" } } m31200| Wed Dec 12 22:24:24.230 [conn14] moveChunk request accepted at version 6|1||50c94a5d4a44fbeaa68cfa6f m31200| Wed Dec 12 22:24:24.230 [conn14] moveChunk number of documents: 30 m31200| Wed Dec 12 22:24:24.246 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: 
"remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1.0 }, state: "clone", counts: { cloned: 19, clonedBytes: 312113, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Wed Dec 12 22:24:24.246 [migrateThread] Waiting for replication to catch up before entering critical section m31100| Wed Dec 12 22:24:24.246 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 } m31100| Wed Dec 12 22:24:24.246 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 0.0 } -> { i: 1.0 } m31200| Wed Dec 12 22:24:24.261 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1.0 }, state: "steady", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:24.261 [conn14] moveChunk setting version to: 7|0||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:24.261 [conn23] Waiting for commit to finish m31100| Wed Dec 12 22:24:24.277 [conn23] Waiting for commit to finish m31100| Wed Dec 12 22:24:24.277 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 } m31100| Wed Dec 12 22:24:24.277 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 0.0 } -> { i: 1.0 } m31100| Wed Dec 12 22:24:24.277 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:24-20", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1355369064277), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 28 } } m31200| Wed Dec 12 22:24:24.293 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1.0 }, state: "done", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } m31200| Wed Dec 12 22:24:24.293 [conn14] moveChunk updating self version to: 7|1||50c94a5d4a44fbeaa68cfa6f through { i: 1.0 } -> { i: 2.0 } for collection 'test.remove2' m31200| Wed Dec 12 22:24:24.293 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:24-8", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369064293), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs1", to: "remove2-rs0" } } m31200| Wed Dec 12 22:24:24.293 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31200| Wed Dec 12 22:24:24.293 [conn14] MigrateFromStatus::done Global lock acquired m31200| Wed Dec 12 22:24:24.293 [conn14] forking for cleanup of chunk data m31200| Wed Dec 12 22:24:24.293 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31200| Wed Dec 12 22:24:24.293 [conn14] MigrateFromStatus::done Global lock acquired m31200| Wed Dec 12 22:24:24.293 [cleanupOldData-50c94a680b1c0dbac1e1e67d] (start) waiting to cleanup test.remove2 from { i: 0.0 } -> { i: 1.0 }, # cursors remaining: 0 m31200| Wed Dec 12 22:24:24.293 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31200:1355369063:41' unlocked. 
m31200| Wed Dec 12 22:24:24.293 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:24-9", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369064293), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 24, step5 of 6: 31, step6 of 6: 0 } } m30999| Wed Dec 12 22:24:24.293 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 15 version: 7|1||50c94a5d4a44fbeaa68cfa6f based on: 6|1||50c94a5d4a44fbeaa68cfa6f m30999| Wed Dec 12 22:24:24.293 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' unlocked. { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(2), "dbs" : NumberLong(0) }, "ok" : 1 } m31200| Wed Dec 12 22:24:24.324 [cleanupOldData-50c94a680b1c0dbac1e1e67d] waiting to remove documents for test.remove2 from { i: 0.0 } -> { i: 1.0 } m31200| Wed Dec 12 22:24:24.324 [cleanupOldData-50c94a680b1c0dbac1e1e67d] moveChunk starting delete for: test.remove2 from { i: 0.0 } -> { i: 1.0 } m31200| Wed Dec 12 22:24:24.324 [cleanupOldData-50c94a680b1c0dbac1e1e67d] moveChunk deleted 30 documents for test.remove2 from { i: 0.0 } -> { i: 1.0 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(2), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(2), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(2), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(2), "dbs" : NumberLong(0) }, "ok" : 1 } m30999| Wed Dec 12 22:24:25.307 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' acquired, ts : 50c94a694a44fbeaa68cfa76 m30999| Wed Dec 12 22:24:25.307 [Balancer] going to move { _id: "test.remove2-i_1.0", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f'), ns: "test.remove2", min: { i: 1.0 }, max: { i: 2.0 }, shard: "remove2-rs1" } from remove2-rs1() to remove2-rs0 m30999| Wed Dec 12 22:24:25.307 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2shard: remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201lastmod: 7|1||000000000000000000000000min: { i: 1.0 }max: { i: 2.0 }) remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 -> remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 m31200| Wed Dec 12 22:24:25.307 [conn14] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", to: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 1.0 }, max: { i: 2.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "AMAZONA-DFVK11N:29000", secondaryThrottle: false, waitForDelete: false } m31200| Wed Dec 12 22:24:25.307 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31200:1355369063:41' acquired, ts : 50c94a690b1c0dbac1e1e67e m31200| Wed Dec 12 22:24:25.307 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:25-10", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369065307), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 2.0 }, from: "remove2-rs1", to: "remove2-rs0" } } 
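The moveChunk requests in this run carry maxChunkSizeBytes: 1048576, i.e. a 1 MB chunk size, which is why the test data is split into so many small chunks. The harness presumably configures this when it builds the cluster; outside the test, the same 1 MB setting could be written to config.settings, as in this sketch:

    // chunk size is stored in megabytes in the config database
    var config = new Mongo("localhost:30999").getDB("config");
    config.settings.save({ _id: "chunksize", value: 1 });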
m31200| Wed Dec 12 22:24:25.307 [conn14] moveChunk request accepted at version 7|1||50c94a5d4a44fbeaa68cfa6f m31200| Wed Dec 12 22:24:25.307 [conn14] moveChunk number of documents: 30 m31200| Wed Dec 12 22:24:25.322 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 1.0 }, max: { i: 2.0 }, shardKeyPattern: { i: 1.0 }, state: "clone", counts: { cloned: 10, clonedBytes: 164270, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(2), "dbs" : NumberLong(0) }, "ok" : 1 } m31100| Wed Dec 12 22:24:25.322 [migrateThread] Waiting for replication to catch up before entering critical section m31100| Wed Dec 12 22:24:25.322 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 2.0 } m31200| Wed Dec 12 22:24:25.338 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 1.0 }, max: { i: 2.0 }, shardKeyPattern: { i: 1.0 }, state: "catchup", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:25.353 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 1.0 }, max: { i: 2.0 }, shardKeyPattern: { i: 1.0 }, state: "catchup", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:25.369 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 1.0 }, max: { i: 2.0 }, shardKeyPattern: { i: 1.0 }, state: "catchup", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Wed Dec 12 22:24:25.369 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 1.0 } -> { i: 2.0 } m31200| Wed Dec 12 22:24:25.400 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 1.0 }, max: { i: 2.0 }, shardKeyPattern: { i: 1.0 }, state: "steady", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:25.400 [conn14] moveChunk setting version to: 8|0||50c94a5d4a44fbeaa68cfa6f m31100| Wed Dec 12 22:24:25.400 [conn23] Waiting for commit to finish m31100| Wed Dec 12 22:24:25.416 [conn23] Waiting for commit to finish m31100| Wed Dec 12 22:24:25.416 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 2.0 } m31100| Wed Dec 12 22:24:25.416 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 1.0 } -> { i: 2.0 } m31100| Wed Dec 12 22:24:25.416 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:25-21", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1355369065416), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 2.0 }, step1 of 5: 1, step2 of 5: 0, step3 of 5: 11, step4 of 5: 0, step5 of 5: 88 } } m31200| Wed Dec 12 22:24:25.431 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 1.0 }, max: { i: 2.0 }, 
shardKeyPattern: { i: 1.0 }, state: "done", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } m31200| Wed Dec 12 22:24:25.431 [conn14] moveChunk updating self version to: 8|1||50c94a5d4a44fbeaa68cfa6f through { i: 2.0 } -> { i: 4.0 } for collection 'test.remove2' m31200| Wed Dec 12 22:24:25.431 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:25-11", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369065431), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 2.0 }, from: "remove2-rs1", to: "remove2-rs0" } } m31200| Wed Dec 12 22:24:25.431 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31200| Wed Dec 12 22:24:25.431 [conn14] MigrateFromStatus::done Global lock acquired m31200| Wed Dec 12 22:24:25.431 [conn14] forking for cleanup of chunk data m31200| Wed Dec 12 22:24:25.431 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31200| Wed Dec 12 22:24:25.431 [conn14] MigrateFromStatus::done Global lock acquired m31200| Wed Dec 12 22:24:25.431 [cleanupOldData-50c94a690b1c0dbac1e1e67f] (start) waiting to cleanup test.remove2 from { i: 1.0 } -> { i: 2.0 }, # cursors remaining: 0 m31200| Wed Dec 12 22:24:25.431 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31200:1355369063:41' unlocked. m31200| Wed Dec 12 22:24:25.431 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:25-12", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369065431), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 2.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 85, step5 of 6: 32, step6 of 6: 0 } } m31200| Wed Dec 12 22:24:25.431 [conn14] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", to: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 1.0 }, max: { i: 2.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "AMAZONA-DFVK11N:29000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:22 r:93 w:12 reslen:37 122ms m30999| Wed Dec 12 22:24:25.431 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 16 version: 8|1||50c94a5d4a44fbeaa68cfa6f based on: 7|1||50c94a5d4a44fbeaa68cfa6f m30999| Wed Dec 12 22:24:25.431 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' unlocked. 
m31200| Wed Dec 12 22:24:25.463 [cleanupOldData-50c94a690b1c0dbac1e1e67f] waiting to remove documents for test.remove2 from { i: 1.0 } -> { i: 2.0 } m31200| Wed Dec 12 22:24:25.463 [cleanupOldData-50c94a690b1c0dbac1e1e67f] moveChunk starting delete for: test.remove2 from { i: 1.0 } -> { i: 2.0 } m31200| Wed Dec 12 22:24:25.463 [cleanupOldData-50c94a690b1c0dbac1e1e67f] moveChunk deleted 30 documents for test.remove2 from { i: 1.0 } -> { i: 2.0 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(1), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(1), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(1), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(1), "dbs" : NumberLong(0) }, "ok" : 1 } { "msg" : "draining ongoing", "state" : "ongoing", "remaining" : { "chunks" : NumberLong(1), "dbs" : NumberLong(0) }, "ok" : 1 } m30999| Wed Dec 12 22:24:26.445 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' acquired, ts : 50c94a6a4a44fbeaa68cfa77 m30999| Wed Dec 12 22:24:26.445 [Balancer] going to move { _id: "test.remove2-i_2.0", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('50c94a5d4a44fbeaa68cfa6f'), ns: "test.remove2", min: { i: 2.0 }, max: { i: 4.0 }, shard: "remove2-rs1" } from remove2-rs1() to remove2-rs0 m30999| Wed Dec 12 22:24:26.445 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2shard: remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201lastmod: 8|1||000000000000000000000000min: { i: 2.0 }max: { i: 4.0 }) remove2-rs1:remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 -> remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 m31200| Wed Dec 12 22:24:26.445 [conn14] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", to: "remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 2.0 }, max: { i: 4.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_2.0", configdb: "AMAZONA-DFVK11N:29000", secondaryThrottle: false, waitForDelete: false } m31100| Wed Dec 12 22:24:26.461 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Wed Dec 12 22:24:26.445 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31200:1355369063:41' acquired, ts : 50c94a6a0b1c0dbac1e1e680 m31200| Wed Dec 12 22:24:26.445 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:26-13", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369066445), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 2.0 }, max: { i: 4.0 }, from: "remove2-rs1", to: "remove2-rs0" } } m31200| Wed Dec 12 22:24:26.445 [conn14] moveChunk request accepted at version 8|1||50c94a5d4a44fbeaa68cfa6f m31200| Wed Dec 12 22:24:26.445 [conn14] moveChunk number of documents: 60 m31100| Wed Dec 12 22:24:26.461 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 2.0 } -> { i: 4.0 } m31100| Wed Dec 12 22:24:26.477 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 2.0 } -> { i: 4.0 } m31100| Wed Dec 12 22:24:26.492 [conn23] Waiting for commit to finish m31100| Wed Dec 12 22:24:26.508 [conn23] Waiting for commit to finish 
m31100| Wed Dec 12 22:24:26.508 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 2.0 } -> { i: 4.0 } m31100| Wed Dec 12 22:24:26.508 [migrateThread] migrate commit flushed to journal for 'test.remove2' { i: 2.0 } -> { i: 4.0 } m31200| Wed Dec 12 22:24:26.461 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 2.0 }, max: { i: 4.0 }, shardKeyPattern: { i: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:26.477 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 2.0 }, max: { i: 4.0 }, shardKeyPattern: { i: 1.0 }, state: "catchup", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:26.492 [conn14] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 2.0 }, max: { i: 4.0 }, shardKeyPattern: { i: 1.0 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Wed Dec 12 22:24:26.492 [conn14] moveChunk setting version to: 9|0||50c94a5d4a44fbeaa68cfa6f m31200| Wed Dec 12 22:24:26.523 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201", min: { i: 2.0 }, max: { i: 4.0 }, shardKeyPattern: { i: 1.0 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } m31200| Wed Dec 12 22:24:26.523 [conn14] moveChunk moved last chunk out for collection 'test.remove2' m31100| Wed Dec 12 22:24:26.508 [migrateThread] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:26-22", server: "AMAZONA-DFVK11N", clientAddr: ":27017", time: new Date(1355369066508), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 2.0 }, max: { i: 4.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 22, step4 of 5: 0, step5 of 5: 31 } } m31200| Wed Dec 12 22:24:26.523 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:26-14", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369066523), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 2.0 }, max: { i: 4.0 }, from: "remove2-rs1", to: "remove2-rs0" } } m31200| Wed Dec 12 22:24:26.523 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31200| Wed Dec 12 22:24:26.523 [conn14] MigrateFromStatus::done Global lock acquired m31200| Wed Dec 12 22:24:26.523 [conn14] forking for cleanup of chunk data m31200| Wed Dec 12 22:24:26.523 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31200| Wed Dec 12 22:24:26.523 [conn14] MigrateFromStatus::done Global lock acquired m31200| Wed Dec 12 22:24:26.523 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] (start) waiting to cleanup test.remove2 from { i: 2.0 } -> { i: 4.0 }, # cursors remaining: 0 m31200| Wed Dec 12 22:24:26.523 [conn14] distributed lock 'test.remove2/AMAZONA-DFVK11N:31200:1355369063:41' unlocked. 
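At this point the donor has logged "moveChunk moved last chunk out for collection 'test.remove2'", so remove2-rs1 should no longer own any chunks and the next removeShard call can complete. One way to confirm that from the config metadata, as a sketch:

    // count chunks of the collection still assigned to the draining shard
    var config = new Mongo("localhost:30999").getDB("config");
    print("chunks on remove2-rs1: " +
          config.chunks.count({ ns: "test.remove2", shard: "remove2-rs1" }));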
m31200| Wed Dec 12 22:24:26.523 [conn14] about to log metadata event: { _id: "AMAZONA-DFVK11N-2012-12-13T03:24:26-15", server: "AMAZONA-DFVK11N", clientAddr: "10.28.45.224:64516", time: new Date(1355369066523), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 2.0 }, max: { i: 4.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 39, step5 of 6: 33, step6 of 6: 0 } } m30999| Wed Dec 12 22:24:26.523 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 17 version: 9|0||50c94a5d4a44fbeaa68cfa6f based on: 8|1||50c94a5d4a44fbeaa68cfa6f m30999| Wed Dec 12 22:24:26.523 [Balancer] distributed lock 'balancer/AMAZONA-DFVK11N:30999:1355369052:41' unlocked. m30999| Wed Dec 12 22:24:26.539 [conn1] going to remove shard: remove2-rs1 m30999| Wed Dec 12 22:24:26.539 [conn1] deleting replica set monitor for: remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 m31200| Wed Dec 12 22:24:26.539 [conn14] end connection 10.28.45.224:64516 (15 connections now open) m31200| Wed Dec 12 22:24:26.539 [conn12] end connection 10.28.45.224:64512 (15 connections now open) m31201| Wed Dec 12 22:24:26.539 [conn6] end connection 10.28.45.224:64513 (8 connections now open) { "msg" : "removeshard completed successfully", "state" : "completed", "shard" : "remove2-rs1", "ok" : 1 } [ { "_id" : "balancer", "process" : "AMAZONA-DFVK11N:30999:1355369052:41", "state" : 0, "ts" : ObjectId("50c94a6a4a44fbeaa68cfa77"), "when" : ISODate("2012-12-13T03:24:26.445Z"), "who" : "AMAZONA-DFVK11N:30999:1355369052:41:Balancer:18467", "why" : "doing balance round" }, { "_id" : "test.remove2", "process" : "AMAZONA-DFVK11N:31200:1355369063:41", "state" : 0, "ts" : ObjectId("50c94a6a0b1c0dbac1e1e680"), "when" : ISODate("2012-12-13T03:24:26.445Z"), "who" : "AMAZONA-DFVK11N:31200:1355369063:41:conn14:18467", "why" : "migrate-{ i: 2.0 }" } ] m31200| Wed Dec 12 22:24:26.539 [conn1] dropDatabase test m31200| Wed Dec 12 22:24:26.555 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] waiting to remove documents for test.remove2 from { i: 2.0 } -> { i: 4.0 } m31200| Wed Dec 12 22:24:26.555 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] moveChunk starting delete for: test.remove2 from { i: 2.0 } -> { i: 4.0 } m31200| Wed Dec 12 22:24:26.976 [conn1] removeJournalFiles m31200| Wed Dec 12 22:24:26.976 [conn1] command test.$cmd command: { dropDatabase: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:436846 reslen:55 436ms Shard removed successfully Adding shard with seed: remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 m31201| Wed Dec 12 22:24:26.976 [repl writer worker 1] dropDatabase test m30999| Wed Dec 12 22:24:26.976 [conn1] starting new replica set monitor for replica set remove2-rs1 with seed of AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 m30999| Wed Dec 12 22:24:26.976 [conn1] successfully connected to seed AMAZONA-DFVK11N:31200 for replica set remove2-rs1 m31200| Wed Dec 12 22:24:26.976 [initandlisten] connection accepted from 10.28.45.224:64563 #24 (15 connections now open) m30999| Wed Dec 12 22:24:26.976 [conn1] changing hosts to { 0: "AMAZONA-DFVK11N:31200", 1: "AMAZONA-DFVK11N:31201" } from remove2-rs1/ m30999| Wed Dec 12 22:24:26.976 [conn1] trying to add new host AMAZONA-DFVK11N:31200 to replica set remove2-rs1 m30999| Wed Dec 12 22:24:26.976 [conn1] cannot connect to new host AMAZONA-DFVK11N:31200 to replica set remove2-rs1, err: m30999| Wed Dec 12 22:24:26.976 [conn1] trying to add new host AMAZONA-DFVK11N:31201 to replica set remove2-rs1 m31200| Wed Dec 12 22:24:26.976 [initandlisten] 
connection accepted from 10.28.45.224:64564 #25 (16 connections now open) m30999| Wed Dec 12 22:24:26.976 [conn1] cannot connect to new host AMAZONA-DFVK11N:31201 to replica set remove2-rs1, err: m31201| Wed Dec 12 22:24:26.976 [initandlisten] connection accepted from 10.28.45.224:64565 #12 (9 connections now open) m31200| Wed Dec 12 22:24:26.976 [conn24] end connection 10.28.45.224:64563 (15 connections now open) m30999| Wed Dec 12 22:24:26.976 [conn1] Primary for replica set remove2-rs1 changed to AMAZONA-DFVK11N:31200 m30999| Wed Dec 12 22:24:26.976 [conn1] replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 m31200| Wed Dec 12 22:24:26.976 [initandlisten] connection accepted from 10.28.45.224:64566 #26 (16 connections now open) m31200| Wed Dec 12 22:24:27.007 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] test.remove2 Assertion failure nsd src\mongo\s\d_migrate.cpp 74 m30999| Wed Dec 12 22:24:27.007 [conn1] addshard request { addshard: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201" } failed: can't add shard remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 because a local database 'test' exists in another remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 First attempt to addShard failed, trying again m30999| Wed Dec 12 22:24:27.007 [conn1] addshard request { addshard: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201" } failed: can't add shard remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 because a local database 'test' exists in another remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101 Wed Dec 12 22:24:27.007 exec error: src/mongo/shell/shardingtest.js:558 command { "addshard" : "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201" } failed: { "ok" : 0, "errmsg" : "can't add shard remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201 because a local database 'test' exists in another remove2-rs0:remove2-rs0/AMAZONA-DFVK11N:31100,AMAZONA-DFVK11N:31101" } throw "command " + tojson( cmd ) + " failed: " + tojson( res ); ^ failed to load: D:\slave\Windows_64bit_2008+\mongo\jstests\sharding\remove2.js m29000| Wed Dec 12 22:24:27.007 [initandlisten] connection accepted from 127.0.0.1:64567 #13 (13 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn13] terminating, shutdown command received m29000| Wed Dec 12 22:24:27.007 dbexit: shutdown called m29000| Wed Dec 12 22:24:27.007 [conn13] shutdown: going to close listening sockets... m29000| Wed Dec 12 22:24:27.007 [conn13] closing listening socket: 424 m29000| Wed Dec 12 22:24:27.007 [conn13] closing listening socket: 436 m29000| Wed Dec 12 22:24:27.007 [conn13] shutdown: going to flush diaglog... m29000| Wed Dec 12 22:24:27.007 [conn13] shutdown: going to close sockets... m29000| Wed Dec 12 22:24:27.007 [conn13] shutdown: waiting for fs preallocator... m29000| Wed Dec 12 22:24:27.007 [conn13] shutdown: lock for final commit... m29000| Wed Dec 12 22:24:27.007 [conn13] shutdown: final commit... 
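This is where the test actually fails: removeShard reported "removeshard completed successfully" and the leftover 'test' database was dropped on m31200 (the dropDatabase at 22:24:26.539), yet both immediate addShard retries are rejected because mongos still sees a 'test' database on the candidate shard while 'test' is homed on remove2-rs0, apparently racing with the cleanupOldData thread that asserts in d_migrate.cpp:74 at the same moment. A sketch of the manual recovery the error message points at (not what remove2.js itself does): make sure the removed replica set holds no user databases, then retry the addShard through mongos.

    // connect straight to the removed replica set's primary and clear leftovers
    var shard = new Mongo("AMAZONA-DFVK11N:31200");
    shard.getDB("test").dropDatabase();
    printjson(shard.getDB("admin").runCommand({ listDatabases: 1 }));

    // then re-add the shard via mongos
    var admin = new Mongo("localhost:30999").getDB("admin");
    printjson(admin.runCommand({
        addShard: "remove2-rs1/AMAZONA-DFVK11N:31200,AMAZONA-DFVK11N:31201"
    }));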
m29000| Wed Dec 12 22:24:27.007 [conn3] end connection 10.28.45.224:64501 (12 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn4] end connection 10.28.45.224:64502 (12 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn2] end connection 10.28.45.224:64497 (12 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn1] end connection 127.0.0.1:64496 (12 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn5] end connection 10.28.45.224:64504 (12 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn7] end connection 10.28.45.224:64523 (12 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn8] end connection 10.28.45.224:64526 (12 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn6] end connection 10.28.45.224:64520 (12 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn9] end connection 10.28.45.224:64544 (12 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn10] end connection 10.28.45.224:64545 (12 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn11] end connection 10.28.45.224:64554 (12 connections now open) m29000| Wed Dec 12 22:24:27.007 [conn12] end connection 10.28.45.224:64558 (11 connections now open) Wed Dec 12 22:24:27.007 DBClientCursor::init call() failed m29000| Wed Dec 12 22:24:27.381 [conn13] shutdown: closing all files... m29000| Wed Dec 12 22:24:27.381 [conn13] closeAllFiles() finished m29000| Wed Dec 12 22:24:27.381 [conn13] journalCleanup... m29000| Wed Dec 12 22:24:27.381 [conn13] removeJournalFiles m29000| Wed Dec 12 22:24:27.381 [conn13] shutdown: removing fs lock... m29000| Wed Dec 12 22:24:27.381 dbexit: really exiting now m30999| Wed Dec 12 22:24:27.537 [Balancer] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 
10.28.45.224:29000 m30999| Wed Dec 12 22:24:27.537 [Balancer] Detecting bad connection created at 0 microSec, clearing pool for AMAZONA-DFVK11N:29000 m30999| Wed Dec 12 22:24:27.537 [Balancer] caught exception while doing balance: socket exception [SEND_ERROR] for 10.28.45.224:29000 m31201| Wed Dec 12 22:24:27.818 [repl writer worker 1] removeJournalFiles m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe ...\src\mongo\util\stacktrace.cpp(182) mongo::printStackTrace+0x3e m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe ...\src\mongo\util\assert_util.cpp(109) mongo::verifyFailed+0xdc m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe ...\src\mongo\s\d_migrate.cpp(74) mongo::findShardKeyIndexPattern_locked+0x247 m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe ...\src\mongo\s\d_migrate.cpp(83) mongo::findShardKeyIndexPattern_unlocked+0x4c m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe ...\src\mongo\s\d_migrate.cpp(216) mongo::OldDataCleanup::doRemove+0x1b2 m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe ...\src\mongo\s\d_migrate.cpp(667) mongo::MigrateFromStatus::doRemove+0xa1 m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe ...\src\mongo\s\d_migrate.cpp(772) mongo::_cleanupOldData+0xd8c m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe ...\src\mongo\s\d_migrate.cpp(777) mongo::cleanupOldData+0x44 m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe ...\src\third_party\boost\boost\thread\detail\thread.hpp(63) boost::detail::thread_data > > >::run+0x20 m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe ...\src\third_party\boost\libs\thread\src\win32\thread.cpp(180) boost::`anonymous namespace'::thread_start_function+0x21 m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe f:\dd\vctools\crt_bld\self_64_amd64\crt\src\threadex.c(314) _callthreadstartex+0x17 m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] mongod.exe f:\dd\vctools\crt_bld\self_64_amd64\crt\src\threadex.c(292) _threadstartex+0x7f m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] kernel32.dll BaseThreadInitThunk+0xd m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] error cleaning old data:assertion src\mongo\s\d_migrate.cpp:74 m31200| Wed Dec 12 22:24:27.849 [cleanupOldData-50c94a6a0b1c0dbac1e1e681] Client::shutdown not called: cleanupOldData-50c94a6a0b1c0dbac1e1e681 m30999| Wed Dec 12 22:24:28.021 [mongosMain] connection accepted from 127.0.0.1:64568 #3 (3 connections now open) m30999| Wed Dec 12 22:24:28.021 [conn3] terminating, shutdown command received m30999| Wed Dec 12 22:24:28.021 [conn3] dbexit: shutdown called rc:0 shutdown called Wed Dec 12 22:24:28.021 Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 
127.0.0.1:30999 m31100| Wed Dec 12 22:24:28.021 [conn14] end connection 10.28.45.224:64510 (15 connections now open) m31100| Wed Dec 12 22:24:28.021 [conn12] end connection 10.28.45.224:64506 (15 connections now open) m31200| Wed Dec 12 22:24:28.021 [conn26] end connection 10.28.45.224:64566 (15 connections now open) Wed Dec 12 22:24:28.021 SocketException: remote: 127.0.0.1:30999 error: 9001 socket exception [1] server [127.0.0.1:30999] Wed Dec 12 22:24:28.021 DBClientCursor::init call() failed m31100| Wed Dec 12 22:24:28.021 [conn16] end connection 10.28.45.224:64519 (13 connections now open) m31200| Wed Dec 12 22:24:28.021 [conn15] end connection 10.28.45.224:64521 (14 connections now open) m31101| Wed Dec 12 22:24:28.021 [conn6] end connection 10.28.45.224:64507 (8 connections now open) m31100| Wed Dec 12 22:24:28.021 [conn15] end connection 10.28.45.224:64518 (12 connections now open) m31200| Wed Dec 12 22:24:28.021 [conn22] end connection 10.28.45.224:64549 (13 connections now open) m31200| Wed Dec 12 22:24:28.021 [conn16] end connection 10.28.45.224:64522 (13 connections now open) m31100| Wed Dec 12 22:24:28.021 [conn22] end connection 10.28.45.224:64550 (11 connections now open) m31200| Wed Dec 12 22:24:28.021 [conn25] end connection 10.28.45.224:64564 (11 connections now open) m31100| Wed Dec 12 22:24:28.021 [initandlisten] connection accepted from 127.0.0.1:64569 #24 (12 connections now open) m31101| Wed Dec 12 22:24:28.021 [conn11] end connection 10.28.45.224:64552 (7 connections now open) m31201| Wed Dec 12 22:24:28.021 [conn12] end connection 10.28.45.224:64565 (8 connections now open) m31100| Wed Dec 12 22:24:28.021 [conn24] terminating, shutdown command received m31201| Wed Dec 12 22:24:28.021 [conn11] end connection 10.28.45.224:64553 (7 connections now open) m31100| Wed Dec 12 22:24:28.021 dbexit: shutdown called m31100| Wed Dec 12 22:24:28.021 [conn24] shutdown: going to close listening sockets... m31100| Wed Dec 12 22:24:28.021 [conn24] closing listening socket: 512 m31100| Wed Dec 12 22:24:28.021 [conn24] closing listening socket: 520 m31100| Wed Dec 12 22:24:28.021 [conn24] shutdown: going to flush diaglog... m31100| Wed Dec 12 22:24:28.021 [conn24] shutdown: going to close sockets... m31100| Wed Dec 12 22:24:28.021 [conn24] shutdown: waiting for fs preallocator... m31100| Wed Dec 12 22:24:28.021 [conn24] shutdown: lock for final commit... m31100| Wed Dec 12 22:24:28.021 [conn24] shutdown: final commit... 
m31100| Wed Dec 12 22:24:28.021 [conn9] end connection 10.28.45.224:64481 (11 connections now open) m31100| Wed Dec 12 22:24:28.021 [conn10] end connection 10.28.45.224:64483 (11 connections now open) m31100| Wed Dec 12 22:24:28.021 [conn21] end connection 10.28.45.224:64541 (11 connections now open) m31100| Wed Dec 12 22:24:28.021 [conn19] end connection 10.28.45.224:64537 (11 connections now open) m31100| Wed Dec 12 22:24:28.021 [conn23] end connection 10.28.45.224:64556 (11 connections now open) m31100| Wed Dec 12 22:24:28.021 [conn20] end connection 10.28.45.224:64539 (11 connections now open) m31200| Wed Dec 12 22:24:28.021 [conn21] end connection 10.28.45.224:64543 (10 connections now open) m31101| Wed Dec 12 22:24:28.021 [conn8] end connection 10.28.45.224:64527 (6 connections now open) Wed Dec 12 22:24:28.021 DBClientCursor::init call() failed m31100| Wed Dec 12 22:24:28.021 [conn1] end connection 127.0.0.1:64454 (11 connections now open) m31101| Wed Dec 12 22:24:28.021 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: AMAZONA-DFVK11N:31100 m31200| Wed Dec 12 22:24:28.021 [conn20] end connection 10.28.45.224:64535 (9 connections now open) m31201| Wed Dec 12 22:24:28.021 [conn9] end connection 10.28.45.224:64534 (6 connections now open) m31201| Wed Dec 12 22:24:28.021 [conn8] end connection 10.28.45.224:64532 (5 connections now open) m31101| Wed Dec 12 22:24:28.021 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: AMAZONA-DFVK11N:31100 m31200| Wed Dec 12 22:24:28.021 [conn19] end connection 10.28.45.224:64533 (8 connections now open) m31200| Wed Dec 12 22:24:28.021 [conn18] end connection 10.28.45.224:64531 (7 connections now open) m31100| Wed Dec 12 22:24:28.021 [conn17] end connection 10.28.45.224:64528 (11 connections now open) m31100| Wed Dec 12 22:24:28.286 [conn24] shutdown: closing all files... m31100| Wed Dec 12 22:24:28.286 [conn24] closeAllFiles() finished m31100| Wed Dec 12 22:24:28.286 [conn24] journalCleanup... m31100| Wed Dec 12 22:24:28.286 [conn24] removeJournalFiles m31100| Wed Dec 12 22:24:28.286 [conn24] shutdown: removing fs lock... m31100| Wed Dec 12 22:24:28.286 dbexit: really exiting now m31200| Wed Dec 12 22:24:28.754 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.28.45.224:31100 m31200| Wed Dec 12 22:24:28.754 [ReplicaSetMonitorWatcher] trying reconnect to AMAZONA-DFVK11N:31100 m31101| Wed Dec 12 22:24:29.097 [initandlisten] connection accepted from 127.0.0.1:64573 #12 (7 connections now open) m31101| Wed Dec 12 22:24:29.097 [conn12] terminating, shutdown command received m31101| Wed Dec 12 22:24:29.097 dbexit: shutdown called m31101| Wed Dec 12 22:24:29.097 [conn12] shutdown: going to close listening sockets... m31101| Wed Dec 12 22:24:29.097 [conn12] closing listening socket: 592 m31101| Wed Dec 12 22:24:29.097 [conn12] closing listening socket: 596 m31101| Wed Dec 12 22:24:29.097 [conn12] shutdown: going to flush diaglog... m31101| Wed Dec 12 22:24:29.097 [conn12] shutdown: going to close sockets... m31101| Wed Dec 12 22:24:29.097 [conn12] shutdown: waiting for fs preallocator... m31101| Wed Dec 12 22:24:29.097 [conn12] shutdown: lock for final commit... m31101| Wed Dec 12 22:24:29.097 [conn12] shutdown: final commit... 
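The remaining output is the harness tearing the cluster down: each mongod and the mongos receive a shutdown command ("terminating, shutdown command received"), and the shell connections talking to them fail with "DBClientCursor::init call() failed" as the sockets drop. A minimal sketch of sending that command to one node from this run:

    // the connection is expected to die as the server exits, hence the try/catch
    var admin = new Mongo("localhost:31100").getDB("admin");
    try {
        admin.runCommand({ shutdown: 1 });   // a replica set primary may need force: true
    } catch (e) {
        print("shutdown sent: " + e);
    }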
m31101| Wed Dec 12 22:24:29.097 [conn1] end connection 127.0.0.1:64455 (6 connections now open) Wed Dec 12 22:24:29.097 DBClientCursor::init call() failed m31101| Wed Dec 12 22:24:29.097 [conn4] end connection 10.28.45.224:64482 (6 connections now open) m31101| Wed Dec 12 22:24:29.097 [conn5] end connection 10.28.45.224:64484 (6 connections now open) m31101| Wed Dec 12 22:24:29.097 [conn10] end connection 10.28.45.224:64540 (6 connections now open) m31101| Wed Dec 12 22:24:29.097 [conn9] end connection 10.28.45.224:64538 (5 connections now open) m31101| Wed Dec 12 22:24:29.456 [conn12] shutdown: closing all files... m31101| Wed Dec 12 22:24:29.456 [conn12] closeAllFiles() finished m31101| Wed Dec 12 22:24:29.456 [conn12] journalCleanup... m31101| Wed Dec 12 22:24:29.456 [conn12] removeJournalFiles m31101| Wed Dec 12 22:24:29.456 [conn12] shutdown: removing fs lock... m31101| Wed Dec 12 22:24:29.456 dbexit: really exiting now m31200| Wed Dec 12 22:24:29.768 [ReplicaSetMonitorWatcher] reconnect AMAZONA-DFVK11N:31100 failed couldn't connect to server AMAZONA-DFVK11N:31100 m31200| Wed Dec 12 22:24:29.768 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.28.45.224:31101 m31200| Wed Dec 12 22:24:30.111 [initandlisten] connection accepted from 127.0.0.1:64574 #27 (8 connections now open) m31200| Wed Dec 12 22:24:30.111 [conn27] terminating, shutdown command received m31200| Wed Dec 12 22:24:30.111 dbexit: shutdown called m31200| Wed Dec 12 22:24:30.111 [conn27] shutdown: going to close listening sockets... m31200| Wed Dec 12 22:24:30.111 [conn27] closing listening socket: 596 m31200| Wed Dec 12 22:24:30.111 [conn27] closing listening socket: 600 m31200| Wed Dec 12 22:24:30.111 [conn27] shutdown: going to flush diaglog... m31200| Wed Dec 12 22:24:30.111 [conn27] shutdown: going to close sockets... m31200| Wed Dec 12 22:24:30.111 [conn27] shutdown: waiting for fs preallocator... m31200| Wed Dec 12 22:24:30.111 [conn27] shutdown: lock for final commit... m31200| Wed Dec 12 22:24:30.111 [conn27] shutdown: final commit... m31201| Wed Dec 12 22:24:30.111 [conn10] end connection 10.28.45.224:64546 (4 connections now open) m31200| Wed Dec 12 22:24:30.111 [conn1] end connection 127.0.0.1:64460 (7 connections now open) m31200| Wed Dec 12 22:24:30.111 [conn23] end connection 10.28.45.224:64551 (7 connections now open) m31200| Wed Dec 12 22:24:30.111 [conn9] end connection 10.28.45.224:64492 (7 connections now open) Wed Dec 12 22:24:30.111 DBClientCursor::init call() failed m31200| Wed Dec 12 22:24:30.111 [conn10] end connection 10.28.45.224:64494 (7 connections now open) m31201| Wed Dec 12 22:24:30.111 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: AMAZONA-DFVK11N:31200 m31201| Wed Dec 12 22:24:30.111 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: AMAZONA-DFVK11N:31200 m31200| Wed Dec 12 22:24:30.111 [conn27] shutdown: closing all files... m31200| Wed Dec 12 22:24:30.111 [conn27] closeAllFiles() finished m31200| Wed Dec 12 22:24:30.111 [conn27] journalCleanup... m31200| Wed Dec 12 22:24:30.111 [conn27] removeJournalFiles m31200| Wed Dec 12 22:24:30.111 [conn27] shutdown: removing fs lock... 
m31200| Wed Dec 12 22:24:30.111 dbexit: really exiting now m31201| Wed Dec 12 22:24:30.377 [rsHealthPoll] replset info AMAZONA-DFVK11N:31200 heartbeat failed, retrying m31201| Wed Dec 12 22:24:31.126 [initandlisten] connection accepted from 127.0.0.1:64578 #13 (5 connections now open) m31201| Wed Dec 12 22:24:31.126 [conn13] terminating, shutdown command received m31201| Wed Dec 12 22:24:31.126 dbexit: shutdown called m31201| Wed Dec 12 22:24:31.126 [conn13] shutdown: going to close listening sockets... m31201| Wed Dec 12 22:24:31.126 [conn13] closing listening socket: 572 m31201| Wed Dec 12 22:24:31.126 [conn13] closing listening socket: 592 m31201| Wed Dec 12 22:24:31.126 [conn13] shutdown: going to flush diaglog... m31201| Wed Dec 12 22:24:31.126 [conn13] shutdown: going to close sockets... m31201| Wed Dec 12 22:24:31.126 [conn13] shutdown: waiting for fs preallocator... m31201| Wed Dec 12 22:24:31.126 [conn13] shutdown: lock for final commit... m31201| Wed Dec 12 22:24:31.126 [conn13] shutdown: final commit... m31201| Wed Dec 12 22:24:31.126 [conn1] end connection 127.0.0.1:64461 (4 connections now open) m31201| Wed Dec 12 22:24:31.126 [conn4] end connection 10.28.45.224:64493 (4 connections now open) m31201| Wed Dec 12 22:24:31.126 [conn5] end connection 10.28.45.224:64495 (4 connections now open) Wed Dec 12 22:24:31.126 DBClientCursor::init call() failed