scons   --release smokeFailingTests
 in dir C:\10gen\buildslaves\mongo\Windows_32bit\mongo (timeout 10800 secs)
 watching logfiles {}
 argv: scons   --release smokeFailingTests
 environment:
  ALLUSERSPROFILE=C:\ProgramData
  APPDATA=C:\Users\Administrator\AppData\Roaming
  BB_BUILDSLAVE="c:\Python27\Scripts\buildslave" 
  BB_PYTHON="c:\Python27\Scripts\..\python"
  CLIENTNAME=dcrosta
  COMMONPROGRAMFILES=C:\Program Files\Common Files
  COMPUTERNAME=IP-0A420969
  COMSPEC=C:\Windows\system32\cmd.exe
  DFSTRACINGON=FALSE
  FP_NO_HOST_CHECK=NO
  HOMEDRIVE=C:
  HOMEPATH=\Users\Administrator
  LOCALAPPDATA=C:\Users\Administrator\AppData\Local
  LOGONSERVER=\\IP-0A420969
  MONGO_BRANCH=master
  MONGO_BUILDER_NAME=Windows 32-bit
  MONGO_BUILD_NUMBER=5126
  MONGO_GIT_HASH=163a2d64ee88f7a4efb604f6208578ef117c4bc3
  MONGO_PHASE=recent failures
  MONGO_SLAVE_NAME=windows
  MONGO_USE_BUILDLOGGER=false
  OS=Windows_NT
  PATH=C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Git\cmd;c:\Program Files\Microsoft SQL Server\100\Tools\Binn\;c:\Program Files\Microsoft SQL Server\100\DTS\Binn\;c:\python27\scripts;c:\python27;c:\program files\microsoft visual studio 10.0\VC;c:\cygwin\bin
  PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
  PROCESSOR_ARCHITECTURE=x86
  PROCESSOR_IDENTIFIER=x86 Family 6 Model 26 Stepping 5, GenuineIntel
  PROCESSOR_LEVEL=6
  PROCESSOR_REVISION=1a05
  PROGRAMDATA=C:\ProgramData
  PROGRAMFILES=C:\Program Files
  PROMPT=$P$G
  PSMODULEPATH=C:\Windows\system32\WindowsPowerShell\v1.0\Modules\
  PUBLIC=C:\Users\Public
  PWD=C:\10gen\buildslaves\mongo\Windows_32bit\mongo
  PYTHON_EGG_CACHE=C:\egg
  SESSIONNAME=RDP-Tcp#0
  SYSTEMDRIVE=C:
  SYSTEMROOT=C:\Windows
  TEMP=C:\Users\ADMINI~1\AppData\Local\Temp\2
  TMP=C:\Users\ADMINI~1\AppData\Local\Temp\2
  TRACE_FORMAT_SEARCH_PATH=\\winseqfe\release\Windows6.0\lh_sp2rtm\6002.18005.090410-1830\x86fre\symbols.pri\TraceFormat
  USERDOMAIN=IP-0A420969
  USERNAME=Administrator
  USERPROFILE=C:\Users\Administrator
  VS100COMNTOOLS=C:\Program Files\Microsoft Visual Studio 10.0\Common7\Tools\
  WINDIR=C:\Windows
 using PTY: False
scons: Reading SConscript files ...
scons version: 2.1.0
python version: 2 7 2 'final' 0
found visual studio at C:\Program Files\Microsoft Visual Studio 10.0\VC\BIN
Windows SDK Root 'C:/Program Files/Microsoft SDKs/Windows/v7.0A'
Checking whether the C++ compiler works(cached) yes
Checking for C header file unistd.h... (cached) no
Checking whether clock_gettime is declared... (cached) no
Checking for C++ header file execinfo.h... (cached) no
Checking for C library pcap... (cached) no
Checking for C library wpcap... (cached) no
scons: done reading SConscript files.
scons: Building targets ...
generate_buildinfo(["build\buildinfo.cpp"], ['\n#include <string>\n#include <boost/version.hpp>\n\n#include "mongo/util/version.h"\n\nnamespace mongo {\n    const char * gitVersion() { return "%(git_version)s"; }\n    std::string sysInfo() { return "%(sys_info)s BOOST_LIB_VERSION=" BOOST_LIB_VERSION ; }\n}  // namespace mongo\n'])
c:\Python27\Scripts\..\python.exe C:\10gen\buildslaves\mongo\Windows_32bit\mongo\buildscripts\smoke.py failingTests --only-old-fails --continue-on-failure
cwd [C:\10gen\buildslaves\mongo\Windows_32bit\mongo]
Wed Jun 13 10:31:43 
Wed Jun 13 10:31:43 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
Wed Jun 13 10:31:43 
Wed Jun 13 10:31:43 [initandlisten] MongoDB starting : pid=2684 port=27999 dbpath=/data/db/sconsTests 32-bit host=ip-0A420969
Wed Jun 13 10:31:43 [initandlisten] 
Wed Jun 13 10:31:43 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
Wed Jun 13 10:31:43 [initandlisten] **       Not recommended for production.
Wed Jun 13 10:31:43 [initandlisten] 
Wed Jun 13 10:31:43 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
Wed Jun 13 10:31:43 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
Wed Jun 13 10:31:43 [initandlisten] **       with --journal, the limit is lower
Wed Jun 13 10:31:43 [initandlisten] 
Wed Jun 13 10:31:43 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
Wed Jun 13 10:31:43 [initandlisten] git version: 163a2d64ee88f7a4efb604f6208578ef117c4bc3
Wed Jun 13 10:31:43 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_49
Wed Jun 13 10:31:43 [initandlisten] options: { dbpath: "/data/db/sconsTests/", port: 27999 }
Wed Jun 13 10:31:43 [initandlisten] waiting for connections on port 27999
Wed Jun 13 10:31:43 [websvr] admin web console waiting for connections on port 28999
running C:\10gen\buildslaves\mongo\Windows_32bit\mongo\mongod.exe --port 27999 --dbpath /data/db/sconsTests/
 *******************************************
         Test : remove2.js ...
      Command : C:\10gen\buildslaves\mongo\Windows_32bit\mongo\mongo.exe --port 27999 --nodb C:\10gen\buildslaves\mongo\Windows_32bit\mongo\jstests\sharding\remove2.js --eval TestData = new Object();TestData.testPath = "C:\\10gen\\buildslaves\\mongo\\Windows_32bit\\mongo\\jstests\\sharding\\remove2.js";TestData.testFile = "remove2.js";TestData.testName = "remove2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
         Date : Wed Jun 13 10:31:45 2012
Wed Jun 13 10:31:45 [initandlisten] connection accepted from 127.0.0.1:58580 #1 (1 connection now open)
Wed Jun 13 10:31:45 [conn1] end connection 127.0.0.1:58580 (0 connections now open)
MongoDB shell version: 2.1.2-pre-
null
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101 ]	31100 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 31100,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "remove2-rs0",
	"dbpath" : "$set-$node",
	"useHostname" : true,
	"noJournalPrealloc" : undefined,
	"pathOpts" : {
		"testName" : "remove2",
		"shard" : 0,
		"node" : 0,
		"set" : "remove2-rs0"
	},
	"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs0-0'
Wed Jun 13 10:31:45 shell: started program mongod.exe --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet remove2-rs0 --dbpath /data/db/remove2-rs0-0
 m31100| note: noprealloc may hurt performance in many applications
 m31100| Wed Jun 13 10:31:45 
 m31100| Wed Jun 13 10:31:45 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
 m31100| Wed Jun 13 10:31:45 
 m31100| Wed Jun 13 10:31:45 [initandlisten] MongoDB starting : pid=876 port=31100 dbpath=/data/db/remove2-rs0-0 32-bit host=ip-0A420969
 m31100| Wed Jun 13 10:31:45 [initandlisten] 
 m31100| Wed Jun 13 10:31:45 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
 m31100| Wed Jun 13 10:31:45 [initandlisten] **       Not recommended for production.
 m31100| Wed Jun 13 10:31:45 [initandlisten] 
 m31100| Wed Jun 13 10:31:45 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
 m31100| Wed Jun 13 10:31:45 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
 m31100| Wed Jun 13 10:31:45 [initandlisten] **       with --journal, the limit is lower
 m31100| Wed Jun 13 10:31:45 [initandlisten] 
 m31100| Wed Jun 13 10:31:45 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
 m31100| Wed Jun 13 10:31:45 [initandlisten] git version: 163a2d64ee88f7a4efb604f6208578ef117c4bc3
 m31100| Wed Jun 13 10:31:45 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_49
 m31100| Wed Jun 13 10:31:45 [initandlisten] options: { dbpath: "/data/db/remove2-rs0-0", noprealloc: true, oplogSize: 40, port: 31100, replSet: "remove2-rs0", rest: true, smallfiles: true }
 m31100| Wed Jun 13 10:31:45 [initandlisten] waiting for connections on port 31100
 m31100| Wed Jun 13 10:31:45 [websvr] admin web console waiting for connections on port 32100
 m31100| Wed Jun 13 10:31:45 [initandlisten] connection accepted from 127.0.0.1:58582 #1 (1 connection now open)
 m31100| Wed Jun 13 10:31:45 [conn1] end connection 127.0.0.1:58582 (0 connections now open)
 m31100| Wed Jun 13 10:31:45 [initandlisten] connection accepted from 127.0.0.1:58583 #2 (1 connection now open)
 m31100| Wed Jun 13 10:31:45 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
 m31100| Wed Jun 13 10:31:45 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
 m31100| Wed Jun 13 10:31:45 [initandlisten] connection accepted from 127.0.0.1:58581 #3 (2 connections now open)
[ connection to ip-0A420969:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101 ]	31101 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 31101,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "remove2-rs0",
	"dbpath" : "$set-$node",
	"useHostname" : true,
	"noJournalPrealloc" : undefined,
	"pathOpts" : {
		"testName" : "remove2",
		"shard" : 0,
		"node" : 1,
		"set" : "remove2-rs0"
	},
	"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs0-1'
Wed Jun 13 10:31:46 shell: started program mongod.exe --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet remove2-rs0 --dbpath /data/db/remove2-rs0-1
 m31101| note: noprealloc may hurt performance in many applications
 m31101| Wed Jun 13 10:31:46 
 m31101| Wed Jun 13 10:31:46 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
 m31101| Wed Jun 13 10:31:46 
 m31101| Wed Jun 13 10:31:46 [initandlisten] MongoDB starting : pid=1788 port=31101 dbpath=/data/db/remove2-rs0-1 32-bit host=ip-0A420969
 m31101| Wed Jun 13 10:31:46 [initandlisten] 
 m31101| Wed Jun 13 10:31:46 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
 m31101| Wed Jun 13 10:31:46 [initandlisten] **       Not recommended for production.
 m31101| Wed Jun 13 10:31:46 [initandlisten] 
 m31101| Wed Jun 13 10:31:46 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
 m31101| Wed Jun 13 10:31:46 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
 m31101| Wed Jun 13 10:31:46 [initandlisten] **       with --journal, the limit is lower
 m31101| Wed Jun 13 10:31:46 [initandlisten] 
 m31101| Wed Jun 13 10:31:46 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
 m31101| Wed Jun 13 10:31:46 [initandlisten] git version: 163a2d64ee88f7a4efb604f6208578ef117c4bc3
 m31101| Wed Jun 13 10:31:46 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_49
 m31101| Wed Jun 13 10:31:46 [initandlisten] options: { dbpath: "/data/db/remove2-rs0-1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "remove2-rs0", rest: true, smallfiles: true }
 m31101| Wed Jun 13 10:31:46 [initandlisten] waiting for connections on port 31101
 m31101| Wed Jun 13 10:31:46 [websvr] admin web console waiting for connections on port 32101
 m31101| Wed Jun 13 10:31:46 [initandlisten] connection accepted from 127.0.0.1:58585 #1 (1 connection now open)
 m31101| Wed Jun 13 10:31:46 [conn1] end connection 127.0.0.1:58585 (0 connections now open)
 m31101| Wed Jun 13 10:31:46 [initandlisten] connection accepted from 127.0.0.1:58586 #2 (1 connection now open)
 m31101| Wed Jun 13 10:31:46 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
 m31101| Wed Jun 13 10:31:46 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
 m31101| Wed Jun 13 10:31:46 [initandlisten] connection accepted from 127.0.0.1:58584 #3 (2 connections now open)
[ connection to ip-0A420969:31100, connection to ip-0A420969:31101 ]
{
	"replSetInitiate" : {
		"_id" : "remove2-rs0",
		"members" : [
			{
				"_id" : 0,
				"host" : "ip-0A420969:31100"
			},
			{
				"_id" : 1,
				"host" : "ip-0A420969:31101"
			}
		]
	}
}
 m31100| Wed Jun 13 10:31:46 [conn3] replSet replSetInitiate admin command received from client
 m31100| Wed Jun 13 10:31:46 [conn3] replSet replSetInitiate config object parses ok, 2 members specified
 m31100| Wed Jun 13 10:31:46 [initandlisten] connection accepted from 10.66.9.105:58587 #4 (3 connections now open)
 m31100| Wed Jun 13 10:31:46 [conn4] end connection 10.66.9.105:58587 (2 connections now open)
 m31101| Wed Jun 13 10:31:46 [initandlisten] connection accepted from 10.66.9.105:58588 #4 (3 connections now open)
 m31100| Wed Jun 13 10:31:46 [conn3] replSet replSetInitiate all members seem up
 m31100| Wed Jun 13 10:31:46 [conn3] ******
 m31100| Wed Jun 13 10:31:46 [conn3] creating replication oplog of size: 40MB...
 m31100| Wed Jun 13 10:31:46 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0/local.ns, filling with zeroes...
 m31100| Wed Jun 13 10:31:46 [FileAllocator] creating directory /data/db/remove2-rs0-0/_tmp
 m31100| Wed Jun 13 10:31:46 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0/local.ns, size: 16MB,  took 0.062 secs
 m31100| Wed Jun 13 10:31:46 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0/local.0, filling with zeroes...
 m31100| Wed Jun 13 10:31:46 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0/local.0, size: 64MB,  took 0.23 secs
 m31100| Wed Jun 13 10:31:49 [conn3] ******
 m31100| Wed Jun 13 10:31:49 [conn3] replSet info saving a newer config version to local.system.replset
 m31100| Wed Jun 13 10:31:49 [conn3] replSet saveConfigLocally done
 m31100| Wed Jun 13 10:31:49 [conn3] replSet replSetInitiate config now saved locally.  Should come online in about a minute.
 m31100| Wed Jun 13 10:31:49 [conn3] command admin.$cmd command: { replSetInitiate: { _id: "remove2-rs0", members: [ { _id: 0.0, host: "ip-0A420969:31100" }, { _id: 1.0, host: "ip-0A420969:31101" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:2756225 w:61 reslen:112 2812ms
{
	"info" : "Config now saved locally.  Should come online in about a minute.",
	"ok" : 1
}
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201 ]	31200 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 31200,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "remove2-rs1",
	"dbpath" : "$set-$node",
	"useHostname" : true,
	"noJournalPrealloc" : undefined,
	"pathOpts" : {
		"testName" : "remove2",
		"shard" : 1,
		"node" : 0,
		"set" : "remove2-rs1"
	},
	"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs1-0'
Wed Jun 13 10:31:49 shell: started program mongod.exe --oplogSize 40 --port 31200 --noprealloc --smallfiles --rest --replSet remove2-rs1 --dbpath /data/db/remove2-rs1-0
 m31200| note: noprealloc may hurt performance in many applications
 m31200| Wed Jun 13 10:31:49 
 m31200| Wed Jun 13 10:31:49 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
 m31200| Wed Jun 13 10:31:49 
 m31200| Wed Jun 13 10:31:49 [initandlisten] MongoDB starting : pid=3824 port=31200 dbpath=/data/db/remove2-rs1-0 32-bit host=ip-0A420969
 m31200| Wed Jun 13 10:31:49 [initandlisten] 
 m31200| Wed Jun 13 10:31:49 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
 m31200| Wed Jun 13 10:31:49 [initandlisten] **       Not recommended for production.
 m31200| Wed Jun 13 10:31:49 [initandlisten] 
 m31200| Wed Jun 13 10:31:49 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
 m31200| Wed Jun 13 10:31:49 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
 m31200| Wed Jun 13 10:31:49 [initandlisten] **       with --journal, the limit is lower
 m31200| Wed Jun 13 10:31:49 [initandlisten] 
 m31200| Wed Jun 13 10:31:49 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
 m31200| Wed Jun 13 10:31:49 [initandlisten] git version: 163a2d64ee88f7a4efb604f6208578ef117c4bc3
 m31200| Wed Jun 13 10:31:49 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_49
 m31200| Wed Jun 13 10:31:49 [initandlisten] options: { dbpath: "/data/db/remove2-rs1-0", noprealloc: true, oplogSize: 40, port: 31200, replSet: "remove2-rs1", rest: true, smallfiles: true }
 m31200| Wed Jun 13 10:31:49 [initandlisten] waiting for connections on port 31200
 m31200| Wed Jun 13 10:31:49 [websvr] admin web console waiting for connections on port 32200
 m31200| Wed Jun 13 10:31:49 [initandlisten] connection accepted from 127.0.0.1:58590 #1 (1 connection now open)
 m31200| Wed Jun 13 10:31:49 [conn1] end connection 127.0.0.1:58590 (0 connections now open)
 m31200| Wed Jun 13 10:31:49 [initandlisten] connection accepted from 127.0.0.1:58591 #2 (1 connection now open)
 m31200| Wed Jun 13 10:31:49 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
 m31200| Wed Jun 13 10:31:49 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
 m31200| Wed Jun 13 10:31:49 [initandlisten] connection accepted from 127.0.0.1:58589 #3 (2 connections now open)
[ connection to ip-0A420969:31200 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201 ]	31201 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 31201,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "remove2-rs1",
	"dbpath" : "$set-$node",
	"useHostname" : true,
	"noJournalPrealloc" : undefined,
	"pathOpts" : {
		"testName" : "remove2",
		"shard" : 1,
		"node" : 1,
		"set" : "remove2-rs1"
	},
	"restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs1-1'
Wed Jun 13 10:31:49 shell: started program mongod.exe --oplogSize 40 --port 31201 --noprealloc --smallfiles --rest --replSet remove2-rs1 --dbpath /data/db/remove2-rs1-1
 m31201| note: noprealloc may hurt performance in many applications
 m31201| Wed Jun 13 10:31:49 
 m31201| Wed Jun 13 10:31:49 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
 m31201| Wed Jun 13 10:31:49 
 m31201| Wed Jun 13 10:31:49 [initandlisten] MongoDB starting : pid=3704 port=31201 dbpath=/data/db/remove2-rs1-1 32-bit host=ip-0A420969
 m31201| Wed Jun 13 10:31:49 [initandlisten] 
 m31201| Wed Jun 13 10:31:49 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
 m31201| Wed Jun 13 10:31:49 [initandlisten] **       Not recommended for production.
 m31201| Wed Jun 13 10:31:49 [initandlisten] 
 m31201| Wed Jun 13 10:31:49 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
 m31201| Wed Jun 13 10:31:49 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
 m31201| Wed Jun 13 10:31:49 [initandlisten] **       with --journal, the limit is lower
 m31201| Wed Jun 13 10:31:49 [initandlisten] 
 m31201| Wed Jun 13 10:31:49 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
 m31201| Wed Jun 13 10:31:49 [initandlisten] git version: 163a2d64ee88f7a4efb604f6208578ef117c4bc3
 m31201| Wed Jun 13 10:31:49 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_49
 m31201| Wed Jun 13 10:31:49 [initandlisten] options: { dbpath: "/data/db/remove2-rs1-1", noprealloc: true, oplogSize: 40, port: 31201, replSet: "remove2-rs1", rest: true, smallfiles: true }
 m31201| Wed Jun 13 10:31:49 [initandlisten] waiting for connections on port 31201
 m31201| Wed Jun 13 10:31:49 [websvr] admin web console waiting for connections on port 32201
 m31201| Wed Jun 13 10:31:49 [initandlisten] connection accepted from 127.0.0.1:58593 #1 (1 connection now open)
 m31201| Wed Jun 13 10:31:49 [conn1] end connection 127.0.0.1:58593 (0 connections now open)
 m31201| Wed Jun 13 10:31:49 [initandlisten] connection accepted from 127.0.0.1:58594 #2 (1 connection now open)
 m31201| Wed Jun 13 10:31:49 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
 m31201| Wed Jun 13 10:31:49 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
 m31201| Wed Jun 13 10:31:50 [initandlisten] connection accepted from 127.0.0.1:58592 #3 (2 connections now open)
[ connection to ip-0A420969:31200, connection to ip-0A420969:31201 ]
{
	"replSetInitiate" : {
		"_id" : "remove2-rs1",
		"members" : [
			{
				"_id" : 0,
				"host" : "ip-0A420969:31200"
			},
			{
				"_id" : 1,
				"host" : "ip-0A420969:31201"
			}
		]
	}
}
 m31200| Wed Jun 13 10:31:50 [conn3] replSet replSetInitiate admin command received from client
 m31200| Wed Jun 13 10:31:50 [conn3] replSet replSetInitiate config object parses ok, 2 members specified
 m31200| Wed Jun 13 10:31:50 [initandlisten] connection accepted from 10.66.9.105:58595 #4 (3 connections now open)
 m31200| Wed Jun 13 10:31:50 [conn4] end connection 10.66.9.105:58595 (2 connections now open)
 m31201| Wed Jun 13 10:31:50 [initandlisten] connection accepted from 10.66.9.105:58596 #4 (3 connections now open)
 m31200| Wed Jun 13 10:31:50 [conn3] replSet replSetInitiate all members seem up
 m31200| Wed Jun 13 10:31:50 [conn3] ******
 m31200| Wed Jun 13 10:31:50 [conn3] creating replication oplog of size: 40MB...
 m31200| Wed Jun 13 10:31:50 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/local.ns, filling with zeroes...
 m31200| Wed Jun 13 10:31:50 [FileAllocator] creating directory /data/db/remove2-rs1-0/_tmp
 m31200| Wed Jun 13 10:31:50 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/local.ns, size: 16MB,  took 0.057 secs
 m31200| Wed Jun 13 10:31:50 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/local.0, filling with zeroes...
 m31200| Wed Jun 13 10:31:50 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/local.0, size: 64MB,  took 0.552 secs
 m31200| Wed Jun 13 10:31:52 [conn3] ******
 m31200| Wed Jun 13 10:31:52 [conn3] replSet info saving a newer config version to local.system.replset
 m31200| Wed Jun 13 10:31:52 [conn3] replSet saveConfigLocally done
 m31200| Wed Jun 13 10:31:52 [conn3] replSet replSetInitiate config now saved locally.  Should come online in about a minute.
 m31200| Wed Jun 13 10:31:52 [conn3] command admin.$cmd command: { replSetInitiate: { _id: "remove2-rs1", members: [ { _id: 0.0, host: "ip-0A420969:31200" }, { _id: 1.0, host: "ip-0A420969:31201" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:2583699 w:48 reslen:112 2583ms
{
	"info" : "Config now saved locally.  Should come online in about a minute.",
	"ok" : 1
}
 m31100| Wed Jun 13 10:31:55 [rsStart] replSet I am ip-0A420969:31100
 m31100| Wed Jun 13 10:31:55 [rsStart] replSet STARTUP2
 m31100| Wed Jun 13 10:31:55 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
 m31100| Wed Jun 13 10:31:55 [rsHealthPoll] replSet member ip-0A420969:31101 is up
 m31100| Wed Jun 13 10:31:55 [rsSync] replSet SECONDARY
 m31100| Wed Jun 13 10:31:55 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
 m31101| Wed Jun 13 10:31:56 [rsStart] trying to contact ip-0A420969:31100
 m31100| Wed Jun 13 10:31:56 [initandlisten] connection accepted from 10.66.9.105:58597 #5 (3 connections now open)
 m31101| Wed Jun 13 10:31:56 [initandlisten] connection accepted from 10.66.9.105:58598 #5 (4 connections now open)
 m31101| Wed Jun 13 10:31:56 [rsStart] replSet I am ip-0A420969:31101
 m31101| Wed Jun 13 10:31:56 [conn5] end connection 10.66.9.105:58598 (3 connections now open)
 m31101| Wed Jun 13 10:31:56 [rsStart] replSet got config version 1 from a remote, saving locally
 m31101| Wed Jun 13 10:31:56 [rsStart] replSet info saving a newer config version to local.system.replset
 m31101| Wed Jun 13 10:31:56 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1/local.ns, filling with zeroes...
 m31101| Wed Jun 13 10:31:56 [FileAllocator] creating directory /data/db/remove2-rs0-1/_tmp
 m31101| Wed Jun 13 10:31:56 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1/local.ns, size: 16MB,  took 0.145 secs
 m31101| Wed Jun 13 10:31:56 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1/local.0, filling with zeroes...
 m31101| Wed Jun 13 10:31:56 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1/local.0, size: 16MB,  took 0.057 secs
 m31101| Wed Jun 13 10:31:57 [rsStart] replSet saveConfigLocally done
 m31101| Wed Jun 13 10:31:57 [rsStart] replSet STARTUP2
 m31101| Wed Jun 13 10:31:57 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
 m31101| Wed Jun 13 10:31:57 [rsSync] ******
 m31101| Wed Jun 13 10:31:57 [rsSync] creating replication oplog of size: 40MB...
 m31101| Wed Jun 13 10:31:57 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1/local.1, filling with zeroes...
 m31100| Wed Jun 13 10:31:57 [rsHealthPoll] replSet member ip-0A420969:31101 is now in state STARTUP2
 m31100| Wed Jun 13 10:31:57 [rsMgr] not electing self, ip-0A420969:31101 would veto
 m31101| Wed Jun 13 10:31:57 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1/local.1, size: 64MB,  took 0.319 secs
 m31101| Wed Jun 13 10:31:58 [rsHealthPoll] replSet member ip-0A420969:31100 is up
 m31101| Wed Jun 13 10:31:58 [rsHealthPoll] replSet member ip-0A420969:31100 is now in state SECONDARY
 m31200| Wed Jun 13 10:31:59 [rsStart] replSet I am ip-0A420969:31200
 m31200| Wed Jun 13 10:31:59 [rsStart] replSet STARTUP2
 m31200| Wed Jun 13 10:31:59 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
 m31200| Wed Jun 13 10:31:59 [rsHealthPoll] replSet member ip-0A420969:31201 is up
 m31200| Wed Jun 13 10:31:59 [rsSync] replSet SECONDARY
 m31101| Wed Jun 13 10:31:59 [rsSync] ******
 m31101| Wed Jun 13 10:31:59 [rsSync] replSet initial sync pending
 m31101| Wed Jun 13 10:31:59 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
 m31201| Wed Jun 13 10:31:59 [rsStart] trying to contact ip-0A420969:31200
 m31200| Wed Jun 13 10:31:59 [initandlisten] connection accepted from 10.66.9.105:58599 #5 (3 connections now open)
 m31201| Wed Jun 13 10:32:00 [initandlisten] connection accepted from 10.66.9.105:58600 #5 (4 connections now open)
 m31201| Wed Jun 13 10:32:00 [rsStart] replSet I am ip-0A420969:31201
 m31201| Wed Jun 13 10:32:00 [conn5] end connection 10.66.9.105:58600 (3 connections now open)
 m31201| Wed Jun 13 10:32:00 [rsStart] replSet got config version 1 from a remote, saving locally
 m31201| Wed Jun 13 10:32:00 [rsStart] replSet info saving a newer config version to local.system.replset
 m31201| Wed Jun 13 10:32:00 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/local.ns, filling with zeroes...
 m31201| Wed Jun 13 10:32:00 [FileAllocator] creating directory /data/db/remove2-rs1-1/_tmp
 m31201| Wed Jun 13 10:32:00 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/local.ns, size: 16MB,  took 0.057 secs
 m31201| Wed Jun 13 10:32:00 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/local.0, filling with zeroes...
 m31201| Wed Jun 13 10:32:00 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/local.0, size: 16MB,  took 0.058 secs
 m31201| Wed Jun 13 10:32:01 [rsStart] replSet saveConfigLocally done
 m31201| Wed Jun 13 10:32:01 [rsStart] replSet STARTUP2
 m31201| Wed Jun 13 10:32:01 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
 m31201| Wed Jun 13 10:32:01 [rsSync] ******
 m31201| Wed Jun 13 10:32:01 [rsSync] creating replication oplog of size: 40MB...
 m31201| Wed Jun 13 10:32:01 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/local.1, filling with zeroes...
 m31201| Wed Jun 13 10:32:01 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/local.1, size: 64MB,  took 0.243 secs
 m31200| Wed Jun 13 10:32:01 [rsHealthPoll] replSet member ip-0A420969:31201 is now in state STARTUP2
 m31200| Wed Jun 13 10:32:01 [rsMgr] not electing self, ip-0A420969:31201 would veto
 m31201| Wed Jun 13 10:32:02 [rsHealthPoll] replSet member ip-0A420969:31200 is up
 m31201| Wed Jun 13 10:32:02 [rsHealthPoll] replSet member ip-0A420969:31200 is now in state SECONDARY
 m31100| Wed Jun 13 10:32:03 [rsMgr] replSet info electSelf 0
 m31101| Wed Jun 13 10:32:03 [conn4] replSet RECOVERING
 m31101| Wed Jun 13 10:32:03 [conn4] replSet info voting yea for ip-0A420969:31100 (0)
 m31201| Wed Jun 13 10:32:03 [rsSync] ******
 m31201| Wed Jun 13 10:32:03 [rsSync] replSet initial sync pending
 m31201| Wed Jun 13 10:32:03 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
 m31100| Wed Jun 13 10:32:04 [rsMgr] replSet PRIMARY
 m31101| Wed Jun 13 10:32:04 [rsHealthPoll] replSet member ip-0A420969:31100 is now in state PRIMARY
 m31100| Wed Jun 13 10:32:04 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0/admin.ns, filling with zeroes...
 m31100| Wed Jun 13 10:32:05 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0/admin.ns, size: 16MB,  took 0.06 secs
 m31100| Wed Jun 13 10:32:05 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0/admin.0, filling with zeroes...
 m31100| Wed Jun 13 10:32:05 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0/admin.0, size: 16MB,  took 0.056 secs
 m31100| Wed Jun 13 10:32:05 [conn3] build index admin.foo { _id: 1 }
 m31100| Wed Jun 13 10:32:05 [conn3] build index done.  scanned 0 total records. 0.026 secs
 m31100| Wed Jun 13 10:32:05 [conn3] insert admin.foo keyUpdates:0 locks(micros) W:2756225 w:148522 147ms
ReplSetTest Timestamp(1339597925000, 1)
ReplSetTest waiting for connection to ip-0A420969:31101 to have an oplog built.
 m31100| Wed Jun 13 10:32:05 [rsHealthPoll] replSet member ip-0A420969:31101 is now in state RECOVERING
ReplSetTest waiting for connection to ip-0A420969:31101 to have an oplog built.
 m31200| Wed Jun 13 10:32:07 [rsMgr] replSet info electSelf 0
 m31201| Wed Jun 13 10:32:07 [conn4] replSet RECOVERING
 m31201| Wed Jun 13 10:32:07 [conn4] replSet info voting yea for ip-0A420969:31200 (0)
 m31200| Wed Jun 13 10:32:07 [rsMgr] replSet PRIMARY
 m31201| Wed Jun 13 10:32:08 [rsHealthPoll] replSet member ip-0A420969:31200 is now in state PRIMARY
ReplSetTest waiting for connection to ip-0A420969:31101 to have an oplog built.
 m31200| Wed Jun 13 10:32:09 [rsHealthPoll] replSet member ip-0A420969:31201 is now in state RECOVERING
ReplSetTest waiting for connection to ip-0A420969:31101 to have an oplog built.
ReplSetTest waiting for connection to ip-0A420969:31101 to have an oplog built.
ReplSetTest waiting for connection to ip-0A420969:31101 to have an oplog built.
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet initial sync pending
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet syncing to: ip-0A420969:31100
 m31100| Wed Jun 13 10:32:15 [initandlisten] connection accepted from 10.66.9.105:58601 #6 (4 connections now open)
 m31101| Wed Jun 13 10:32:15 [rsSync] build index local.me { _id: 1 }
 m31101| Wed Jun 13 10:32:15 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet initial sync drop all databases
 m31101| Wed Jun 13 10:32:15 [rsSync] dropAllDatabasesExceptLocal 1
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet initial sync clone all databases
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet initial sync cloning db: admin
 m31100| Wed Jun 13 10:32:15 [initandlisten] connection accepted from 10.66.9.105:58602 #7 (5 connections now open)
 m31101| Wed Jun 13 10:32:15 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1/admin.ns, filling with zeroes...
 m31101| Wed Jun 13 10:32:15 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1/admin.ns, size: 16MB,  took 0.059 secs
 m31101| Wed Jun 13 10:32:15 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1/admin.0, filling with zeroes...
 m31101| Wed Jun 13 10:32:15 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1/admin.0, size: 16MB,  took 0.076 secs
 m31101| Wed Jun 13 10:32:15 [rsSync] build index admin.foo { _id: 1 }
 m31101| Wed Jun 13 10:32:15 [rsSync] 	 fastBuildIndex dupsToDrop:0
 m31101| Wed Jun 13 10:32:15 [rsSync] build index done.  scanned 1 total records. 0 secs
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet initial sync data copy, starting syncup
 m31100| Wed Jun 13 10:32:15 [conn7] end connection 10.66.9.105:58602 (4 connections now open)
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet initial sync building indexes
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet initial sync cloning indexes for : admin
 m31100| Wed Jun 13 10:32:15 [initandlisten] connection accepted from 10.66.9.105:58603 #8 (5 connections now open)
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet initial sync query minValid
 m31100| Wed Jun 13 10:32:15 [conn8] end connection 10.66.9.105:58603 (4 connections now open)
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet initial sync finishing up
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet set minValid=4fd8a465:1
 m31101| Wed Jun 13 10:32:15 [rsSync] build index local.replset.minvalid { _id: 1 }
 m31101| Wed Jun 13 10:32:15 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31101| Wed Jun 13 10:32:15 [rsSync] replSet initial sync done
 m31100| Wed Jun 13 10:32:15 [conn6] end connection 10.66.9.105:58601 (3 connections now open)
 m31101| Wed Jun 13 10:32:16 [rsSync] replSet SECONDARY
{
	"ts" : Timestamp(1339597925000, 1),
	"h" : NumberLong("5576277351748075537"),
	"op" : "i",
	"ns" : "admin.foo",
	"o" : {
		"_id" : ObjectId("4fd8a4640091dc33506bc3c6"),
		"x" : 1
	}
}
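The oplog entry printed above is the seed insert into `admin.foo`. Its `ts` agrees with the `minValid=4fd8a465:1` line the secondary logged during initial sync: the shell prints an optime's seconds component in hex, and `0x4fd8a465` is `1339597925`. As a quick sanity check outside the test harness (plain Python, not part of ReplSetTest):

```python
from datetime import datetime, timezone

# "replSet set minValid=4fd8a465:1" -> hex seconds component, ":1" increment.
seconds = int("4fd8a465", 16)
assert seconds == 1339597925  # matches Timestamp(1339597925000, 1) above

# The wall-clock log lines ("Wed Jun 13 10:32:05") are local time (UTC-4);
# the optime itself counts UTC seconds since the epoch.
utc = datetime.fromtimestamp(seconds, tz=timezone.utc)
print(utc.isoformat())  # 2012-06-13T14:32:05+00:00
```

The same decoding applies to the second set's `minValid=4fd8a471:1` further down (`0x4fd8a471` = `1339597937`).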
ReplSetTest await TS for connection to ip-0A420969:31101 is 1339597925000:1 and latest is 1339597925000:1
ReplSetTest await oplog size for connection to ip-0A420969:31101 is 1
ReplSetTest await synced=true
Wed Jun 13 10:32:17 starting new replica set monitor for replica set remove2-rs0 with seed of ip-0A420969:31100,ip-0A420969:31101
Wed Jun 13 10:32:17 successfully connected to seed ip-0A420969:31100 for replica set remove2-rs0
 m31100| Wed Jun 13 10:32:17 [initandlisten] connection accepted from 10.66.9.105:58604 #9 (4 connections now open)
Wed Jun 13 10:32:17 changing hosts to { 0: "ip-0A420969:31100", 1: "ip-0A420969:31101" } from remove2-rs0/
Wed Jun 13 10:32:17 trying to add new host ip-0A420969:31100 to replica set remove2-rs0
Wed Jun 13 10:32:17 successfully connected to new host ip-0A420969:31100 in replica set remove2-rs0
Wed Jun 13 10:32:17 trying to add new host ip-0A420969:31101 to replica set remove2-rs0
 m31100| Wed Jun 13 10:32:17 [initandlisten] connection accepted from 10.66.9.105:58605 #10 (5 connections now open)
Wed Jun 13 10:32:17 successfully connected to new host ip-0A420969:31101 in replica set remove2-rs0
 m31101| Wed Jun 13 10:32:17 [initandlisten] connection accepted from 10.66.9.105:58606 #6 (4 connections now open)
 m31100| Wed Jun 13 10:32:17 [initandlisten] connection accepted from 10.66.9.105:58607 #11 (6 connections now open)
 m31100| Wed Jun 13 10:32:17 [conn9] end connection 10.66.9.105:58604 (5 connections now open)
Wed Jun 13 10:32:17 Primary for replica set remove2-rs0 changed to ip-0A420969:31100
 m31101| Wed Jun 13 10:32:17 [initandlisten] connection accepted from 10.66.9.105:58608 #7 (5 connections now open)
Wed Jun 13 10:32:17 replica set monitor for replica set remove2-rs0 started, address is remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
Wed Jun 13 10:32:17 [ReplicaSetMonitorWatcher] starting
 m31100| Wed Jun 13 10:32:17 [initandlisten] connection accepted from 10.66.9.105:58609 #12 (6 connections now open)
 m31200| Wed Jun 13 10:32:17 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/admin.ns, filling with zeroes...
 m31200| Wed Jun 13 10:32:17 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/admin.ns, size: 16MB,  took 0.063 secs
 m31200| Wed Jun 13 10:32:17 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/admin.0, filling with zeroes...
 m31101| Wed Jun 13 10:32:17 [rsBackgroundSync] replSet syncing to: ip-0A420969:31100
 m31100| Wed Jun 13 10:32:17 [initandlisten] connection accepted from 10.66.9.105:58610 #13 (7 connections now open)
 m31200| Wed Jun 13 10:32:17 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/admin.0, size: 16MB,  took 0.126 secs
 m31200| Wed Jun 13 10:32:17 [conn3] build index admin.foo { _id: 1 }
 m31200| Wed Jun 13 10:32:17 [conn3] build index done.  scanned 0 total records. 0 secs
 m31200| Wed Jun 13 10:32:17 [conn3] insert admin.foo keyUpdates:0 locks(micros) W:2583699 w:196534 195ms
ReplSetTest Timestamp(1339597937000, 1)
ReplSetTest waiting for connection to ip-0A420969:31201 to have an oplog built.
 m31100| Wed Jun 13 10:32:17 [rsHealthPoll] replSet member ip-0A420969:31101 is now in state SECONDARY
 m31101| Wed Jun 13 10:32:17 [rsSyncNotifier] replset setting oplog notifier to ip-0A420969:31100
 m31100| Wed Jun 13 10:32:17 [initandlisten] connection accepted from 10.66.9.105:58612 #14 (8 connections now open)
 m31100| Wed Jun 13 10:32:18 [slaveTracking] build index local.slaves { _id: 1 }
 m31100| Wed Jun 13 10:32:18 [slaveTracking] build index done.  scanned 0 total records. 0 secs
ReplSetTest waiting for connection to ip-0A420969:31201 to have an oplog built.
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet initial sync pending
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet syncing to: ip-0A420969:31200
 m31200| Wed Jun 13 10:32:19 [initandlisten] connection accepted from 10.66.9.105:58613 #6 (4 connections now open)
 m31201| Wed Jun 13 10:32:19 [rsSync] build index local.me { _id: 1 }
 m31201| Wed Jun 13 10:32:19 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet initial sync drop all databases
 m31201| Wed Jun 13 10:32:19 [rsSync] dropAllDatabasesExceptLocal 1
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet initial sync clone all databases
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet initial sync cloning db: admin
 m31200| Wed Jun 13 10:32:19 [initandlisten] connection accepted from 10.66.9.105:58614 #7 (5 connections now open)
 m31201| Wed Jun 13 10:32:19 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/admin.ns, filling with zeroes...
 m31201| Wed Jun 13 10:32:19 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/admin.ns, size: 16MB,  took 0.104 secs
 m31201| Wed Jun 13 10:32:19 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/admin.0, filling with zeroes...
 m31201| Wed Jun 13 10:32:19 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/admin.0, size: 16MB,  took 0.057 secs
 m31201| Wed Jun 13 10:32:19 [rsSync] build index admin.foo { _id: 1 }
 m31201| Wed Jun 13 10:32:19 [rsSync] 	 fastBuildIndex dupsToDrop:0
 m31201| Wed Jun 13 10:32:19 [rsSync] build index done.  scanned 1 total records. 0 secs
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet initial sync data copy, starting syncup
 m31200| Wed Jun 13 10:32:19 [conn7] end connection 10.66.9.105:58614 (4 connections now open)
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet initial sync building indexes
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet initial sync cloning indexes for : admin
 m31200| Wed Jun 13 10:32:19 [initandlisten] connection accepted from 10.66.9.105:58615 #8 (5 connections now open)
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet initial sync query minValid
 m31200| Wed Jun 13 10:32:19 [conn8] end connection 10.66.9.105:58615 (4 connections now open)
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet initial sync finishing up
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet set minValid=4fd8a471:1
 m31201| Wed Jun 13 10:32:19 [rsSync] build index local.replset.minvalid { _id: 1 }
 m31201| Wed Jun 13 10:32:19 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31201| Wed Jun 13 10:32:19 [rsSync] replSet initial sync done
 m31200| Wed Jun 13 10:32:19 [conn6] end connection 10.66.9.105:58613 (3 connections now open)
 m31201| Wed Jun 13 10:32:20 [rsSync] replSet SECONDARY
 m31201| Wed Jun 13 10:32:21 [rsBackgroundSync] replSet syncing to: ip-0A420969:31200
 m31200| Wed Jun 13 10:32:21 [initandlisten] connection accepted from 10.66.9.105:58616 #9 (4 connections now open)
{
	"ts" : Timestamp(1339597937000, 1),
	"h" : NumberLong("5576278227921403921"),
	"op" : "i",
	"ns" : "admin.foo",
	"o" : {
		"_id" : ObjectId("4fd8a4710091dc33506bc3c7"),
		"x" : 1
	}
}
ReplSetTest await TS for connection to ip-0A420969:31201 is 1339597937000:1 and latest is 1339597937000:1
ReplSetTest await oplog size for connection to ip-0A420969:31201 is 1
ReplSetTest await synced=true
Wed Jun 13 10:32:21 starting new replica set monitor for replica set remove2-rs1 with seed of ip-0A420969:31200,ip-0A420969:31201
 m31200| Wed Jun 13 10:32:21 [rsHealthPoll] replSet member ip-0A420969:31201 is now in state SECONDARY
Wed Jun 13 10:32:21 successfully connected to seed ip-0A420969:31200 for replica set remove2-rs1
 m31200| Wed Jun 13 10:32:21 [initandlisten] connection accepted from 10.66.9.105:58617 #10 (5 connections now open)
Wed Jun 13 10:32:21 changing hosts to { 0: "ip-0A420969:31200", 1: "ip-0A420969:31201" } from remove2-rs1/
Wed Jun 13 10:32:21 trying to add new host ip-0A420969:31200 to replica set remove2-rs1
Wed Jun 13 10:32:21 successfully connected to new host ip-0A420969:31200 in replica set remove2-rs1
Wed Jun 13 10:32:21 trying to add new host ip-0A420969:31201 to replica set remove2-rs1
 m31200| Wed Jun 13 10:32:21 [initandlisten] connection accepted from 10.66.9.105:58618 #11 (6 connections now open)
 m31201| Wed Jun 13 10:32:21 [initandlisten] connection accepted from 10.66.9.105:58619 #6 (4 connections now open)
Wed Jun 13 10:32:21 successfully connected to new host ip-0A420969:31201 in replica set remove2-rs1
 m31200| Wed Jun 13 10:32:21 [initandlisten] connection accepted from 10.66.9.105:58620 #12 (7 connections now open)
 m31200| Wed Jun 13 10:32:21 [conn10] end connection 10.66.9.105:58617 (6 connections now open)
Wed Jun 13 10:32:21 Primary for replica set remove2-rs1 changed to ip-0A420969:31200
 m31201| Wed Jun 13 10:32:21 [initandlisten] connection accepted from 10.66.9.105:58621 #7 (5 connections now open)
Wed Jun 13 10:32:21 replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31200| Wed Jun 13 10:32:21 [initandlisten] connection accepted from 10.66.9.105:58622 #13 (7 connections now open)
Resetting db path '/data/db/remove2-config0'
Wed Jun 13 10:32:21 shell: started program mongod.exe --port 29000 --dbpath /data/db/remove2-config0
 m29000| Wed Jun 13 10:32:21 
 m29000| Wed Jun 13 10:32:21 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
 m29000| Wed Jun 13 10:32:21 
 m29000| Wed Jun 13 10:32:21 [initandlisten] MongoDB starting : pid=3652 port=29000 dbpath=/data/db/remove2-config0 32-bit host=ip-0A420969
 m29000| Wed Jun 13 10:32:21 [initandlisten] 
 m29000| Wed Jun 13 10:32:21 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
 m29000| Wed Jun 13 10:32:21 [initandlisten] **       Not recommended for production.
 m29000| Wed Jun 13 10:32:21 [initandlisten] 
 m29000| Wed Jun 13 10:32:21 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
 m29000| Wed Jun 13 10:32:21 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
 m29000| Wed Jun 13 10:32:21 [initandlisten] **       with --journal, the limit is lower
 m29000| Wed Jun 13 10:32:21 [initandlisten] 
 m29000| Wed Jun 13 10:32:21 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
 m29000| Wed Jun 13 10:32:21 [initandlisten] git version: 163a2d64ee88f7a4efb604f6208578ef117c4bc3
 m29000| Wed Jun 13 10:32:21 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_49
 m29000| Wed Jun 13 10:32:21 [initandlisten] options: { dbpath: "/data/db/remove2-config0", port: 29000 }
 m29000| Wed Jun 13 10:32:21 [initandlisten] waiting for connections on port 29000
 m29000| Wed Jun 13 10:32:21 [websvr] admin web console waiting for connections on port 30000
 m31201| Wed Jun 13 10:32:21 [rsSyncNotifier] replset setting oplog notifier to ip-0A420969:31200
 m31200| Wed Jun 13 10:32:21 [initandlisten] connection accepted from 10.66.9.105:58624 #14 (8 connections now open)
 m29000| Wed Jun 13 10:32:22 [initandlisten] connection accepted from 127.0.0.1:58623 #1 (1 connection now open)
"ip-0A420969:29000"
 m29000| Wed Jun 13 10:32:22 [initandlisten] connection accepted from 10.66.9.105:58625 #2 (2 connections now open)
ShardingTest remove2 :
{
	"config" : "ip-0A420969:29000",
	"shards" : [
		connection to remove2-rs0/ip-0A420969:31100,ip-0A420969:31101,
		connection to remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
	]
}
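The ShardingTest summary above identifies each shard by its replica-set connection string in the `setName/host:port,host:port` form that also appears in the replica set monitor lines. A minimal sketch of that formatting (the helper name is illustrative, not from the test harness):

```python
def rs_connection_string(set_name, hosts):
    # e.g. "remove2-rs0" + ["ip-0A420969:31100", "ip-0A420969:31101"]
    #   -> "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101"
    return f"{set_name}/{','.join(hosts)}"

# Shard membership as listed in the ShardingTest summary above.
shards = {
    "remove2-rs0": ["ip-0A420969:31100", "ip-0A420969:31101"],
    "remove2-rs1": ["ip-0A420969:31200", "ip-0A420969:31201"],
}
for name, hosts in shards.items():
    print(rs_connection_string(name, hosts))
```

These are the exact `host` values the mongos later records when adding the shards (`{ _id: "remove2-rs0", host: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101" }`).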
 m29000| Wed Jun 13 10:32:22 [FileAllocator] allocating new datafile /data/db/remove2-config0/config.ns, filling with zeroes...
 m29000| Wed Jun 13 10:32:22 [FileAllocator] creating directory /data/db/remove2-config0/_tmp
 m29000| Wed Jun 13 10:32:22 [FileAllocator] done allocating datafile /data/db/remove2-config0/config.ns, size: 16MB,  took 0.055 secs
 m29000| Wed Jun 13 10:32:22 [FileAllocator] allocating new datafile /data/db/remove2-config0/config.0, filling with zeroes...
Wed Jun 13 10:32:22 shell: started program mongos.exe --port 30999 --configdb ip-0A420969:29000
 m29000| Wed Jun 13 10:32:22 [FileAllocator] done allocating datafile /data/db/remove2-config0/config.0, size: 16MB,  took 0.058 secs
 m29000| Wed Jun 13 10:32:22 [FileAllocator] allocating new datafile /data/db/remove2-config0/config.1, filling with zeroes...
 m29000| Wed Jun 13 10:32:22 [conn2] build index config.settings { _id: 1 }
 m29000| Wed Jun 13 10:32:22 [conn2] build index done.  scanned 0 total records. 0.011 secs
 m29000| Wed Jun 13 10:32:22 [conn2] insert config.settings keyUpdates:0 locks(micros) w:130056 129ms
 m29000| Wed Jun 13 10:32:22 [FileAllocator] done allocating datafile /data/db/remove2-config0/config.1, size: 32MB,  took 0.121 secs
 m31200| Wed Jun 13 10:32:22 [slaveTracking] build index local.slaves { _id: 1 }
 m31200| Wed Jun 13 10:32:22 [slaveTracking] build index done.  scanned 0 total records. 0 secs
 m30999| Wed Jun 13 10:32:23 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
 m30999| Wed Jun 13 10:32:23 [mongosMain] MongoS version 2.1.2-pre- starting: pid=3544 port=30999 32-bit host=ip-0A420969 (--help for usage)
 m30999| Wed Jun 13 10:32:23 [mongosMain] git version: 163a2d64ee88f7a4efb604f6208578ef117c4bc3
 m30999| Wed Jun 13 10:32:23 [mongosMain] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_49
 m30999| Wed Jun 13 10:32:23 [mongosMain] options: { configdb: "ip-0A420969:29000", port: 30999 }
 m29000| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58627 #3 (3 connections now open)
 m29000| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58628 #4 (4 connections now open)
 m29000| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58629 #5 (5 connections now open)
 m29000| Wed Jun 13 10:32:23 [conn5] build index config.version { _id: 1 }
 m29000| Wed Jun 13 10:32:23 [conn5] build index done.  scanned 0 total records. 0.001 secs
 m29000| Wed Jun 13 10:32:23 [conn4] build index config.chunks { _id: 1 }
 m30999| Wed Jun 13 10:32:23 [mongosMain] waiting for connections on port 30999
 m30999| Wed Jun 13 10:32:23 [Balancer] about to contact config servers and shards
 m29000| Wed Jun 13 10:32:23 [conn4] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Jun 13 10:32:23 [conn4] info: creating collection config.chunks on add index
 m29000| Wed Jun 13 10:32:23 [conn4] build index config.chunks { ns: 1, min: 1 }
 m29000| Wed Jun 13 10:32:23 [conn4] build index done.  scanned 0 total records. 0 secs
 m30999| Wed Jun 13 10:32:23 [Balancer] config servers and shards contacted successfully
 m30999| Wed Jun 13 10:32:23 [Balancer] balancer id: ip-0A420969:30999 started at Jun 13 10:32:23
 m30999| Wed Jun 13 10:32:23 [Balancer] created new distributed lock for balancer on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m29000| Wed Jun 13 10:32:23 [conn4] build index config.chunks { ns: 1, shard: 1, min: 1 }
 m29000| Wed Jun 13 10:32:23 [conn4] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Jun 13 10:32:23 [conn4] build index config.chunks { ns: 1, lastmod: 1 }
 m29000| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58630 #6 (6 connections now open)
 m29000| Wed Jun 13 10:32:23 [conn4] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Jun 13 10:32:23 [conn4] build index config.shards { _id: 1 }
 m30999| Wed Jun 13 10:32:23 [websvr] admin web console waiting for connections on port 31999
 m29000| Wed Jun 13 10:32:23 [conn4] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Jun 13 10:32:23 [conn4] info: creating collection config.shards on add index
 m29000| Wed Jun 13 10:32:23 [conn4] build index config.shards { host: 1 }
 m29000| Wed Jun 13 10:32:23 [conn4] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Jun 13 10:32:23 [conn5] build index config.mongos { _id: 1 }
 m29000| Wed Jun 13 10:32:23 [conn5] build index done.  scanned 0 total records. 0 secs
 m30999| Wed Jun 13 10:32:23 [LockPinger] creating distributed lock ping thread for ip-0A420969:29000 and process ip-0A420969:30999:1339597943:41 (sleeping for 30000ms)
 m29000| Wed Jun 13 10:32:23 [conn6] build index config.locks { _id: 1 }
 m29000| Wed Jun 13 10:32:23 [conn6] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Jun 13 10:32:23 [conn4] build index config.lockpings { _id: 1 }
 m29000| Wed Jun 13 10:32:23 [conn4] build index done.  scanned 0 total records. 0 secs
 m30999| Wed Jun 13 10:32:23 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a477d1d821664bf17407
 m30999| Wed Jun 13 10:32:23 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
 m29000| Wed Jun 13 10:32:23 [conn4] build index config.lockpings { ping: 1 }
 m30999| Wed Jun 13 10:32:23 [mongosMain] connection accepted from 127.0.0.1:58626 #1 (1 connection now open)
 m29000| Wed Jun 13 10:32:23 [conn4] build index done.  scanned 1 total records. 0.001 secs
ShardingTest undefined going to add shard : remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m30999| Wed Jun 13 10:32:23 [conn1] couldn't find database [admin] in config db
 m29000| Wed Jun 13 10:32:23 [conn4] build index config.databases { _id: 1 }
 m29000| Wed Jun 13 10:32:23 [conn4] build index done.  scanned 0 total records. 0 secs
 m30999| Wed Jun 13 10:32:23 [conn1] 	 put [admin] on: config:ip-0A420969:29000
 m30999| Wed Jun 13 10:32:23 [conn1] starting new replica set monitor for replica set remove2-rs0 with seed of ip-0A420969:31100,ip-0A420969:31101
 m30999| Wed Jun 13 10:32:23 [conn1] successfully connected to seed ip-0A420969:31100 for replica set remove2-rs0
 m31100| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58631 #15 (9 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] changing hosts to { 0: "ip-0A420969:31100", 1: "ip-0A420969:31101" } from remove2-rs0/
 m30999| Wed Jun 13 10:32:23 [conn1] trying to add new host ip-0A420969:31100 to replica set remove2-rs0
 m30999| Wed Jun 13 10:32:23 [conn1] successfully connected to new host ip-0A420969:31100 in replica set remove2-rs0
 m30999| Wed Jun 13 10:32:23 [conn1] trying to add new host ip-0A420969:31101 to replica set remove2-rs0
 m31100| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58632 #16 (10 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] successfully connected to new host ip-0A420969:31101 in replica set remove2-rs0
 m31101| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58633 #8 (6 connections now open)
 m31100| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58634 #17 (11 connections now open)
 m31100| Wed Jun 13 10:32:23 [conn15] end connection 10.66.9.105:58631 (10 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] Primary for replica set remove2-rs0 changed to ip-0A420969:31100
 m31101| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58635 #9 (7 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] replica set monitor for replica set remove2-rs0 started, address is remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m30999| Wed Jun 13 10:32:23 [ReplicaSetMonitorWatcher] starting
 m31100| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58636 #18 (11 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] going to add shard: { _id: "remove2-rs0", host: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101" }
{ "shardAdded" : "remove2-rs0", "ok" : 1 }
ShardingTest undefined going to add shard : remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m30999| Wed Jun 13 10:32:23 [conn1] starting new replica set monitor for replica set remove2-rs1 with seed of ip-0A420969:31200,ip-0A420969:31201
 m30999| Wed Jun 13 10:32:23 [conn1] successfully connected to seed ip-0A420969:31200 for replica set remove2-rs1
 m31200| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58637 #15 (9 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] changing hosts to { 0: "ip-0A420969:31200", 1: "ip-0A420969:31201" } from remove2-rs1/
 m30999| Wed Jun 13 10:32:23 [conn1] trying to add new host ip-0A420969:31200 to replica set remove2-rs1
 m30999| Wed Jun 13 10:32:23 [conn1] successfully connected to new host ip-0A420969:31200 in replica set remove2-rs1
 m30999| Wed Jun 13 10:32:23 [conn1] trying to add new host ip-0A420969:31201 to replica set remove2-rs1
 m31200| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58638 #16 (10 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] successfully connected to new host ip-0A420969:31201 in replica set remove2-rs1
 m31201| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58639 #8 (6 connections now open)
 m31200| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58640 #17 (11 connections now open)
 m31200| Wed Jun 13 10:32:23 [conn15] end connection 10.66.9.105:58637 (10 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] Primary for replica set remove2-rs1 changed to ip-0A420969:31200
 m31201| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58641 #9 (7 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31200| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58642 #18 (11 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] going to add shard: { _id: "remove2-rs1", host: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201" }
{ "shardAdded" : "remove2-rs1", "ok" : 1 }
 m30999| Wed Jun 13 10:32:23 [mongosMain] connection accepted from 10.66.9.105:58643 #2 (2 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn2] couldn't find database [test] in config db
 m30999| Wed Jun 13 10:32:23 [conn2] 	 put [test] on: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m30999| Wed Jun 13 10:32:23 [conn2] DROP: test.remove2
 m31100| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58644 #19 (12 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn2] creating WriteBackListener for: ip-0A420969:31100 serverID: 4fd8a477d1d821664bf17406
 m30999| Wed Jun 13 10:32:23 [conn2] creating WriteBackListener for: ip-0A420969:31101 serverID: 4fd8a477d1d821664bf17406
 m31100| Wed Jun 13 10:32:23 [conn19] CMD: drop test.remove2
{ "was" : 30, "ok" : 1 }
{ "was" : 30, "ok" : 1 }
 m30999| Wed Jun 13 10:32:23 [conn1] enabling sharding on: test
 m30999| Wed Jun 13 10:32:23 [conn1] CMD: shardcollection: { shardCollection: "test.remove2", key: { i: 1.0 } }
 m30999| Wed Jun 13 10:32:23 [conn1] enable sharding on: test.remove2 with shard key: { i: 1.0 }
 m30999| Wed Jun 13 10:32:23 [conn1] going to create 1 chunk(s) for: test.remove2 using new epoch 4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:23 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0/test.ns, filling with zeroes...
 m30999| Wed Jun 13 10:32:23 [conn1] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 2 version: 1|0||4fd8a477d1d821664bf17408 based on: (empty)
 m29000| Wed Jun 13 10:32:23 [conn4] build index config.collections { _id: 1 }
 m29000| Wed Jun 13 10:32:23 [conn4] build index done.  scanned 0 total records. 0 secs
 m31100| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58645 #20 (13 connections now open)
 m31100| Wed Jun 13 10:32:23 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0/test.ns, size: 16MB,  took 0.058 secs
 m31100| Wed Jun 13 10:32:23 [FileAllocator] allocating new datafile /data/db/remove2-rs0-0/test.0, filling with zeroes...
 m31100| Wed Jun 13 10:32:23 [FileAllocator] done allocating datafile /data/db/remove2-rs0-0/test.0, size: 16MB,  took 0.057 secs
 m31100| Wed Jun 13 10:32:23 [conn18] build index test.remove2 { _id: 1 }
 m31100| Wed Jun 13 10:32:23 [conn18] build index done.  scanned 0 total records. 0 secs
 m31100| Wed Jun 13 10:32:23 [conn18] info: creating collection test.remove2 on add index
 m31100| Wed Jun 13 10:32:23 [conn18] build index test.remove2 { i: 1.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] build index done.  scanned 0 total records. 0 secs
 m31100| Wed Jun 13 10:32:23 [conn18] insert test.system.indexes keyUpdates:0 locks(micros) R:8 r:323 w:121691 120ms
 m31100| Wed Jun 13 10:32:23 [conn20] command admin.$cmd command: { setShardVersion: "test.remove2", configdb: "ip-0A420969:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('4fd8a477d1d821664bf17408'), serverID: ObjectId('4fd8a477d1d821664bf17406'), shard: "remove2-rs0", shardHost: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101" } ntoreturn:1 keyUpdates:0 locks(micros) W:11 reslen:181 116ms
 m31101| Wed Jun 13 10:32:23 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1/test.ns, filling with zeroes...
 m31100| Wed Jun 13 10:32:23 [conn20] no current chunk manager found for this shard, will initialize
 m29000| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58646 #7 (7 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] creating WriteBackListener for: ip-0A420969:31200 serverID: 4fd8a477d1d821664bf17406
 m30999| Wed Jun 13 10:32:23 [conn1] creating WriteBackListener for: ip-0A420969:31201 serverID: 4fd8a477d1d821664bf17406
 m31200| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58647 #19 (12 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn1] resetting shard version of test.remove2 on ip-0A420969:31200, version is zero
 m31200| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58648 #20 (13 connections now open)
 m30999| Wed Jun 13 10:32:23 [conn2] resetting shard version of test.remove2 on ip-0A420969:31200, version is zero
 m31101| Wed Jun 13 10:32:23 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1/test.ns, size: 16MB,  took 0.065 secs
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : MinKey } -->> { : MaxKey }
 m31100| Wed Jun 13 10:32:23 [conn18] warning: chunk is larger than 1024 bytes because of key { i: 0.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : MinKey } -->> { : MaxKey }
 m31100| Wed Jun 13 10:32:23 [conn18] warning: chunk is larger than 1024 bytes because of key { i: 0.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : MinKey } -->> { : MaxKey }
 m31100| Wed Jun 13 10:32:23 [conn18] max number of requested split points reached (2) before the end of chunk test.remove2 { : MinKey } -->> { : MaxKey }
 m31100| Wed Jun 13 10:32:23 [conn18] warning: chunk is larger than 1024 bytes because of key { i: 0.0 }
 m31101| Wed Jun 13 10:32:23 [FileAllocator] allocating new datafile /data/db/remove2-rs0-1/test.0, filling with zeroes...
 m29000| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58649 #8 (8 connections now open)
 m31100| Wed Jun 13 10:32:23 [conn18] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: MinKey }, max: { i: MaxKey }, from: "remove2-rs0", splitKeys: [ { i: 0.0 } ], shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:32:23 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4771402b5052316d774
 m31100| Wed Jun 13 10:32:23 [LockPinger] creating distributed lock ping thread for ip-0A420969:29000 and process ip-0A420969:31100:1339597943:15724 (sleeping for 30000ms)
 m31100| Wed Jun 13 10:32:23 [conn18] splitChunk accepted at version 1|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:23 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:23-0", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597943414), what: "split", ns: "test.remove2", details: { before: { min: { i: MinKey }, max: { i: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: MinKey }, max: { i: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') }, right: { min: { i: 0.0 }, max: { i: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') } } }
 m29000| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58650 #9 (9 connections now open)
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m30999| Wed Jun 13 10:32:23 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 3 version: 1|2||4fd8a477d1d821664bf17408 based on: 1|0||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:23 [conn2] autosplitted test.remove2 shard: ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 1|0||000000000000000000000000 min: { i: MinKey } max: { i: MaxKey } on: { i: 0.0 } (splitThreshold 921)
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : MaxKey }
 m31100| Wed Jun 13 10:32:23 [conn18] max number of requested split points reached (2) before the end of chunk test.remove2 { : 0.0 } -->> { : MaxKey }
 m31100| Wed Jun 13 10:32:23 [conn18] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 0.0 }, max: { i: MaxKey }, from: "remove2-rs0", splitKeys: [ { i: 9.0 } ], shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:32:23 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4771402b5052316d775
 m31100| Wed Jun 13 10:32:23 [conn18] splitChunk accepted at version 1|2||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:23 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:23-1", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597943447), what: "split", ns: "test.remove2", details: { before: { min: { i: 0.0 }, max: { i: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 0.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') }, right: { min: { i: 9.0 }, max: { i: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') } } }
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m30999| Wed Jun 13 10:32:23 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 4 version: 1|4||4fd8a477d1d821664bf17408 based on: 1|2||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:23 [conn2] autosplitted test.remove2 shard: ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 1|2||000000000000000000000000 min: { i: 0.0 } max: { i: MaxKey } on: { i: 9.0 } (splitThreshold 471859)
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : 9.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] max number of requested split points reached (2) before the end of chunk test.remove2 { : 0.0 } -->> { : 9.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 0.0 }, max: { i: 9.0 }, from: "remove2-rs0", splitKeys: [ { i: 3.0 } ], shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:32:23 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4771402b5052316d776
 m31100| Wed Jun 13 10:32:23 [conn18] splitChunk accepted at version 1|4||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:23 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:23-2", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597943482), what: "split", ns: "test.remove2", details: { before: { min: { i: 0.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 0.0 }, max: { i: 3.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') }, right: { min: { i: 3.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') } } }
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m30999| Wed Jun 13 10:32:23 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 5 version: 1|6||4fd8a477d1d821664bf17408 based on: 1|4||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:23 [conn2] autosplitted test.remove2 shard: ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 1|3||000000000000000000000000 min: { i: 0.0 } max: { i: 9.0 } on: { i: 3.0 } (splitThreshold 1048576)
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : 3.0 } -->> { : 9.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : 3.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : 3.0 } -->> { : 9.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] max number of requested split points reached (2) before the end of chunk test.remove2 { : 3.0 } -->> { : 9.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 3.0 }, max: { i: 9.0 }, from: "remove2-rs0", splitKeys: [ { i: 5.0 } ], shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:32:23 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4771402b5052316d777
 m31100| Wed Jun 13 10:32:23 [conn18] splitChunk accepted at version 1|6||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:23 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:23-3", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597943515), what: "split", ns: "test.remove2", details: { before: { min: { i: 3.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 3.0 }, max: { i: 5.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') }, right: { min: { i: 5.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') } } }
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m30999| Wed Jun 13 10:32:23 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 6 version: 1|8||4fd8a477d1d821664bf17408 based on: 1|6||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:23 [conn2] autosplitted test.remove2 shard: ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 1|6||000000000000000000000000 min: { i: 3.0 } max: { i: 9.0 } on: { i: 5.0 } (splitThreshold 1048576)
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : 5.0 } -->> { : 9.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : 3.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : 5.0 } -->> { : 9.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] max number of requested split points reached (2) before the end of chunk test.remove2 { : 5.0 } -->> { : 9.0 }
 m31101| Wed Jun 13 10:32:23 [FileAllocator] done allocating datafile /data/db/remove2-rs0-1/test.0, size: 16MB,  took 0.155 secs
 m31100| Wed Jun 13 10:32:23 [conn18] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 5.0 }, max: { i: 9.0 }, from: "remove2-rs0", splitKeys: [ { i: 6.0 } ], shardId: "test.remove2-i_5.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:32:23 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31101| Wed Jun 13 10:32:23 [conn4] end connection 10.66.9.105:58588 (6 connections now open)
 m31101| Wed Jun 13 10:32:23 [rsSync] build index test.remove2 { _id: 1 }
 m31101| Wed Jun 13 10:32:23 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31101| Wed Jun 13 10:32:23 [rsSync] info: creating collection test.remove2 on add index
 m31101| Wed Jun 13 10:32:23 [rsSync] build index test.remove2 { i: 1.0 }
 m31101| Wed Jun 13 10:32:23 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4771402b5052316d778
 m31100| Wed Jun 13 10:32:23 [conn18] splitChunk accepted at version 1|8||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:23 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:23-4", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597943566), what: "split", ns: "test.remove2", details: { before: { min: { i: 5.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 5.0 }, max: { i: 6.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') }, right: { min: { i: 6.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') } } }
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m30999| Wed Jun 13 10:32:23 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 7 version: 1|10||4fd8a477d1d821664bf17408 based on: 1|8||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:23 [conn2] autosplitted test.remove2 shard: ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 1|8||000000000000000000000000 min: { i: 5.0 } max: { i: 9.0 } on: { i: 6.0 } (splitThreshold 1048576)
 m31101| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58651 #10 (7 connections now open)
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : 0.0 } -->> { : 3.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] max number of requested split points reached (2) before the end of chunk test.remove2 { : 0.0 } -->> { : 3.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 0.0 }, max: { i: 3.0 }, from: "remove2-rs0", splitKeys: [ { i: 1.0 } ], shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:32:23 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4771402b5052316d779
 m31100| Wed Jun 13 10:32:23 [conn18] splitChunk accepted at version 1|10||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:23 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:23-5", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597943604), what: "split", ns: "test.remove2", details: { before: { min: { i: 0.0 }, max: { i: 3.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 0.0 }, max: { i: 1.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') }, right: { min: { i: 1.0 }, max: { i: 3.0 }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') } } }
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m30999| Wed Jun 13 10:32:23 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 8 version: 1|12||4fd8a477d1d821664bf17408 based on: 1|10||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:23 [conn2] autosplitted test.remove2 shard: ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 1|5||000000000000000000000000 min: { i: 0.0 } max: { i: 3.0 } on: { i: 1.0 } (splitThreshold 1048576)
 m31100| Wed Jun 13 10:32:23 [conn18] request split points lookup for chunk test.remove2 { : 6.0 } -->> { : 9.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] max number of requested split points reached (2) before the end of chunk test.remove2 { : 6.0 } -->> { : 9.0 }
 m31100| Wed Jun 13 10:32:23 [conn18] received splitChunk request: { splitChunk: "test.remove2", keyPattern: { i: 1.0 }, min: { i: 6.0 }, max: { i: 9.0 }, from: "remove2-rs0", splitKeys: [ { i: 7.0 } ], shardId: "test.remove2-i_6.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:32:23 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4771402b5052316d77a
 m31100| Wed Jun 13 10:32:23 [conn18] splitChunk accepted at version 1|12||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:23 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:23-6", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597943634), what: "split", ns: "test.remove2", details: { before: { min: { i: 6.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { i: 6.0 }, max: { i: 7.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') }, right: { min: { i: 7.0 }, max: { i: 9.0 }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408') } } }
 m31100| Wed Jun 13 10:32:23 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m30999| Wed Jun 13 10:32:23 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 9 version: 1|14||4fd8a477d1d821664bf17408 based on: 1|12||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:23 [conn2] autosplitted test.remove2 shard: ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 1|10||000000000000000000000000 min: { i: 6.0 } max: { i: 9.0 } on: { i: 7.0 } (splitThreshold 1048576)
 m30999| Wed Jun 13 10:32:23 [conn1] creating WriteBackListener for: ip-0A420969:29000 serverID: 4fd8a477d1d821664bf17406
 m29000| Wed Jun 13 10:32:23 [initandlisten] connection accepted from 10.66.9.105:58652 #10 (10 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31100| Wed Jun 13 10:32:26 [conn5] end connection 10.66.9.105:58597 (12 connections now open)
 m31100| Wed Jun 13 10:32:26 [initandlisten] connection accepted from 10.66.9.105:58653 #21 (13 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31201| Wed Jun 13 10:32:27 [conn4] end connection 10.66.9.105:58596 (6 connections now open)
 m31201| Wed Jun 13 10:32:27 [initandlisten] connection accepted from 10.66.9.105:58654 #10 (7 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31200| Wed Jun 13 10:32:30 [conn5] end connection 10.66.9.105:58599 (12 connections now open)
 m31200| Wed Jun 13 10:32:30 [initandlisten] connection accepted from 10.66.9.105:58655 #21 (13 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m30999| Wed Jun 13 10:32:33 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a481d1d821664bf17409
 m30999| Wed Jun 13 10:32:33 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:32:33 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 1|1||000000000000000000000000 min: { i: MinKey } max: { i: 0.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:32:33 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:32:33 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:32:33 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4811402b5052316d77b
 m31100| Wed Jun 13 10:32:33 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:33-7", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597953122), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:32:33 [conn18] moveChunk request accepted at version 1|14||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:33 [conn18] moveChunk number of documents: 0
 m31100| Wed Jun 13 10:32:33 [conn18] starting new replica set monitor for replica set remove2-rs1 with seed of ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:32:33 [conn18] successfully connected to seed ip-0A420969:31200 for replica set remove2-rs1
 m31200| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58656 #22 (14 connections now open)
 m31100| Wed Jun 13 10:32:33 [conn18] changing hosts to { 0: "ip-0A420969:31200", 1: "ip-0A420969:31201" } from remove2-rs1/
 m31100| Wed Jun 13 10:32:33 [conn18] trying to add new host ip-0A420969:31200 to replica set remove2-rs1
 m31100| Wed Jun 13 10:32:33 [conn18] successfully connected to new host ip-0A420969:31200 in replica set remove2-rs1
 m31100| Wed Jun 13 10:32:33 [conn18] trying to add new host ip-0A420969:31201 to replica set remove2-rs1
 m31200| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58657 #23 (15 connections now open)
 m31100| Wed Jun 13 10:32:33 [conn18] successfully connected to new host ip-0A420969:31201 in replica set remove2-rs1
 m31201| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58658 #11 (8 connections now open)
 m31200| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58659 #24 (16 connections now open)
 m31200| Wed Jun 13 10:32:33 [conn22] end connection 10.66.9.105:58656 (15 connections now open)
 m31100| Wed Jun 13 10:32:33 [conn18] Primary for replica set remove2-rs1 changed to ip-0A420969:31200
 m31201| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58660 #12 (9 connections now open)
 m31100| Wed Jun 13 10:32:33 [conn18] replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:32:33 [ReplicaSetMonitorWatcher] starting
 m31200| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58661 #25 (16 connections now open)
 m31200| Wed Jun 13 10:32:33 [migrateThread] starting new replica set monitor for replica set remove2-rs0 with seed of ip-0A420969:31100,ip-0A420969:31101
 m31100| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58662 #22 (14 connections now open)
 m31100| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58663 #23 (15 connections now open)
 m31100| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58665 #24 (16 connections now open)
 m31100| Wed Jun 13 10:32:33 [conn22] end connection 10.66.9.105:58662 (15 connections now open)
 m31100| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58667 #25 (16 connections now open)
 m31100| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58668 #26 (17 connections now open)
 m31200| Wed Jun 13 10:32:33 [migrateThread] successfully connected to seed ip-0A420969:31100 for replica set remove2-rs0
 m31200| Wed Jun 13 10:32:33 [migrateThread] changing hosts to { 0: "ip-0A420969:31100", 1: "ip-0A420969:31101" } from remove2-rs0/
 m31200| Wed Jun 13 10:32:33 [migrateThread] trying to add new host ip-0A420969:31100 to replica set remove2-rs0
 m31200| Wed Jun 13 10:32:33 [migrateThread] successfully connected to new host ip-0A420969:31100 in replica set remove2-rs0
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
 m31200| Wed Jun 13 10:32:33 [migrateThread] trying to add new host ip-0A420969:31101 to replica set remove2-rs0
 m31200| Wed Jun 13 10:32:33 [migrateThread] successfully connected to new host ip-0A420969:31101 in replica set remove2-rs0
 m31200| Wed Jun 13 10:32:33 [migrateThread] Primary for replica set remove2-rs0 changed to ip-0A420969:31100
 m31200| Wed Jun 13 10:32:33 [migrateThread] replica set monitor for replica set remove2-rs0 started, address is remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:32:33 [ReplicaSetMonitorWatcher] starting
 m31200| Wed Jun 13 10:32:33 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/test.ns, filling with zeroes...
chunk diff: 8
 m31200| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58670 #26 (17 connections now open)
 m31101| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58664 #11 (8 connections now open)
 m31101| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58666 #12 (9 connections now open)
 m31101| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58669 #13 (10 connections now open)
 m31201| Wed Jun 13 10:32:33 [initandlisten] connection accepted from 10.66.9.105:58671 #13 (10 connections now open)
 m31200| Wed Jun 13 10:32:33 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/test.ns, size: 16MB,  took 0.061 secs
 m31200| Wed Jun 13 10:32:33 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/test.0, filling with zeroes...
 m31200| Wed Jun 13 10:32:33 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/test.0, size: 16MB,  took 0.057 secs
 m31200| Wed Jun 13 10:32:33 [migrateThread] build index test.remove2 { _id: 1 }
 m31200| Wed Jun 13 10:32:33 [migrateThread] build index done.  scanned 0 total records. 0 secs
 m31200| Wed Jun 13 10:32:33 [migrateThread] info: creating collection test.remove2 on add index
 m31200| Wed Jun 13 10:32:33 [migrateThread] build index test.remove2 { i: 1.0 }
 m31201| Wed Jun 13 10:32:33 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/test.ns, filling with zeroes...
 m31200| Wed Jun 13 10:32:33 [migrateThread] build index done.  scanned 0 total records. 0 secs
 m31200| Wed Jun 13 10:32:33 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
 m31201| Wed Jun 13 10:32:33 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/test.ns, size: 16MB,  took 0.059 secs
 m31201| Wed Jun 13 10:32:33 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/test.0, filling with zeroes...
 m31201| Wed Jun 13 10:32:33 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/test.0, size: 16MB,  took 0.059 secs
 m31201| Wed Jun 13 10:32:33 [rsSync] build index test.remove2 { _id: 1 }
 m31201| Wed Jun 13 10:32:33 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31201| Wed Jun 13 10:32:33 [rsSync] info: creating collection test.remove2 on add index
 m31201| Wed Jun 13 10:32:33 [rsSync] build index test.remove2 { i: 1.0 }
 m31201| Wed Jun 13 10:32:33 [rsSync] build index done.  scanned 0 total records. 0 secs
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31100| Wed Jun 13 10:32:34 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:32:34 [conn18] moveChunk setting version to: 2|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:32:34 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
 m31200| Wed Jun 13 10:32:34 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:34-0", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339597954134), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 5: 132, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 870 } }
 m31100| Wed Jun 13 10:32:34 [conn18] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:32:34 [conn18] moveChunk updating self version to: 2|1||4fd8a477d1d821664bf17408 through { i: 0.0 } -> { i: 1.0 } for collection 'test.remove2'
 m29000| Wed Jun 13 10:32:34 [initandlisten] connection accepted from 10.66.9.105:58672 #11 (11 connections now open)
 m31100| Wed Jun 13 10:32:34 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:34-8", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597954136), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:32:34 [conn18] doing delete inline
 m31100| Wed Jun 13 10:32:34 [conn18] moveChunk deleted: 0
 m31100| Wed Jun 13 10:32:34 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:32:34 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:34-9", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597954136), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 7, step4 of 6: 1000, step5 of 6: 5, step6 of 6: 0 } }
 m31100| Wed Jun 13 10:32:34 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:6824 w:121774 reslen:37 1015ms
 m30999| Wed Jun 13 10:32:34 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 10 version: 2|1||4fd8a477d1d821664bf17408 based on: 1|14||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:34 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
 m30999| Wed Jun 13 10:32:39 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a487d1d821664bf1740a
 m30999| Wed Jun 13 10:32:39 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_0.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 0.0 }, max: { i: 1.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:32:39 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 2|1||000000000000000000000000 min: { i: 0.0 } max: { i: 1.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:32:39 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:32:39 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:32:39 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4871402b5052316d77c
 m31100| Wed Jun 13 10:32:39 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:39-10", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597959142), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:32:39 [conn18] moveChunk request accepted at version 2|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:39 [conn18] moveChunk number of documents: 30
 m31200| Wed Jun 13 10:32:39 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
 m31100| Wed Jun 13 10:32:40 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:32:40 [conn18] moveChunk setting version to: 3|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:32:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
 m31200| Wed Jun 13 10:32:40 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:40-1", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339597960145), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 989 } }
 m31100| Wed Jun 13 10:32:40 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:32:40 [conn18] moveChunk updating self version to: 3|1||4fd8a477d1d821664bf17408 through { i: 1.0 } -> { i: 3.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:32:40 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:40-11", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597960146), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:32:40 [conn18] doing delete inline
 m31100| Wed Jun 13 10:32:40 [conn18] moveChunk deleted: 30
 m31100| Wed Jun 13 10:32:40 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:32:40 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:40-12", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597960154), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 2, step6 of 6: 7 } }
 m31100| Wed Jun 13 10:32:40 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:6994 w:128872 reslen:37 1012ms
 m30999| Wed Jun 13 10:32:40 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 11 version: 3|1||4fd8a477d1d821664bf17408 based on: 2|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:40 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
Wed Jun 13 10:32:43 [clientcursormon] mem (MB) res:18 virt:71 mapped:0
 m30999| Wed Jun 13 10:32:45 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a48dd1d821664bf1740b
 m30999| Wed Jun 13 10:32:45 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_1.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 1.0 }, max: { i: 3.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:32:45 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 3|1||000000000000000000000000 min: { i: 1.0 } max: { i: 3.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:32:45 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:32:45 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:32:45 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a48d1402b5052316d77d
 m31100| Wed Jun 13 10:32:45 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:45-13", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597965161), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:32:45 [conn18] moveChunk request accepted at version 3|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:45 [conn18] moveChunk number of documents: 60
 m31200| Wed Jun 13 10:32:45 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
 m31100| Wed Jun 13 10:32:45 [clientcursormon] mem (MB) res:79 virt:243 mapped:144
 m31101| Wed Jun 13 10:32:46 [clientcursormon] mem (MB) res:79 virt:253 mapped:160
 m31100| Wed Jun 13 10:32:46 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:32:46 [conn18] moveChunk setting version to: 4|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:32:46 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
 m31200| Wed Jun 13 10:32:46 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:46-2", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339597966170), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 18, step4 of 5: 0, step5 of 5: 987 } }
 m31100| Wed Jun 13 10:32:46 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:32:46 [conn18] moveChunk updating self version to: 4|1||4fd8a477d1d821664bf17408 through { i: 3.0 } -> { i: 5.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:32:46 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:46-14", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597966171), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:32:46 [conn18] doing delete inline
 m31100| Wed Jun 13 10:32:46 [conn18] moveChunk deleted: 60
 m31100| Wed Jun 13 10:32:46 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:32:46 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:46-15", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597966181), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 8, step6 of 6: 9 } }
 m31100| Wed Jun 13 10:32:46 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:7221 w:137905 reslen:37 1021ms
 m30999| Wed Jun 13 10:32:46 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 12 version: 4|1||4fd8a477d1d821664bf17408 based on: 3|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:46 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
 m31200| Wed Jun 13 10:32:49 [clientcursormon] mem (MB) res:72 virt:242 mapped:144
 m31201| Wed Jun 13 10:32:49 [clientcursormon] mem (MB) res:72 virt:249 mapped:160
 m30999| Wed Jun 13 10:32:51 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a493d1d821664bf1740c
 m30999| Wed Jun 13 10:32:51 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_3.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 3.0 }, max: { i: 5.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:32:51 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 4|1||000000000000000000000000 min: { i: 3.0 } max: { i: 5.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:32:51 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:32:51 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:32:51 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4931402b5052316d77e
 m31100| Wed Jun 13 10:32:51 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:51-16", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597971188), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:32:51 [conn18] moveChunk request accepted at version 4|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:51 [conn18] moveChunk number of documents: 60
 m31200| Wed Jun 13 10:32:51 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
 m31100| Wed Jun 13 10:32:52 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:32:52 [conn18] moveChunk setting version to: 5|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:32:52 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
 m31200| Wed Jun 13 10:32:52 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:52-3", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339597972193), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 980 } }
 m31100| Wed Jun 13 10:32:52 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:32:52 [conn18] moveChunk updating self version to: 5|1||4fd8a477d1d821664bf17408 through { i: 5.0 } -> { i: 6.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:32:52 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:52-17", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597972194), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:32:52 [conn18] doing delete inline
 m31100| Wed Jun 13 10:32:52 [conn18] moveChunk deleted: 60
 m31100| Wed Jun 13 10:32:52 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:32:52 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:52-18", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339597972206), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 4, step6 of 6: 11 } }
 m31100| Wed Jun 13 10:32:52 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:7426 w:148972 reslen:37 1019ms
 m30999| Wed Jun 13 10:32:52 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 13 version: 5|1||4fd8a477d1d821664bf17408 based on: 4|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:52 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 4, "remove2-rs1" : 4 } min: 4 max: 4
chunk diff: 0
{ "was" : 30, "ok" : 1 }
 m31200| Wed Jun 13 10:32:52 [conn20] no current chunk manager found for this shard, will initialize
--- Sharding Status --- 
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
	{  "_id" : "remove2-rs0",  "host" : "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101" }
	{  "_id" : "remove2-rs1",  "host" : "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201" }
  databases:
	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
	{  "_id" : "test",  "partitioned" : true,  "primary" : "remove2-rs0" }
		test.remove2 chunks:
				remove2-rs1	4
				remove2-rs0	4
			{ "i" : { $minKey : 1 } } -->> { "i" : 0 } on : remove2-rs1 Timestamp(2000, 0) 
			{ "i" : 0 } -->> { "i" : 1 } on : remove2-rs1 Timestamp(3000, 0) 
			{ "i" : 1 } -->> { "i" : 3 } on : remove2-rs1 Timestamp(4000, 0) 
			{ "i" : 3 } -->> { "i" : 5 } on : remove2-rs1 Timestamp(5000, 0) 
			{ "i" : 5 } -->> { "i" : 6 } on : remove2-rs0 Timestamp(5000, 1) 
			{ "i" : 6 } -->> { "i" : 7 } on : remove2-rs0 Timestamp(1000, 13) 
			{ "i" : 7 } -->> { "i" : 9 } on : remove2-rs0 Timestamp(1000, 14) 
			{ "i" : 9 } -->> { "i" : { $maxKey : 1 } } on : remove2-rs0 Timestamp(1000, 4) 



----
Attempting to remove shard and add it back in
----


Removing shard with name: remove2-rs1
 m30999| Wed Jun 13 10:32:52 [conn1] going to start draining shard: remove2-rs1
 m30999| primaryLocalDoc: { _id: "local", primary: "remove2-rs1" }
{
	"msg" : "draining started successfully",
	"state" : "started",
	"shard" : "remove2-rs1",
	"ok" : 1
}
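After draining starts, the test polls `removeShard` and gets the `"draining ongoing"` responses below until every chunk has migrated off the shard. A sketch of interpreting one such response (the helper `drainingDone` is illustrative, not from the test source):

```javascript
// Interpret successive removeShard responses like the ones in this log.
function drainingDone(res) {
  if (!res.ok) throw new Error("removeShard failed: " + res.msg);
  if (res.state === "completed") return true;
  // While state is "started" or "ongoing", res.remaining reports
  // how many chunks and databases are still on the draining shard.
  return false;
}

const ongoing = {
  msg: "draining ongoing",
  state: "ongoing",
  remaining: { chunks: 4, dbs: 0 },
  ok: 1,
};
console.log(drainingDone(ongoing)); // false
```

Note that `remaining.chunks` stays at 4 for many iterations here: the balancer only moves one chunk per round, so the drain progresses on its ~5-second balancing cadence.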
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(4),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31101| Wed Jun 13 10:32:53 [conn10] end connection 10.66.9.105:58651 (9 connections now open)
 m31101| Wed Jun 13 10:32:53 [initandlisten] connection accepted from 10.66.9.105:58673 #14 (10 connections now open)
 m31100| Wed Jun 13 10:32:56 [conn21] end connection 10.66.9.105:58653 (16 connections now open)
 m31100| Wed Jun 13 10:32:56 [initandlisten] connection accepted from 10.66.9.105:58674 #27 (17 connections now open)
 m30999| Wed Jun 13 10:32:57 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a499d1d821664bf1740d
 m30999| Wed Jun 13 10:32:57 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_3.0", lastmod: Timestamp 5000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 3.0 }, max: { i: 5.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:32:57 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 5|0||000000000000000000000000 min: { i: 3.0 } max: { i: 5.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m29000| Wed Jun 13 10:32:57 [initandlisten] connection accepted from 10.66.9.105:58675 #12 (12 connections now open)
 m31200| Wed Jun 13 10:32:57 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:32:57 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:32:57 [LockPinger] creating distributed lock ping thread for ip-0A420969:29000 and process ip-0A420969:31200:1339597977:41 (sleeping for 30000ms)
 m31200| Wed Jun 13 10:32:57 [conn18] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' acquired, ts : 4fd8a499259cec6b7cffafd6
 m31200| Wed Jun 13 10:32:57 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:57-4", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597977216), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:32:57 [conn18] moveChunk request accepted at version 5|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:32:57 [conn18] moveChunk number of documents: 60
 m31100| Wed Jun 13 10:32:57 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
 m31201| Wed Jun 13 10:32:57 [conn10] end connection 10.66.9.105:58654 (9 connections now open)
 m31201| Wed Jun 13 10:32:57 [initandlisten] connection accepted from 10.66.9.105:58676 #14 (10 connections now open)
 m31200| Wed Jun 13 10:32:58 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:32:58 [conn18] moveChunk setting version to: 6|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:32:58 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
 m31100| Wed Jun 13 10:32:58 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:58-19", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339597978225), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 16, step4 of 5: 0, step5 of 5: 990 } }
 m31200| Wed Jun 13 10:32:58 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:32:58 [conn18] moveChunk updating self version to: 6|1||4fd8a477d1d821664bf17408 through { i: MinKey } -> { i: 0.0 } for collection 'test.remove2'
 m31200| Wed Jun 13 10:32:58 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:58-5", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597978226), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:32:58 [conn18] doing delete inline
 m31200| Wed Jun 13 10:32:58 [conn18] moveChunk deleted: 60
 m31200| Wed Jun 13 10:32:58 [conn18] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' unlocked. 
 m31200| Wed Jun 13 10:32:58 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:32:58-6", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597978238), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 6: 1, step2 of 6: 4, step3 of 6: 0, step4 of 6: 999, step5 of 6: 9, step6 of 6: 10 } }
 m31200| Wed Jun 13 10:32:58 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:424 w:10687 reslen:37 1027ms
 m30999| Wed Jun 13 10:32:58 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 14 version: 6|1||4fd8a477d1d821664bf17408 based on: 5|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:32:58 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(3),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:33:00 [conn21] end connection 10.66.9.105:58655 (16 connections now open)
 m31200| Wed Jun 13 10:33:00 [initandlisten] connection accepted from 10.66.9.105:58677 #27 (17 connections now open)
 m30999| Wed Jun 13 10:33:03 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a49fd1d821664bf1740e
 m30999| Wed Jun 13 10:33:03 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_1.0", lastmod: Timestamp 4000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 1.0 }, max: { i: 3.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:33:03 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 4|0||000000000000000000000000 min: { i: 1.0 } max: { i: 3.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:33:03 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:33:03 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:33:03 [conn18] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' acquired, ts : 4fd8a49f259cec6b7cffafd7
 m31200| Wed Jun 13 10:33:03 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:03-7", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597983244), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:03 [conn18] moveChunk request accepted at version 6|1||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:33:03 [conn18] moveChunk number of documents: 60
 m31100| Wed Jun 13 10:33:03 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
 m31200| Wed Jun 13 10:33:04 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:33:04 [conn18] moveChunk setting version to: 7|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:33:04 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
 m31100| Wed Jun 13 10:33:04 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:04-20", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339597984253), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 15, step4 of 5: 0, step5 of 5: 989 } }
 m31200| Wed Jun 13 10:33:04 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:33:04 [conn18] moveChunk updating self version to: 7|1||4fd8a477d1d821664bf17408 through { i: MinKey } -> { i: 0.0 } for collection 'test.remove2'
 m31200| Wed Jun 13 10:33:04 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:04-8", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597984254), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:04 [conn18] doing delete inline
 m31200| Wed Jun 13 10:33:04 [conn18] moveChunk deleted: 60
 m31200| Wed Jun 13 10:33:04 [conn18] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' unlocked. 
 m31200| Wed Jun 13 10:33:04 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:04-9", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597984268), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 8, step6 of 6: 12 } }
 m31200| Wed Jun 13 10:33:04 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:620 w:22258 reslen:37 1024ms
 m30999| Wed Jun 13 10:33:04 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 15 version: 7|1||4fd8a477d1d821664bf17408 based on: 6|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:33:04 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(2),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m30999| Wed Jun 13 10:33:09 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4a5d1d821664bf1740f
 m30999| Wed Jun 13 10:33:09 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_0.0", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 0.0 }, max: { i: 1.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:33:09 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 3|0||000000000000000000000000 min: { i: 0.0 } max: { i: 1.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:33:09 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:33:09 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:33:09 [conn18] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' acquired, ts : 4fd8a4a5259cec6b7cffafd8
 m31200| Wed Jun 13 10:33:09 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:09-10", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597989275), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:09 [conn18] moveChunk request accepted at version 7|1||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:33:09 [conn18] moveChunk number of documents: 30
 m31100| Wed Jun 13 10:33:09 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
 m31200| Wed Jun 13 10:33:10 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:33:10 [conn18] moveChunk setting version to: 8|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:33:10 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
 m31100| Wed Jun 13 10:33:10 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:10-21", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339597990278), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 990 } }
 m31200| Wed Jun 13 10:33:10 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:33:10 [conn18] moveChunk updating self version to: 8|1||4fd8a477d1d821664bf17408 through { i: MinKey } -> { i: 0.0 } for collection 'test.remove2'
 m31200| Wed Jun 13 10:33:10 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:10-11", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597990279), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:10 [conn18] doing delete inline
 m31200| Wed Jun 13 10:33:10 [conn18] moveChunk deleted: 30
 m31200| Wed Jun 13 10:33:10 [conn18] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' unlocked. 
 m31200| Wed Jun 13 10:33:10 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:10-12", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597990287), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 2, step6 of 6: 6 } }
 m31200| Wed Jun 13 10:33:10 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:787 w:28743 reslen:37 1013ms
 m30999| Wed Jun 13 10:33:10 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 16 version: 8|1||4fd8a477d1d821664bf17408 based on: 7|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:33:10 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(1),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m30999| Wed Jun 13 10:33:15 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4abd1d821664bf17410
 m30999| Wed Jun 13 10:33:15 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_MinKey", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:33:15 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 8|1||000000000000000000000000 min: { i: MinKey } max: { i: 0.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:33:15 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:33:15 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:33:15 [conn18] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' acquired, ts : 4fd8a4ab259cec6b7cffafd9
 m31200| Wed Jun 13 10:33:15 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:15-13", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597995294), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:15 [conn18] moveChunk request accepted at version 8|1||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:33:15 [conn18] moveChunk number of documents: 0
 m31100| Wed Jun 13 10:33:15 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
 m31200| Wed Jun 13 10:33:16 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:33:16 [conn18] moveChunk setting version to: 9|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:33:16 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
 m31100| Wed Jun 13 10:33:16 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:16-22", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339597996297), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1000 } }
 m31200| Wed Jun 13 10:33:16 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:33:16 [conn18] moveChunk moved last chunk out for collection 'test.remove2'
 m31200| Wed Jun 13 10:33:16 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:16-14", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597996298), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:16 [conn18] doing delete inline
 m31200| Wed Jun 13 10:33:16 [conn18] moveChunk deleted: 0
 m31200| Wed Jun 13 10:33:16 [conn18] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' unlocked. 
 m31200| Wed Jun 13 10:33:16 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:16-15", server: "ip-0A420969", clientAddr: "10.66.9.105:58642", time: new Date(1339597996299), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 3, step6 of 6: 0 } }
 m31200| Wed Jun 13 10:33:16 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:913 w:28782 reslen:37 1006ms
 m30999| Wed Jun 13 10:33:16 [Balancer] ChunkManager: time to load chunks for test.remove2: 1ms sequenceNumber: 17 version: 9|0||4fd8a477d1d821664bf17408 based on: 8|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:33:16 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
 m30999| Wed Jun 13 10:33:16 [conn1] going to remove shard: remove2-rs1
 m30999| Wed Jun 13 10:33:16 [conn1] deleting replica set monitor for: remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31200| Wed Jun 13 10:33:16 [conn16] end connection 10.66.9.105:58638 (16 connections now open)
 m31201| Wed Jun 13 10:33:16 [conn8] end connection 10.66.9.105:58639 (9 connections now open)
{
	"msg" : "removeshard completed successfully",
	"state" : "completed",
	"shard" : "remove2-rs1",
	"ok" : 1
}
 m31200| Wed Jun 13 10:33:16 [conn3] dropDatabase test
Shard removed successfully
Adding shard with seed: remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31201| Wed Jun 13 10:33:16 [rsSync] dropDatabase test
 m30999| Wed Jun 13 10:33:16 [conn1] warning: scoped connection to Trying to get server address for DBClientReplicaSet, but no ReplicaSetMonitor exists for remove2-rs1
 m30999| Wed Jun 13 10:33:16 [conn1] remove2-rs1/ not being returned to the pool
 m30999| Wed Jun 13 10:33:16 [conn1] addshard request { addshard: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201" } failed: couldn't connect to new shard No replica set monitor active and no cached seed found for set: remove2-rs1
 m31200| Wed Jun 13 10:33:16 [conn18] end connection 10.66.9.105:58642 (15 connections now open)
First attempt to addShard failed, trying again
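The test tolerates the stale-monitor failure above by retrying addShard once; on the second attempt mongos rebuilds the replica set monitor (visible in the lines that follow). A generic retry-once sketch (the helper `withOneRetry` is hypothetical, shown only to illustrate the pattern):

```javascript
// Retry an operation exactly once after a failure, as the test does for
// addShard when the first attempt hits "No replica set monitor active".
function withOneRetry(fn) {
    try {
        return fn();
    } catch (e) {
        // First attempt failed; the second attempt re-creates the
        // monitor state and is expected to succeed.
        return fn();
    }
}

let attempts = 0;
const result = withOneRetry(() => {
    attempts++;
    if (attempts === 1) throw new Error("couldn't connect to new shard");
    return { ok: 1 };
});
console.log(attempts, result.ok); // 2 1
```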
 m30999| Wed Jun 13 10:33:16 [conn1] starting new replica set monitor for replica set remove2-rs1 with seed of ip-0A420969:31200,ip-0A420969:31201
 m30999| Wed Jun 13 10:33:16 [conn1] successfully connected to seed ip-0A420969:31200 for replica set remove2-rs1
 m31200| Wed Jun 13 10:33:16 [initandlisten] connection accepted from 10.66.9.105:58678 #28 (16 connections now open)
 m30999| Wed Jun 13 10:33:16 [conn1] changing hosts to { 0: "ip-0A420969:31200", 1: "ip-0A420969:31201" } from remove2-rs1/
 m30999| Wed Jun 13 10:33:16 [conn1] trying to add new host ip-0A420969:31200 to replica set remove2-rs1
 m30999| Wed Jun 13 10:33:16 [conn1] successfully connected to new host ip-0A420969:31200 in replica set remove2-rs1
 m30999| Wed Jun 13 10:33:16 [conn1] trying to add new host ip-0A420969:31201 to replica set remove2-rs1
 m31200| Wed Jun 13 10:33:16 [initandlisten] connection accepted from 10.66.9.105:58679 #29 (17 connections now open)
 m30999| Wed Jun 13 10:33:16 [conn1] successfully connected to new host ip-0A420969:31201 in replica set remove2-rs1
 m31201| Wed Jun 13 10:33:16 [initandlisten] connection accepted from 10.66.9.105:58680 #15 (10 connections now open)
 m31200| Wed Jun 13 10:33:16 [conn28] end connection 10.66.9.105:58678 (16 connections now open)
 m30999| Wed Jun 13 10:33:16 [conn1] Primary for replica set remove2-rs1 changed to ip-0A420969:31200
 m30999| Wed Jun 13 10:33:16 [conn1] replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31200| Wed Jun 13 10:33:16 [initandlisten] connection accepted from 10.66.9.105:58681 #30 (17 connections now open)
 m30999| Wed Jun 13 10:33:16 [conn1] going to add shard: { _id: "remove2-rs1", host: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201" }
 m30999| Wed Jun 13 10:33:16 [mongosMain] connection accepted from 10.66.9.105:58682 #3 (3 connections now open)
Awaiting ip-0A420969:31201 to be { "ok" : true, "secondary" : true } for connection to ip-0A420969:30999 (rs: undefined)
{
	"remove2-rs0" : {
		"hosts" : [
			{
				"addr" : "ip-0A420969:31100",
				"ok" : true,
				"ismaster" : true,
				"hidden" : false,
				"secondary" : false,
				"pingTimeMillis" : 0
			},
			{
				"addr" : "ip-0A420969:31101",
				"ok" : true,
				"ismaster" : false,
				"hidden" : false,
				"secondary" : true,
				"pingTimeMillis" : 0
			}
		],
		"master" : 0,
		"nextSlave" : 0
	},
	"remove2-rs1" : {
		"hosts" : [
			{
				"addr" : "ip-0A420969:31200",
				"ok" : true,
				"ismaster" : true,
				"hidden" : false,
				"secondary" : false,
				"pingTimeMillis" : 0
			},
			{
				"addr" : "ip-0A420969:31201",
				"ok" : true,
				"ismaster" : false,
				"hidden" : false,
				"secondary" : true,
				"pingTimeMillis" : 0
			}
		],
		"master" : 0,
		"nextSlave" : 0
	}
}
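The "Awaiting … to be { ok: true, secondary: true }" wait above is a predicate over the replica-set-monitor status document just printed. A sketch of that check (the helper name `hostMatches` is an assumption; the status shape mirrors the JSON above):

```javascript
// Check whether a given host in a replica-set-monitor status document
// (shape as printed above) matches an awaited state such as
// { ok: true, secondary: true }.
function hostMatches(status, setName, addr, want) {
    const set = status[setName];
    if (!set) return false;
    const host = set.hosts.find(h => h.addr === addr);
    if (!host) return false;
    return Object.keys(want).every(k => host[k] === want[k]);
}

const status = {
    "remove2-rs1": {
        hosts: [
            { addr: "ip-0A420969:31200", ok: true, ismaster: true,  secondary: false },
            { addr: "ip-0A420969:31201", ok: true, ismaster: false, secondary: true }
        ],
        master: 0
    }
};
console.log(hostMatches(status, "remove2-rs1", "ip-0A420969:31201",
                        { ok: true, secondary: true })); // true
```

The test simply polls until this returns true for the named host.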
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
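The "chunk diff" polled above is the spread between the most- and least-loaded shard in the per-shard chunk-count map the harness prints; the loop waits for the balancer to drive it down. A minimal sketch of that computation:

```javascript
// "chunk diff" as polled above: max minus min chunk count across
// shards, given a { shardName: chunkCount } map.
function chunkDiff(counts) {
    const values = Object.values(counts);
    return Math.max(...values) - Math.min(...values);
}

console.log(chunkDiff({ "remove2-rs0": 8, "remove2-rs1": 0 })); // 8
console.log(chunkDiff({ "remove2-rs0": 5, "remove2-rs1": 3 })); // 2
```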
 m30999| Wed Jun 13 10:33:21 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4b1d1d821664bf17411
 m30999| Wed Jun 13 10:33:21 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_MinKey", lastmod: Timestamp 9000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:33:21 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 9|0||000000000000000000000000 min: { i: MinKey } max: { i: 0.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:33:21 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:33:21 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:33:21 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4b11402b5052316d77f
 m31100| Wed Jun 13 10:33:21 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:21-23", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598001305), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:33:21 [conn18] moveChunk request accepted at version 9|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:33:21 [conn18] moveChunk number of documents: 0
 m31200| Wed Jun 13 10:33:21 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/test.ns, filling with zeroes...
 m31200| Wed Jun 13 10:33:21 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/test.ns, size: 16MB,  took 0.07 secs
 m31200| Wed Jun 13 10:33:21 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/test.0, filling with zeroes...
 m31200| Wed Jun 13 10:33:21 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/test.0, size: 16MB,  took 0.05 secs
 m31200| Wed Jun 13 10:33:21 [migrateThread] build index test.remove2 { _id: 1 }
 m31200| Wed Jun 13 10:33:21 [migrateThread] build index done.  scanned 0 total records. 0 secs
 m31200| Wed Jun 13 10:33:21 [migrateThread] info: creating collection test.remove2 on add index
 m31200| Wed Jun 13 10:33:21 [migrateThread] build index test.remove2 { i: 1.0 }
 m31200| Wed Jun 13 10:33:21 [migrateThread] build index done.  scanned 0 total records. 0.001 secs
 m31200| Wed Jun 13 10:33:21 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
 m29000| Wed Jun 13 10:33:21 [clientcursormon] mem (MB) res:37 virt:117 mapped:32
 m31100| Wed Jun 13 10:33:22 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:33:22 [conn18] moveChunk setting version to: 10|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:33:22 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
 m31200| Wed Jun 13 10:33:22 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:22-16", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598002308), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 5: 127, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 872 } }
 m31100| Wed Jun 13 10:33:22 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:33:22 [conn18] moveChunk updating self version to: 10|1||4fd8a477d1d821664bf17408 through { i: 0.0 } -> { i: 1.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:33:22 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:22-24", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598002309), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:33:22 [conn18] doing delete inline
 m31100| Wed Jun 13 10:33:22 [conn18] moveChunk deleted: 0
 m31100| Wed Jun 13 10:33:22 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:33:22 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:22-25", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598002310), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 2, step6 of 6: 0 } }
 m31100| Wed Jun 13 10:33:22 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:7751 w:149027 reslen:37 1005ms
 m30999| Wed Jun 13 10:33:22 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 18 version: 10|1||4fd8a477d1d821664bf17408 based on: 9|0||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:33:22 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
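Each successful balancer round above moves one chunk from the most-loaded shard to the least-loaded one, which is exactly the 8/0 → 7/1 → 6/2 → 5/3 progression the harness prints. The bookkeeping can be sketched as (helper name `applyMigration` is hypothetical):

```javascript
// Effect of one successful balancer migration on per-shard chunk
// counts: one chunk leaves `from` and arrives at `to`.
function applyMigration(counts, from, to) {
    return { ...counts, [from]: counts[from] - 1, [to]: counts[to] + 1 };
}

let counts = { "remove2-rs0": 8, "remove2-rs1": 0 };
counts = applyMigration(counts, "remove2-rs0", "remove2-rs1");
console.log(counts["remove2-rs0"], counts["remove2-rs1"]); // 7 1
```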
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
 m31201| Wed Jun 13 10:33:22 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/test.ns, filling with zeroes...
 m31201| Wed Jun 13 10:33:22 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/test.ns, size: 16MB,  took 0.078 secs
 m31201| Wed Jun 13 10:33:22 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/test.0, filling with zeroes...
 m31201| Wed Jun 13 10:33:22 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/test.0, size: 16MB,  took 0.052 secs
 m31201| Wed Jun 13 10:33:22 [rsSync] build index test.remove2 { _id: 1 }
 m31201| Wed Jun 13 10:33:22 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31201| Wed Jun 13 10:33:22 [rsSync] info: creating collection test.remove2 on add index
 m31201| Wed Jun 13 10:33:22 [rsSync] build index test.remove2 { i: 1.0 }
 m31201| Wed Jun 13 10:33:22 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31101| Wed Jun 13 10:33:23 [conn14] end connection 10.66.9.105:58673 (9 connections now open)
 m31101| Wed Jun 13 10:33:23 [initandlisten] connection accepted from 10.66.9.105:58684 #15 (10 connections now open)
 m31100| Wed Jun 13 10:33:26 [conn27] end connection 10.66.9.105:58674 (16 connections now open)
 m31100| Wed Jun 13 10:33:26 [initandlisten] connection accepted from 10.66.9.105:58685 #28 (17 connections now open)
 m30999| Wed Jun 13 10:33:27 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4b7d1d821664bf17412
 m30999| Wed Jun 13 10:33:27 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_0.0", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 0.0 }, max: { i: 1.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:33:27 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 10|1||000000000000000000000000 min: { i: 0.0 } max: { i: 1.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:33:27 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:33:27 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:33:27 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4b71402b5052316d780
 m31100| Wed Jun 13 10:33:27 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:27-26", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598007317), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:33:27 [conn18] moveChunk request accepted at version 10|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:33:27 [conn18] moveChunk number of documents: 30
 m31200| Wed Jun 13 10:33:27 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
 m31201| Wed Jun 13 10:33:27 [conn14] end connection 10.66.9.105:58676 (9 connections now open)
 m31201| Wed Jun 13 10:33:27 [initandlisten] connection accepted from 10.66.9.105:58686 #16 (10 connections now open)
 m31100| Wed Jun 13 10:33:28 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:33:28 [conn18] moveChunk setting version to: 11|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:33:28 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
 m31200| Wed Jun 13 10:33:28 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:28-17", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598008319), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 990 } }
 m31100| Wed Jun 13 10:33:28 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:33:28 [conn18] moveChunk updating self version to: 11|1||4fd8a477d1d821664bf17408 through { i: 1.0 } -> { i: 3.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:33:28 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:28-27", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598008320), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:33:28 [conn18] doing delete inline
 m31100| Wed Jun 13 10:33:28 [conn18] moveChunk deleted: 30
 m31100| Wed Jun 13 10:33:28 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:33:28 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:28-28", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598008327), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 2, step6 of 6: 6 } }
 m31100| Wed Jun 13 10:33:28 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:7928 w:155099 reslen:37 1011ms
 m30999| Wed Jun 13 10:33:28 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 19 version: 11|1||4fd8a477d1d821664bf17408 based on: 10|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:33:28 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
 m31200| Wed Jun 13 10:33:30 [conn27] end connection 10.66.9.105:58677 (16 connections now open)
 m31200| Wed Jun 13 10:33:30 [initandlisten] connection accepted from 10.66.9.105:58687 #31 (17 connections now open)
 m30999| Wed Jun 13 10:33:33 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4bdd1d821664bf17413
 m30999| Wed Jun 13 10:33:33 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_1.0", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 1.0 }, max: { i: 3.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:33:33 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 11|1||000000000000000000000000 min: { i: 1.0 } max: { i: 3.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:33:33 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:33:33 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:33:33 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4bd1402b5052316d781
 m31100| Wed Jun 13 10:33:33 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:33-29", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598013335), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:33:33 [conn18] moveChunk request accepted at version 11|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:33:33 [conn18] moveChunk number of documents: 60
 m31200| Wed Jun 13 10:33:33 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
 m31100| Wed Jun 13 10:33:34 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:33:34 [conn18] moveChunk setting version to: 12|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:33:34 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
 m31200| Wed Jun 13 10:33:34 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:34-18", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598014340), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 979 } }
 m31100| Wed Jun 13 10:33:34 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:33:34 [conn18] moveChunk updating self version to: 12|1||4fd8a477d1d821664bf17408 through { i: 3.0 } -> { i: 5.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:33:34 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:34-30", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598014341), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:33:34 [conn18] doing delete inline
 m31100| Wed Jun 13 10:33:34 [conn18] moveChunk deleted: 60
 m31100| Wed Jun 13 10:33:34 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:33:34 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:34-31", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598014357), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 3, step6 of 6: 14 } }
 m31100| Wed Jun 13 10:33:34 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:8178 w:166717 reslen:37 1023ms
 m30999| Wed Jun 13 10:33:34 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 20 version: 12|1||4fd8a477d1d821664bf17408 based on: 11|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:33:34 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
 m30999| Wed Jun 13 10:33:39 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4c3d1d821664bf17414
 m30999| Wed Jun 13 10:33:39 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_3.0", lastmod: Timestamp 12000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 3.0 }, max: { i: 5.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:33:39 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 12|1||000000000000000000000000 min: { i: 3.0 } max: { i: 5.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:33:39 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:33:39 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:33:39 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a4c31402b5052316d782
 m31100| Wed Jun 13 10:33:39 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:39-32", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598019363), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:33:39 [conn18] moveChunk request accepted at version 12|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:33:39 [conn18] moveChunk number of documents: 60
 m31200| Wed Jun 13 10:33:39 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
 m31100| Wed Jun 13 10:33:40 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:33:40 [conn18] moveChunk setting version to: 13|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:33:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
 m31200| Wed Jun 13 10:33:40 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:40-19", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598020374), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 17, step4 of 5: 0, step5 of 5: 990 } }
 m31100| Wed Jun 13 10:33:40 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:33:40 [conn18] moveChunk updating self version to: 13|1||4fd8a477d1d821664bf17408 through { i: 5.0 } -> { i: 6.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:33:40 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:40-33", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598020375), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:33:40 [conn18] doing delete inline
 m31100| Wed Jun 13 10:33:40 [conn18] moveChunk deleted: 60
 m31100| Wed Jun 13 10:33:40 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:33:40 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:40-34", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598020394), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 10, step6 of 6: 17 } }
 m31100| Wed Jun 13 10:33:40 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:8381 w:183987 reslen:37 1032ms
 m30999| Wed Jun 13 10:33:40 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 21 version: 13|1||4fd8a477d1d821664bf17408 based on: 12|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:33:40 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
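Each committed migration in this log bumps the major component of the collection version and resets the minor component: the move accepted at `12|1` sets the recipient to `13|0`, the next one sets `14|0`, and so on, while the epoch (the ObjectId after `||`) is unchanged. A toy model of that bump (illustrative only, not the server's ShardChunkVersion implementation):

```javascript
// Illustrative model of the "moveChunk setting version to" lines:
// on commit the collection moves to (major + 1)|0; the epoch stays fixed.
function bumpOnMigrateCommit(version) {
  const [major] = version.split("|").map(Number);
  return `${major + 1}|0`;
}

console.log(bumpOnMigrateCommit("12|1")); // prints 13|0
```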
ShardingTest input: { "remove2-rs0" : 4, "remove2-rs1" : 4 } min: 4 max: 4
chunk diff: 0
Shard added successfully


----
Attempting to remove shard, restart the set, and then add it back in
----


Removing shard with name: remove2-rs1
 m30999| Wed Jun 13 10:33:40 [conn1] going to start draining shard: remove2-rs1
 m30999| primaryLocalDoc: { _id: "local", primary: "remove2-rs1" }
{
	"msg" : "draining started successfully",
	"state" : "started",
	"shard" : "remove2-rs1",
	"ok" : 1
}
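The `removeShard` status documents that follow have a fixed shape: while chunks or databases remain on the draining shard the command reports `state: "ongoing"` with a `remaining` count, and once both reach zero it reports completion. A hypothetical model of those responses (the completed-state wording is from the command's usual behavior, not from this log):

```javascript
// Hypothetical model of the removeShard status documents in this log.
// While chunks or databases remain on the shard, the command reports
// "draining ongoing"; once both hit zero, draining is complete.
function removeShardStatus(shard, remainingChunks, remainingDbs) {
  if (remainingChunks === 0 && remainingDbs === 0) {
    return { msg: "removeshard completed successfully", state: "completed", shard: shard, ok: 1 };
  }
  return {
    msg: "draining ongoing",
    state: "ongoing",
    remaining: { chunks: remainingChunks, dbs: remainingDbs },
    ok: 1,
  };
}
```

The test above polls this response in a loop, which is why the same `"draining ongoing"` document appears repeatedly until the balancer finishes moving chunks off `remove2-rs1`.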
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(4),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m30999| Wed Jun 13 10:33:45 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4c9d1d821664bf17415
 m30999| Wed Jun 13 10:33:45 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_3.0", lastmod: Timestamp 13000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 3.0 }, max: { i: 5.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:33:45 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 13|0||000000000000000000000000 min: { i: 3.0 } max: { i: 5.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:33:45 [conn30] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:33:45 [conn30] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:33:45 [conn30] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' acquired, ts : 4fd8a4c9259cec6b7cffafda
 m31200| Wed Jun 13 10:33:45 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:45-20", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598025402), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:45 [conn30] moveChunk request accepted at version 13|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:33:45 [conn30] moveChunk number of documents: 60
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(4),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31100| Wed Jun 13 10:33:45 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(4),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:33:46 [conn30] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:33:46 [conn30] moveChunk setting version to: 14|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:33:46 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
 m31100| Wed Jun 13 10:33:46 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:46-35", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598026414), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 17, step4 of 5: 0, step5 of 5: 990 } }
 m31200| Wed Jun 13 10:33:46 [conn30] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:33:46 [conn30] moveChunk updating self version to: 14|1||4fd8a477d1d821664bf17408 through { i: MinKey } -> { i: 0.0 } for collection 'test.remove2'
 m31200| Wed Jun 13 10:33:46 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:46-21", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598026415), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:46 [conn30] doing delete inline
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(3),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:33:46 [conn30] moveChunk deleted: 60
 m31200| Wed Jun 13 10:33:46 [conn30] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' unlocked. 
 m31200| Wed Jun 13 10:33:46 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:46-22", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598026427), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 10, step6 of 6: 11 } }
 m31200| Wed Jun 13 10:33:46 [conn30] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:7 r:413 w:11310 reslen:37 1025ms
 m30999| Wed Jun 13 10:33:46 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 22 version: 14|1||4fd8a477d1d821664bf17408 based on: 13|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:33:46 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(3),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m30999| Wed Jun 13 10:33:51 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4cfd1d821664bf17416
 m30999| Wed Jun 13 10:33:51 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_1.0", lastmod: Timestamp 12000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 1.0 }, max: { i: 3.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:33:51 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 12|0||000000000000000000000000 min: { i: 1.0 } max: { i: 3.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:33:51 [conn30] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:33:51 [conn30] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:33:51 [conn30] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' acquired, ts : 4fd8a4cf259cec6b7cffafdb
 m31200| Wed Jun 13 10:33:51 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:51-23", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598031433), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:51 [conn30] moveChunk request accepted at version 14|1||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:33:51 [conn30] moveChunk number of documents: 60
 m31100| Wed Jun 13 10:33:51 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(3),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:33:52 [conn30] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:33:52 [conn30] moveChunk setting version to: 15|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:33:52 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
 m31100| Wed Jun 13 10:33:52 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:52-36", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598032443), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 16, step4 of 5: 0, step5 of 5: 990 } }
 m31200| Wed Jun 13 10:33:52 [conn30] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:33:52 [conn30] moveChunk updating self version to: 15|1||4fd8a477d1d821664bf17408 through { i: MinKey } -> { i: 0.0 } for collection 'test.remove2'
 m31200| Wed Jun 13 10:33:52 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:52-24", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598032444), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:52 [conn30] doing delete inline
 m31200| Wed Jun 13 10:33:52 [conn30] moveChunk deleted: 60
 m31200| Wed Jun 13 10:33:52 [conn30] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' unlocked. 
 m31200| Wed Jun 13 10:33:52 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:52-25", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598032456), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 10, step6 of 6: 10 } }
 m31200| Wed Jun 13 10:33:52 [conn30] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:7 r:612 w:22199 reslen:37 1024ms
 m30999| Wed Jun 13 10:33:52 [Balancer] ChunkManager: time to load chunks for test.remove2: 1ms sequenceNumber: 23 version: 15|1||4fd8a477d1d821664bf17408 based on: 14|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:33:52 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(2),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31101| Wed Jun 13 10:33:53 [conn15] end connection 10.66.9.105:58684 (9 connections now open)
 m31101| Wed Jun 13 10:33:53 [initandlisten] connection accepted from 10.66.9.105:58688 #16 (10 connections now open)
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(2),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31100| Wed Jun 13 10:33:56 [conn28] end connection 10.66.9.105:58685 (16 connections now open)
 m31100| Wed Jun 13 10:33:56 [initandlisten] connection accepted from 10.66.9.105:58689 #29 (17 connections now open)
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(2),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31201| Wed Jun 13 10:33:57 [conn16] end connection 10.66.9.105:58686 (9 connections now open)
 m31201| Wed Jun 13 10:33:57 [initandlisten] connection accepted from 10.66.9.105:58690 #17 (10 connections now open)
 m30999| Wed Jun 13 10:33:57 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4d5d1d821664bf17417
 m30999| Wed Jun 13 10:33:57 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_0.0", lastmod: Timestamp 11000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 0.0 }, max: { i: 1.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:33:57 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 11|0||000000000000000000000000 min: { i: 0.0 } max: { i: 1.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:33:57 [conn30] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:33:57 [conn30] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:33:57 [conn30] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' acquired, ts : 4fd8a4d5259cec6b7cffafdc
 m31200| Wed Jun 13 10:33:57 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:57-26", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598037464), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:57 [conn30] moveChunk request accepted at version 15|1||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:33:57 [conn30] moveChunk number of documents: 30
 m31100| Wed Jun 13 10:33:57 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(2),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:33:58 [conn30] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:33:58 [conn30] moveChunk setting version to: 16|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:33:58 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
 m31100| Wed Jun 13 10:33:58 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:58-37", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598038475), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 1000 } }
 m31200| Wed Jun 13 10:33:58 [conn30] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:33:58 [conn30] moveChunk updating self version to: 16|1||4fd8a477d1d821664bf17408 through { i: MinKey } -> { i: 0.0 } for collection 'test.remove2'
 m31200| Wed Jun 13 10:33:58 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:58-27", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598038476), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:33:58 [conn30] doing delete inline
 m31200| Wed Jun 13 10:33:58 [conn30] moveChunk deleted: 30
 m31200| Wed Jun 13 10:33:58 [conn30] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' unlocked. 
 m31200| Wed Jun 13 10:33:58 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:33:58-28", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598038482), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 11, step6 of 6: 5 } }
 m31200| Wed Jun 13 10:33:58 [conn30] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:7 r:817 w:27247 reslen:37 1019ms
 m30999| Wed Jun 13 10:33:58 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 24 version: 16|1||4fd8a477d1d821664bf17408 based on: 15|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:33:58 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(1),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:34:00 [conn31] end connection 10.66.9.105:58687 (16 connections now open)
 m31200| Wed Jun 13 10:34:00 [initandlisten] connection accepted from 10.66.9.105:58691 #32 (17 connections now open)
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(1),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m30999| Wed Jun 13 10:34:03 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4dbd1d821664bf17418
 m30999| Wed Jun 13 10:34:03 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_MinKey", lastmod: Timestamp 16000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:34:03 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 16|1||000000000000000000000000 min: { i: MinKey } max: { i: 0.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:34:03 [conn30] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:34:03 [conn30] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:34:03 [conn30] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' acquired, ts : 4fd8a4db259cec6b7cffafdd
 m31200| Wed Jun 13 10:34:03 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:34:03-29", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598043489), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:34:03 [conn30] moveChunk request accepted at version 16|1||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:34:03 [conn30] moveChunk number of documents: 0
 m31100| Wed Jun 13 10:34:03 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(1),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:34:04 [conn30] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:34:04 [conn30] moveChunk setting version to: 17|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:34:04 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
 m31100| Wed Jun 13 10:34:04 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:34:04-38", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598044492), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1000 } }
 m31200| Wed Jun 13 10:34:04 [conn30] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:34:04 [conn30] moveChunk moved last chunk out for collection 'test.remove2'
 m31200| Wed Jun 13 10:34:04 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:34:04-30", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598044493), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:34:04 [conn30] doing delete inline
 m31200| Wed Jun 13 10:34:04 [conn30] moveChunk deleted: 0
 m31200| Wed Jun 13 10:34:04 [conn30] distributed lock 'test.remove2/ip-0A420969:31200:1339597977:41' unlocked. 
 m31200| Wed Jun 13 10:34:04 [conn30] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:34:04-31", server: "ip-0A420969", clientAddr: "10.66.9.105:58681", time: new Date(1339598044494), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1000, step5 of 6: 2, step6 of 6: 0 } }
 m31200| Wed Jun 13 10:34:04 [conn30] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:7 r:942 w:27337 reslen:37 1006ms
 m30999| Wed Jun 13 10:34:04 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 25 version: 17|0||4fd8a477d1d821664bf17408 based on: 16|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:34:04 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
 m30999| Wed Jun 13 10:34:04 [conn1] going to remove shard: remove2-rs1
 m30999| Wed Jun 13 10:34:04 [conn1] deleting replica set monitor for: remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31201| Wed Jun 13 10:34:04 [conn15] end connection 10.66.9.105:58680 (9 connections now open)
 m31200| Wed Jun 13 10:34:04 [conn29] end connection 10.66.9.105:58679 (16 connections now open)
{
	"msg" : "removeshard completed successfully",
	"state" : "completed",
	"shard" : "remove2-rs1",
	"ok" : 1
}
 m31200| Wed Jun 13 10:34:04 [conn3] dropDatabase test
Shard removed successfully
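The repeated `"draining ongoing"` documents above are the output of polling `removeShard` until it reports `"completed"`. A minimal sketch of that loop, assuming a hypothetical `runCommand` callback in place of a live `db.adminCommand` on a mongos:

```javascript
// Poll removeShard until the drain finishes. runCommand stands in for
// db.adminCommand against a mongos; it is a hypothetical callback here
// so the sketch has no server dependency.
function waitForShardRemoval(runCommand, shard) {
  let res;
  do {
    res = runCommand({ removeShard: shard });      // each call prints one status doc
  } while (res.ok && res.state !== "completed");   // "ongoing" -> keep polling
  return res;
}
```

Each iteration corresponds to one `{ "msg" : "draining ongoing", ... }` block in the log; the final iteration returns the `"removeshard completed successfully"` document shown just above.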
ReplSetTest n: 0 ports: [ 31200, 31201 ]	31200 number
ReplSetTest stop *** Shutting down mongod in port 31200 ***
 m31200| Wed Jun 13 10:34:04 [initandlisten] connection accepted from 127.0.0.1:58692 #33 (17 connections now open)
 m31200| Wed Jun 13 10:34:04 [conn33] terminating, shutdown command received
 m31200| Wed Jun 13 10:34:04 dbexit: shutdown called
 m31200| Wed Jun 13 10:34:04 [conn33] shutdown: going to close listening sockets...
 m31200| Wed Jun 13 10:34:04 [conn33] closing listening socket: 440
 m31200| Wed Jun 13 10:34:04 [conn33] closing listening socket: 444
 m31200| Wed Jun 13 10:34:04 [conn33] shutdown: going to flush diaglog...
 m31200| Wed Jun 13 10:34:04 [conn33] shutdown: going to close sockets...
 m31200| Wed Jun 13 10:34:04 [conn33] shutdown: waiting for fs preallocator...
 m31200| Wed Jun 13 10:34:04 [conn33] shutdown: closing all files...
Wed Jun 13 10:34:04 DBClientCursor::init call() failed
Wed Jun 13 10:34:04 shell: stopped mongo program on port 31200
 m29000| Wed Jun 13 10:34:04 [conn12] end connection 10.66.9.105:58675 (11 connections now open)
 m29000| Wed Jun 13 10:34:04 [conn11] end connection 10.66.9.105:58672 (10 connections now open)
 m31200| Wed Jun 13 10:34:04 [conn2] end connection 127.0.0.1:58591 (16 connections now open)
 m31200| Wed Jun 13 10:34:04 [conn33] closeAllFiles() finished
 m31200| Wed Jun 13 10:34:04 [conn33] shutdown: removing fs lock...
 m31200| Wed Jun 13 10:34:04 dbexit: really exiting now
 m31201| Wed Jun 13 10:34:04 [conn17] end connection 10.66.9.105:58690 (8 connections now open)
 m31201| Wed Jun 13 10:34:04 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: ip-0A420969:31200
 m31101| Wed Jun 13 10:34:04 [conn12] end connection 10.66.9.105:58666 (9 connections now open)
ReplSetTest n: 1 ports: [ 31200, 31201 ]	31201 number
 m31101| Wed Jun 13 10:34:04 [conn11] end connection 10.66.9.105:58664 (8 connections now open)
ReplSetTest stop *** Shutting down mongod in port 31201 ***
 m31100| Wed Jun 13 10:34:04 [conn23] end connection 10.66.9.105:58663 (16 connections now open)
 m31100| Wed Jun 13 10:34:04 [conn24] end connection 10.66.9.105:58665 (16 connections now open)
 m31100| Wed Jun 13 10:34:04 [conn25] end connection 10.66.9.105:58667 (14 connections now open)
 m30999| Wed Jun 13 10:34:04 [WriteBackListener-ip-0A420969:31200] DBClientCursor::init call() failed
 m30999| Wed Jun 13 10:34:04 [WriteBackListener-ip-0A420969:31200] WriteBackListener exception : DBClientBase::findN: transport error: ip-0A420969:31200 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd8a477d1d821664bf17406') }
 m31201| Wed Jun 13 10:34:04 [initandlisten] connection accepted from 127.0.0.1:58693 #18 (9 connections now open)
 m31201| Wed Jun 13 10:34:04 [conn18] terminating, shutdown command received
 m31201| Wed Jun 13 10:34:04 dbexit: shutdown called
 m31201| Wed Jun 13 10:34:04 [conn18] shutdown: going to close listening sockets...
 m31201| Wed Jun 13 10:34:04 [conn18] closing listening socket: 444
 m31201| Wed Jun 13 10:34:04 [conn18] closing listening socket: 448
 m31201| Wed Jun 13 10:34:04 [conn18] shutdown: going to flush diaglog...
 m31201| Wed Jun 13 10:34:04 [conn18] shutdown: going to close sockets...
 m31201| Wed Jun 13 10:34:04 [conn18] shutdown: waiting for fs preallocator...
 m31201| Wed Jun 13 10:34:04 [conn18] shutdown: closing all files...
Wed Jun 13 10:34:04 DBClientCursor::init call() failed
 m31201| Wed Jun 13 10:34:04 [conn2] end connection 127.0.0.1:58594 (8 connections now open)
 m30999| Wed Jun 13 10:34:04 [WriteBackListener-ip-0A420969:31201] DBClientCursor::init call() failed
 m30999| Wed Jun 13 10:34:04 [WriteBackListener-ip-0A420969:31201] WriteBackListener exception : DBClientBase::findN: transport error: ip-0A420969:31201 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd8a477d1d821664bf17406') }
 m31201| Wed Jun 13 10:34:04 [conn12] end connection 10.66.9.105:58660 (7 connections now open)
 m31201| Wed Jun 13 10:34:04 [conn6] end connection 10.66.9.105:58619 (6 connections now open)
 m31201| Wed Jun 13 10:34:04 [conn3] end connection 127.0.0.1:58592 (5 connections now open)
 m31201| Wed Jun 13 10:34:04 [conn7] end connection 10.66.9.105:58621 (4 connections now open)
 m31201| Wed Jun 13 10:34:04 [conn11] end connection 10.66.9.105:58658 (3 connections now open)
 m31201| Wed Jun 13 10:34:04 [conn13] end connection 10.66.9.105:58671 (2 connections now open)
 m31201| Wed Jun 13 10:34:04 [conn18] closeAllFiles() finished
 m31201| Wed Jun 13 10:34:04 [conn18] shutdown: removing fs lock...
 m31201| Wed Jun 13 10:34:04 dbexit: really exiting now
Wed Jun 13 10:34:05 shell: stopped mongo program on port 31201
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
Sleeping for 20 seconds to let the other shard's ReplicaSetMonitor time out
Wed Jun 13 10:34:07 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
Wed Jun 13 10:34:07 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
Wed Jun 13 10:34:08 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:34:08 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31201
Wed Jun 13 10:34:09 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
 m30999| Wed Jun 13 10:34:09 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4e1d1d821664bf17419
 m30999| Wed Jun 13 10:34:09 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
Wed Jun 13 10:34:10 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:34:10 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31200 socket exception
Wed Jun 13 10:34:10 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
Wed Jun 13 10:34:11 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
Wed Jun 13 10:34:11 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31201 socket exception
Wed Jun 13 10:34:12 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
Wed Jun 13 10:34:12 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 1 checks in a row. Polling will stop after 29 more failed checks
 m31100| Wed Jun 13 10:34:13 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
 m31100| Wed Jun 13 10:34:13 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
 m31100| Wed Jun 13 10:34:14 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
 m31100| Wed Jun 13 10:34:14 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31201
 m31100| Wed Jun 13 10:34:15 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
 m31100| Wed Jun 13 10:34:16 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
 m31100| Wed Jun 13 10:34:16 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31200 socket exception
 m31100| Wed Jun 13 10:34:16 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
 m31100| Wed Jun 13 10:34:17 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
 m31100| Wed Jun 13 10:34:17 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31201 socket exception
 m31100| Wed Jun 13 10:34:18 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
 m31100| Wed Jun 13 10:34:18 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 1 checks in a row. Polling will stop after 0 more failed checks
 m31100| Wed Jun 13 10:34:18 [ReplicaSetMonitorWatcher] Replica set remove2-rs1 was down for 1 checks in a row. Stopping polled monitoring of the set.
 m31100| Wed Jun 13 10:34:18 [ReplicaSetMonitorWatcher] deleting replica set monitor for: remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m30999| Wed Jun 13 10:34:19 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4ebd1d821664bf1741a
 m30999| Wed Jun 13 10:34:19 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
Wed Jun 13 10:34:22 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
Wed Jun 13 10:34:23 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:34:23 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
 m31101| Wed Jun 13 10:34:23 [conn16] end connection 10.66.9.105:58688 (7 connections now open)
 m31101| Wed Jun 13 10:34:23 [initandlisten] connection accepted from 10.66.9.105:58703 #17 (8 connections now open)
Wed Jun 13 10:34:24 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
Wed Jun 13 10:34:25 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201 ]	31200 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 31200,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "remove2-rs1",
	"dbpath" : "$set-$node",
	"restart" : undefined,
	"pathOpts" : {
		"node" : 0,
		"set" : "remove2-rs1"
	}
}
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs1-0'
Wed Jun 13 10:34:25 shell: started program mongod.exe --oplogSize 40 --port 31200 --noprealloc --smallfiles --rest --replSet remove2-rs1 --dbpath /data/db/remove2-rs1-0
 m31200| note: noprealloc may hurt performance in many applications
 m31200| Wed Jun 13 10:34:25 
 m31200| Wed Jun 13 10:34:25 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
 m31200| Wed Jun 13 10:34:25 
 m31200| Wed Jun 13 10:34:25 [initandlisten] MongoDB starting : pid=460 port=31200 dbpath=/data/db/remove2-rs1-0 32-bit host=ip-0A420969
 m31200| Wed Jun 13 10:34:25 [initandlisten] 
 m31200| Wed Jun 13 10:34:25 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
 m31200| Wed Jun 13 10:34:25 [initandlisten] **       Not recommended for production.
 m31200| Wed Jun 13 10:34:25 [initandlisten] 
 m31200| Wed Jun 13 10:34:25 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
 m31200| Wed Jun 13 10:34:25 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
 m31200| Wed Jun 13 10:34:25 [initandlisten] **       with --journal, the limit is lower
 m31200| Wed Jun 13 10:34:25 [initandlisten] 
 m31200| Wed Jun 13 10:34:25 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
 m31200| Wed Jun 13 10:34:25 [initandlisten] git version: 163a2d64ee88f7a4efb604f6208578ef117c4bc3
 m31200| Wed Jun 13 10:34:25 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_49
 m31200| Wed Jun 13 10:34:25 [initandlisten] options: { dbpath: "/data/db/remove2-rs1-0", noprealloc: true, oplogSize: 40, port: 31200, replSet: "remove2-rs1", rest: true, smallfiles: true }
 m31200| Wed Jun 13 10:34:25 [initandlisten] waiting for connections on port 31200
 m31200| Wed Jun 13 10:34:25 [websvr] admin web console waiting for connections on port 32200
 m31200| Wed Jun 13 10:34:25 [initandlisten] connection accepted from 127.0.0.1:58706 #1 (1 connection now open)
 m31200| Wed Jun 13 10:34:25 [conn1] end connection 127.0.0.1:58706 (0 connections now open)
 m31200| Wed Jun 13 10:34:25 [initandlisten] connection accepted from 127.0.0.1:58707 #2 (1 connection now open)
 m31200| Wed Jun 13 10:34:25 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
 m31200| Wed Jun 13 10:34:25 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
Wed Jun 13 10:34:26 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 ok
 m31200| Wed Jun 13 10:34:26 [initandlisten] connection accepted from 10.66.9.105:58704 #3 (2 connections now open)
Wed Jun 13 10:34:26 [ReplicaSetMonitorWatcher] warning: node: ip-0A420969:31200 isn't a part of set: remove2-rs1 ismaster: { ismaster: false, secondary: false, info: "can't get local.system.replset config from self or any seed (EMPTYCONFIG)", isreplicaset: true, maxBsonObjectSize: 16777216, localTime: new Date(1339598066214), ok: 1.0 }
Wed Jun 13 10:34:26 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
[ connection to ip-0A420969:31200, connection to ip-0A420969:31201 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201 ]	31201 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 31201,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "remove2-rs1",
	"dbpath" : "$set-$node",
	"restart" : undefined,
	"pathOpts" : {
		"node" : 1,
		"set" : "remove2-rs1"
	}
}
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs1-1'
 m31200| Wed Jun 13 10:34:26 [initandlisten] connection accepted from 127.0.0.1:58705 #4 (3 connections now open)
Wed Jun 13 10:34:26 shell: started program mongod.exe --oplogSize 40 --port 31201 --noprealloc --smallfiles --rest --replSet remove2-rs1 --dbpath /data/db/remove2-rs1-1
 m31201| note: noprealloc may hurt performance in many applications
 m31201| Wed Jun 13 10:34:26 
 m31201| Wed Jun 13 10:34:26 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
 m31201| Wed Jun 13 10:34:26 
 m31201| Wed Jun 13 10:34:26 [initandlisten] MongoDB starting : pid=2444 port=31201 dbpath=/data/db/remove2-rs1-1 32-bit host=ip-0A420969
 m31201| Wed Jun 13 10:34:26 [initandlisten] 
 m31201| Wed Jun 13 10:34:26 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
 m31201| Wed Jun 13 10:34:26 [initandlisten] **       Not recommended for production.
 m31201| Wed Jun 13 10:34:26 [initandlisten] 
 m31201| Wed Jun 13 10:34:26 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
 m31201| Wed Jun 13 10:34:26 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
 m31201| Wed Jun 13 10:34:26 [initandlisten] **       with --journal, the limit is lower
 m31201| Wed Jun 13 10:34:26 [initandlisten] 
 m31201| Wed Jun 13 10:34:26 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
 m31201| Wed Jun 13 10:34:26 [initandlisten] git version: 163a2d64ee88f7a4efb604f6208578ef117c4bc3
 m31201| Wed Jun 13 10:34:26 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_49
 m31201| Wed Jun 13 10:34:26 [initandlisten] options: { dbpath: "/data/db/remove2-rs1-1", noprealloc: true, oplogSize: 40, port: 31201, replSet: "remove2-rs1", rest: true, smallfiles: true }
 m31201| Wed Jun 13 10:34:26 [initandlisten] waiting for connections on port 31201
 m31201| Wed Jun 13 10:34:26 [websvr] admin web console waiting for connections on port 32201
 m31201| Wed Jun 13 10:34:26 [initandlisten] connection accepted from 127.0.0.1:58710 #1 (1 connection now open)
 m31201| Wed Jun 13 10:34:26 [conn1] end connection 127.0.0.1:58710 (0 connections now open)
 m31201| Wed Jun 13 10:34:26 [initandlisten] connection accepted from 127.0.0.1:58711 #2 (1 connection now open)
 m31201| Wed Jun 13 10:34:26 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
 m31201| Wed Jun 13 10:34:26 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
 m31100| Wed Jun 13 10:34:26 [conn29] end connection 10.66.9.105:58689 (13 connections now open)
 m31100| Wed Jun 13 10:34:26 [initandlisten] connection accepted from 10.66.9.105:58712 #30 (14 connections now open)
Wed Jun 13 10:34:26 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 ok
 m31201| Wed Jun 13 10:34:26 [initandlisten] connection accepted from 10.66.9.105:58708 #3 (2 connections now open)
Wed Jun 13 10:34:26 [ReplicaSetMonitorWatcher] warning: node: ip-0A420969:31201 isn't a part of set: remove2-rs1 ismaster: { ismaster: false, secondary: false, info: "can't get local.system.replset config from self or any seed (EMPTYCONFIG)", isreplicaset: true, maxBsonObjectSize: 16777216, localTime: new Date(1339598066714), ok: 1.0 }
 m31201| Wed Jun 13 10:34:26 [initandlisten] connection accepted from 127.0.0.1:58709 #4 (3 connections now open)
[ connection to ip-0A420969:31200, connection to ip-0A420969:31201 ]
{
	"replSetInitiate" : {
		"_id" : "remove2-rs1",
		"members" : [
			{
				"_id" : 0,
				"host" : "ip-0A420969:31200"
			},
			{
				"_id" : 1,
				"host" : "ip-0A420969:31201"
			}
		]
	}
}
 m31200| Wed Jun 13 10:34:26 [conn4] replSet replSetInitiate admin command received from client
 m31200| Wed Jun 13 10:34:26 [conn4] replSet replSetInitiate config object parses ok, 2 members specified
 m31200| Wed Jun 13 10:34:26 [initandlisten] connection accepted from 10.66.9.105:58713 #5 (4 connections now open)
 m31200| Wed Jun 13 10:34:26 [conn5] end connection 10.66.9.105:58713 (3 connections now open)
 m31201| Wed Jun 13 10:34:26 [initandlisten] connection accepted from 10.66.9.105:58714 #5 (4 connections now open)
 m31200| Wed Jun 13 10:34:26 [conn4] replSet replSetInitiate all members seem up
 m31200| Wed Jun 13 10:34:26 [conn4] ******
 m31200| Wed Jun 13 10:34:26 [conn4] creating replication oplog of size: 40MB...
 m31200| Wed Jun 13 10:34:26 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/local.ns, filling with zeroes...
 m31200| Wed Jun 13 10:34:26 [FileAllocator] creating directory /data/db/remove2-rs1-0/_tmp
 m31200| Wed Jun 13 10:34:26 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/local.ns, size: 16MB,  took 0.05 secs
 m31200| Wed Jun 13 10:34:26 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/local.0, filling with zeroes...
 m31200| Wed Jun 13 10:34:27 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/local.0, size: 64MB,  took 0.197 secs
Wed Jun 13 10:34:27 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
Wed Jun 13 10:34:27 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 2 checks in a row. Polling will stop after 28 more failed checks
 m31200| Wed Jun 13 10:34:29 [conn4] ******
 m31200| Wed Jun 13 10:34:29 [conn4] replSet info saving a newer config version to local.system.replset
 m31200| Wed Jun 13 10:34:29 [conn4] replSet saveConfigLocally done
 m31200| Wed Jun 13 10:34:29 [conn4] replSet replSetInitiate config now saved locally.  Should come online in about a minute.
 m31200| Wed Jun 13 10:34:29 [conn4] command admin.$cmd command: { replSetInitiate: { _id: "remove2-rs1", members: [ { _id: 0.0, host: "ip-0A420969:31200" }, { _id: 1.0, host: "ip-0A420969:31201" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:2685068 w:35 reslen:112 2682ms
{
	"info" : "Config now saved locally.  Should come online in about a minute.",
	"ok" : 1
}
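The `replSetInitiate` command logged just above carries the full set configuration for `remove2-rs1`. A minimal sketch of building that config document (hostname and ports taken from this log; the log shows the member `_id` values as doubles, here plain numbers; no arbiter is configured, which is why the "total number of votes is even" warning appears later):

```javascript
// Build a replSetInitiate config document like the one in the log above.
// Hostname and ports are taken from this log; adjust for your environment.
function makeReplSetConfig(setName, hosts) {
  return {
    _id: setName,
    members: hosts.map(function (host, i) {
      return { _id: i, host: host };
    }),
  };
}

var config = makeReplSetConfig("remove2-rs1", [
  "ip-0A420969:31200",
  "ip-0A420969:31201",
]);
console.log(JSON.stringify(config));
// In the mongo shell this document would be passed to rs.initiate(config)
// or db.adminCommand({ replSetInitiate: config }).
```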
 m30999| Wed Jun 13 10:34:29 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4f5d1d821664bf1741b
 m30999| Wed Jun 13 10:34:29 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
 m31200| Wed Jun 13 10:34:35 [rsStart] replSet I am ip-0A420969:31200
 m31200| Wed Jun 13 10:34:35 [rsStart] replSet STARTUP2
 m31200| Wed Jun 13 10:34:35 [rsHealthPoll] replSet member ip-0A420969:31201 is up
 m31200| Wed Jun 13 10:34:35 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
 m31200| Wed Jun 13 10:34:35 [rsSync] replSet SECONDARY
 m31200| Wed Jun 13 10:34:35 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
 m31201| Wed Jun 13 10:34:36 [rsStart] trying to contact ip-0A420969:31200
 m31200| Wed Jun 13 10:34:36 [initandlisten] connection accepted from 10.66.9.105:58715 #6 (4 connections now open)
 m31201| Wed Jun 13 10:34:36 [initandlisten] connection accepted from 10.66.9.105:58716 #6 (5 connections now open)
 m31201| Wed Jun 13 10:34:36 [rsStart] replSet I am ip-0A420969:31201
 m31201| Wed Jun 13 10:34:36 [conn6] end connection 10.66.9.105:58716 (4 connections now open)
 m31201| Wed Jun 13 10:34:36 [rsStart] replSet got config version 1 from a remote, saving locally
 m31201| Wed Jun 13 10:34:36 [rsStart] replSet info saving a newer config version to local.system.replset
 m31201| Wed Jun 13 10:34:36 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/local.ns, filling with zeroes...
 m31201| Wed Jun 13 10:34:36 [FileAllocator] creating directory /data/db/remove2-rs1-1/_tmp
 m31201| Wed Jun 13 10:34:36 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/local.ns, size: 16MB,  took 0.111 secs
 m31201| Wed Jun 13 10:34:36 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/local.0, filling with zeroes...
 m31201| Wed Jun 13 10:34:36 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/local.0, size: 16MB,  took 0.05 secs
 m31201| Wed Jun 13 10:34:37 [rsStart] replSet saveConfigLocally done
 m31201| Wed Jun 13 10:34:37 [rsStart] replSet STARTUP2
 m31201| Wed Jun 13 10:34:37 [rsSync] ******
 m31201| Wed Jun 13 10:34:37 [rsSync] creating replication oplog of size: 40MB...
 m31201| Wed Jun 13 10:34:37 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
 m31201| Wed Jun 13 10:34:37 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/local.1, filling with zeroes...
 m31201| Wed Jun 13 10:34:37 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/local.1, size: 64MB,  took 0.2 secs
Wed Jun 13 10:34:37 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
Wed Jun 13 10:34:37 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31201
 m31200| Wed Jun 13 10:34:37 [rsHealthPoll] replSet member ip-0A420969:31201 is now in state STARTUP2
 m31200| Wed Jun 13 10:34:37 [rsMgr] not electing self, ip-0A420969:31201 would veto
Wed Jun 13 10:34:38 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: ip-0A420969:31200 { setName: "remove2-rs1", ismaster: false, secondary: true, hosts: [ "ip-0A420969:31200", "ip-0A420969:31201" ], me: "ip-0A420969:31200", maxBsonObjectSize: 16777216, localTime: new Date(1339598078715), ok: 1.0 }
 m31200| Wed Jun 13 10:34:38 [initandlisten] connection accepted from 10.66.9.105:58717 #7 (5 connections now open)
Wed Jun 13 10:34:38 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: ip-0A420969:31201 { setName: "remove2-rs1", ismaster: false, secondary: false, hosts: [ "ip-0A420969:31201", "ip-0A420969:31200" ], me: "ip-0A420969:31201", maxBsonObjectSize: 16777216, localTime: new Date(1339598078787), ok: 1.0 }
 m31201| Wed Jun 13 10:34:38 [initandlisten] connection accepted from 10.66.9.105:58718 #7 (5 connections now open)
 m31201| Wed Jun 13 10:34:38 [rsHealthPoll] replSet member ip-0A420969:31200 is up
 m31201| Wed Jun 13 10:34:38 [rsHealthPoll] replSet member ip-0A420969:31200 is now in state SECONDARY
 m30999| Wed Jun 13 10:34:39 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a4ffd1d821664bf1741c
 m30999| Wed Jun 13 10:34:39 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
Wed Jun 13 10:34:39 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
Wed Jun 13 10:34:39 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 3 checks in a row. Polling will stop after 27 more failed checks
 m31201| Wed Jun 13 10:34:40 [rsSync] ******
 m31201| Wed Jun 13 10:34:40 [rsSync] replSet initial sync pending
 m31201| Wed Jun 13 10:34:40 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
 m31200| Wed Jun 13 10:34:43 [rsMgr] replSet info electSelf 0
 m31201| Wed Jun 13 10:34:43 [conn5] replSet RECOVERING
 m31201| Wed Jun 13 10:34:43 [conn5] replSet info voting yea for ip-0A420969:31200 (0)
 m31200| Wed Jun 13 10:34:43 [rsMgr] replSet PRIMARY
 m31201| Wed Jun 13 10:34:44 [rsHealthPoll] replSet member ip-0A420969:31200 is now in state PRIMARY
ReplSetTest Timestamp(1339598069000, 1)
ReplSetTest waiting for connection to ip-0A420969:31201 to have an oplog built.
 m31200| Wed Jun 13 10:34:45 [rsHealthPoll] replSet member ip-0A420969:31201 is now in state RECOVERING
ReplSetTest waiting for connection to ip-0A420969:31201 to have an oplog built.
ReplSetTest waiting for connection to ip-0A420969:31201 to have an oplog built.
 m30999| Wed Jun 13 10:34:49 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a509d1d821664bf1741d
 m30999| Wed Jun 13 10:34:49 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
Wed Jun 13 10:34:49 [ReplicaSetMonitorWatcher] Primary for replica set remove2-rs1 changed to ip-0A420969:31200
ReplSetTest waiting for connection to ip-0A420969:31201 to have an oplog built.
ReplSetTest waiting for connection to ip-0A420969:31201 to have an oplog built.
 m31101| Wed Jun 13 10:34:53 [conn17] end connection 10.66.9.105:58703 (7 connections now open)
 m31101| Wed Jun 13 10:34:53 [initandlisten] connection accepted from 10.66.9.105:58719 #18 (8 connections now open)
ReplSetTest waiting for connection to ip-0A420969:31201 to have an oplog built.
 m31201| Wed Jun 13 10:34:56 [rsSync] replSet initial sync pending
 m31201| Wed Jun 13 10:34:56 [rsSync] replSet syncing to: ip-0A420969:31200
 m31200| Wed Jun 13 10:34:56 [initandlisten] connection accepted from 10.66.9.105:58720 #8 (6 connections now open)
 m31201| Wed Jun 13 10:34:56 [rsSync] build index local.me { _id: 1 }
 m31201| Wed Jun 13 10:34:56 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31201| Wed Jun 13 10:34:56 [rsSync] replSet initial sync drop all databases
 m31201| Wed Jun 13 10:34:56 [rsSync] dropAllDatabasesExceptLocal 1
 m31201| Wed Jun 13 10:34:56 [rsSync] replSet initial sync clone all databases
 m31201| Wed Jun 13 10:34:56 [rsSync] replSet initial sync data copy, starting syncup
 m31201| Wed Jun 13 10:34:56 [rsSync] replSet initial sync building indexes
 m31201| Wed Jun 13 10:34:56 [rsSync] replSet initial sync query minValid
 m31201| Wed Jun 13 10:34:56 [rsSync] replSet initial sync finishing up
 m31201| Wed Jun 13 10:34:56 [rsSync] replSet set minValid=4fd8a4f5:1
 m31201| Wed Jun 13 10:34:56 [rsSync] build index local.replset.minvalid { _id: 1 }
 m31201| Wed Jun 13 10:34:56 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31201| Wed Jun 13 10:34:56 [rsSync] replSet initial sync done
 m31200| Wed Jun 13 10:34:56 [conn8] end connection 10.66.9.105:58720 (5 connections now open)
 m31100| Wed Jun 13 10:34:56 [conn30] end connection 10.66.9.105:58712 (13 connections now open)
 m31100| Wed Jun 13 10:34:56 [initandlisten] connection accepted from 10.66.9.105:58721 #31 (14 connections now open)
 m31201| Wed Jun 13 10:34:57 [rsSync] replSet SECONDARY
{
	"ts" : Timestamp(1339598069000, 1),
	"h" : NumberLong(0),
	"op" : "n",
	"ns" : "",
	"o" : {
		"msg" : "initiating set"
	}
}
ReplSetTest await TS for connection to ip-0A420969:31201 is 1339598069000:1 and latest is 1339598069000:1
ReplSetTest await oplog size for connection to ip-0A420969:31201 is 1
ReplSetTest await synced=true
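The `ReplSetTest await` lines above poll each node's newest oplog entry until its timestamp matches the primary's latest (`1339598069000:1` on both sides here). A sketch of that timestamp comparison, assuming timestamps are modeled as (seconds, increment) pairs as in the `Timestamp(1339598069000, 1)` output:

```javascript
// Compare oplog timestamps as (t, i) pairs, the way ReplSetTest's
// await loop decides whether a secondary has caught up to the primary.
function tsCompare(a, b) {
  if (a.t !== b.t) return a.t < b.t ? -1 : 1;
  if (a.i !== b.i) return a.i < b.i ? -1 : 1;
  return 0;
}

var primaryTs = { t: 1339598069000, i: 1 };
var secondaryTs = { t: 1339598069000, i: 1 };
// synced once the secondary's latest ts is >= the primary's
var synced = tsCompare(secondaryTs, primaryTs) >= 0;
```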
Adding shard with seed: remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m30999| Wed Jun 13 10:34:57 [conn1] warning: scoped connection to Trying to get server address for DBClientReplicaSet, but no ReplicaSetMonitor exists for remove2-rs1
 m30999| Wed Jun 13 10:34:57 [conn1] remove2-rs1/ not being returned to the pool
 m30999| Wed Jun 13 10:34:57 [conn1] addshard request { addshard: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201" } failed: couldn't connect to new shard No replica set monitor active and no cached seed found for set: remove2-rs1
First attempt to addShard failed, trying again
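The first `addshard` attempt above fails because mongos has no active `ReplicaSetMonitor` for `remove2-rs1` yet, so the test harness simply retries. A minimal bounded-retry sketch under that assumption (`fakeAddShard` is a hypothetical stand-in for issuing `{ addshard: "remove2-rs1/..." }` against mongos):

```javascript
// Retry an operation a bounded number of times, as the harness does
// after the first addShard attempt fails. `op` returns { ok: 0|1 }.
function retry(op, attempts) {
  var last;
  for (var i = 0; i < attempts; i++) {
    last = op();
    if (last.ok === 1) return last;
  }
  return last;
}

// Simulated addShard: fails once (no monitor yet), then succeeds,
// mirroring the two attempts visible in the log.
var calls = 0;
function fakeAddShard() {
  calls++;
  return calls === 1 ? { ok: 0, errmsg: "no replica set monitor" } : { ok: 1 };
}
var res = retry(fakeAddShard, 2);
```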
 m30999| Wed Jun 13 10:34:57 [conn1] starting new replica set monitor for replica set remove2-rs1 with seed of ip-0A420969:31200,ip-0A420969:31201
 m30999| Wed Jun 13 10:34:57 [conn1] successfully connected to seed ip-0A420969:31200 for replica set remove2-rs1
 m31200| Wed Jun 13 10:34:57 [initandlisten] connection accepted from 10.66.9.105:58722 #9 (6 connections now open)
 m30999| Wed Jun 13 10:34:57 [conn1] changing hosts to { 0: "ip-0A420969:31200", 1: "ip-0A420969:31201" } from remove2-rs1/
 m30999| Wed Jun 13 10:34:57 [conn1] trying to add new host ip-0A420969:31200 to replica set remove2-rs1
 m30999| Wed Jun 13 10:34:57 [conn1] successfully connected to new host ip-0A420969:31200 in replica set remove2-rs1
 m30999| Wed Jun 13 10:34:57 [conn1] trying to add new host ip-0A420969:31201 to replica set remove2-rs1
 m31200| Wed Jun 13 10:34:57 [initandlisten] connection accepted from 10.66.9.105:58723 #10 (7 connections now open)
 m30999| Wed Jun 13 10:34:57 [conn1] successfully connected to new host ip-0A420969:31201 in replica set remove2-rs1
 m31201| Wed Jun 13 10:34:57 [initandlisten] connection accepted from 10.66.9.105:58724 #8 (6 connections now open)
 m30999| Wed Jun 13 10:34:57 [conn1] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
 m31201| Wed Jun 13 10:34:57 [rsBackgroundSync] replSet syncing to: ip-0A420969:31200
 m31200| Wed Jun 13 10:34:57 [initandlisten] connection accepted from 10.66.9.105:58725 #11 (8 connections now open)
 m31200| Wed Jun 13 10:34:57 [conn9] end connection 10.66.9.105:58722 (7 connections now open)
 m31200| Wed Jun 13 10:34:57 [initandlisten] connection accepted from 10.66.9.105:58726 #12 (8 connections now open)
 m30999| Wed Jun 13 10:34:57 [conn1] Primary for replica set remove2-rs1 changed to ip-0A420969:31200
 m30999| Wed Jun 13 10:34:57 [conn1] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31201
 m30999| Wed Jun 13 10:34:57 [conn1] replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31200| Wed Jun 13 10:34:57 [initandlisten] connection accepted from 10.66.9.105:58727 #13 (9 connections now open)
 m30999| Wed Jun 13 10:34:57 [conn1] going to add shard: { _id: "remove2-rs1", host: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201" }
 m30999| Wed Jun 13 10:34:57 [mongosMain] connection accepted from 10.66.9.105:58728 #4 (4 connections now open)
Awaiting ip-0A420969:31201 to be { "ok" : true, "secondary" : true } for connection to ip-0A420969:30999 (rs: undefined)
{
	"remove2-rs0" : {
		"hosts" : [
			{
				"addr" : "ip-0A420969:31100",
				"ok" : true,
				"ismaster" : true,
				"hidden" : false,
				"secondary" : false,
				"pingTimeMillis" : 0
			},
			{
				"addr" : "ip-0A420969:31101",
				"ok" : true,
				"ismaster" : false,
				"hidden" : false,
				"secondary" : true,
				"pingTimeMillis" : 0
			}
		],
		"master" : 0,
		"nextSlave" : 0
	},
	"remove2-rs1" : {
		"hosts" : [
			{
				"addr" : "ip-0A420969:31200",
				"ok" : true,
				"ismaster" : true,
				"hidden" : false,
				"secondary" : false,
				"pingTimeMillis" : 0
			},
			{
				"addr" : "ip-0A420969:31201",
				"ok" : false,
				"ismaster" : false,
				"hidden" : false,
				"secondary" : true,
				"pingTimeMillis" : 0
			}
		],
		"master" : 0,
		"nextSlave" : 0
	}
}
 m31200| Wed Jun 13 10:34:57 [rsHealthPoll] replSet member ip-0A420969:31201 is now in state SECONDARY
 m31201| Wed Jun 13 10:34:58 [rsSyncNotifier] replset setting oplog notifier to ip-0A420969:31200
 m31200| Wed Jun 13 10:34:58 [initandlisten] connection accepted from 10.66.9.105:58729 #14 (10 connections now open)
 m31200| Wed Jun 13 10:34:59 [slaveTracking] build index local.slaves { _id: 1 }
 m31200| Wed Jun 13 10:34:59 [slaveTracking] build index done.  scanned 0 total records. 0.001 secs
 m30999| Wed Jun 13 10:34:59 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a513d1d821664bf1741e
 m30999| Wed Jun 13 10:34:59 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_MinKey", lastmod: Timestamp 17000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:34:59 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 17|0||000000000000000000000000 min: { i: MinKey } max: { i: 0.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:34:59 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:34:59 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:34:59 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a5131402b5052316d783
 m31100| Wed Jun 13 10:34:59 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:34:59-39", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598099508), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:34:59 [conn18] moveChunk request accepted at version 17|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:34:59 [conn18] moveChunk number of documents: 0
 m31100| Wed Jun 13 10:34:59 [conn18] starting new replica set monitor for replica set remove2-rs1 with seed of ip-0A420969:31200,ip-0A420969:31201
 m31200| Wed Jun 13 10:34:59 [initandlisten] connection accepted from 10.66.9.105:58730 #15 (11 connections now open)
 m31100| Wed Jun 13 10:34:59 [conn18] successfully connected to seed ip-0A420969:31200 for replica set remove2-rs1
 m31100| Wed Jun 13 10:34:59 [conn18] changing hosts to { 0: "ip-0A420969:31200", 1: "ip-0A420969:31201" } from remove2-rs1/
 m31100| Wed Jun 13 10:34:59 [conn18] trying to add new host ip-0A420969:31200 to replica set remove2-rs1
 m31200| Wed Jun 13 10:34:59 [initandlisten] connection accepted from 10.66.9.105:58731 #16 (12 connections now open)
 m31100| Wed Jun 13 10:34:59 [conn18] successfully connected to new host ip-0A420969:31200 in replica set remove2-rs1
 m31100| Wed Jun 13 10:34:59 [conn18] trying to add new host ip-0A420969:31201 to replica set remove2-rs1
 m31100| Wed Jun 13 10:34:59 [conn18] successfully connected to new host ip-0A420969:31201 in replica set remove2-rs1
 m31201| Wed Jun 13 10:34:59 [initandlisten] connection accepted from 10.66.9.105:58732 #9 (7 connections now open)
 m31100| Wed Jun 13 10:34:59 [conn18] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
 m31200| Wed Jun 13 10:34:59 [conn15] end connection 10.66.9.105:58730 (11 connections now open)
 m31200| Wed Jun 13 10:34:59 [initandlisten] connection accepted from 10.66.9.105:58733 #17 (12 connections now open)
 m31100| Wed Jun 13 10:34:59 [conn18] Primary for replica set remove2-rs1 changed to ip-0A420969:31200
 m31100| Wed Jun 13 10:34:59 [conn18] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31201
 m31100| Wed Jun 13 10:34:59 [conn18] replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:34:59 [conn18] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
 m31100| Wed Jun 13 10:34:59 [conn18] warning: moveChunk could not contact to: shard remove2-rs1 to start transfer :: caused by :: 9001 socket exception [2] server [10.66.9.105:31200] 
 m31100| Wed Jun 13 10:34:59 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:34:59 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:34:59-40", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598099516), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 6: 0, step2 of 6: 2, note: "aborted" } }
{
	"remove2-rs0" : {
		"hosts" : [
			{
				"addr" : "ip-0A420969:31100",
				"ok" : true,
				"ismaster" : true,
				"hidden" : false,
				"secondary" : false,
				"pingTimeMillis" : 0
			},
			{
				"addr" : "ip-0A420969:31101",
				"ok" : true,
				"ismaster" : false,
				"hidden" : false,
				"secondary" : true,
				"pingTimeMillis" : 0
			}
		],
		"master" : 0,
		"nextSlave" : 0
	},
	"remove2-rs1" : {
		"hosts" : [
			{
				"addr" : "ip-0A420969:31200",
				"ok" : true,
				"ismaster" : true,
				"hidden" : false,
				"secondary" : false,
				"pingTimeMillis" : 0
			},
			{
				"addr" : "ip-0A420969:31201",
				"ok" : false,
				"ismaster" : false,
				"hidden" : false,
				"secondary" : true,
				"pingTimeMillis" : 0
			}
		],
		"master" : 0,
		"nextSlave" : 0
	}
}
 m30999| Wed Jun 13 10:34:59 [Balancer] moveChunk result: { errmsg: "moveChunk could not contact to: shard remove2-rs1 to start transfer :: caused by :: 9001 socket exception [2] server [10.66.9.105:31200] ", ok: 0.0 }
 m30999| Wed Jun 13 10:34:59 [Balancer] balancer move failed: { errmsg: "moveChunk could not contact to: shard remove2-rs1 to start transfer :: caused by :: 9001 socket exception [2] server [10.66.9.105:31200] ", ok: 0.0 } from: remove2-rs0 to: remove2-rs1 chunk: { _id: "test.remove2-i_MinKey", lastmod: Timestamp 17000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:34:59 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
{
	"remove2-rs0" : {
		"hosts" : [
			{
				"addr" : "ip-0A420969:31100",
				"ok" : true,
				"ismaster" : true,
				"hidden" : false,
				"secondary" : false,
				"pingTimeMillis" : 0
			},
			{
				"addr" : "ip-0A420969:31101",
				"ok" : true,
				"ismaster" : false,
				"hidden" : false,
				"secondary" : true,
				"pingTimeMillis" : 0
			}
		],
		"master" : 0,
		"nextSlave" : 0
	},
	"remove2-rs1" : {
		"hosts" : [
			{
				"addr" : "ip-0A420969:31200",
				"ok" : true,
				"ismaster" : true,
				"hidden" : false,
				"secondary" : false,
				"pingTimeMillis" : 0
			},
			{
				"addr" : "ip-0A420969:31201",
				"ok" : false,
				"ismaster" : false,
				"hidden" : false,
				"secondary" : true,
				"pingTimeMillis" : 0
			}
		],
		"master" : 0,
		"nextSlave" : 0
	}
}
 m31201| Wed Jun 13 10:35:03 [initandlisten] connection accepted from 10.66.9.105:58734 #10 (8 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31201| Wed Jun 13 10:35:03 [conn5] end connection 10.66.9.105:58714 (7 connections now open)
 m31201| Wed Jun 13 10:35:03 [initandlisten] connection accepted from 10.66.9.105:58735 #11 (8 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31200| Wed Jun 13 10:35:06 [conn6] end connection 10.66.9.105:58715 (11 connections now open)
 m31200| Wed Jun 13 10:35:06 [initandlisten] connection accepted from 10.66.9.105:58736 #18 (12 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31201| Wed Jun 13 10:35:08 [initandlisten] connection accepted from 10.66.9.105:58737 #12 (9 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m30999| Wed Jun 13 10:35:09 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a51dd1d821664bf1741f
 m30999| Wed Jun 13 10:35:09 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_MinKey", lastmod: Timestamp 17000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:35:09 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 17|0||000000000000000000000000 min: { i: MinKey } max: { i: 0.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:35:09 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:35:09 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:35:09 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a51d1402b5052316d784
 m31100| Wed Jun 13 10:35:09 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:09-41", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598109521), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:35:09 [conn18] moveChunk request accepted at version 17|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:35:09 [conn18] moveChunk number of documents: 0
 m31200| Wed Jun 13 10:35:09 [initandlisten] connection accepted from 10.66.9.105:58738 #19 (13 connections now open)
 m31200| Wed Jun 13 10:35:09 [migrateThread] starting new replica set monitor for replica set remove2-rs0 with seed of ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:35:09 [migrateThread] successfully connected to seed ip-0A420969:31100 for replica set remove2-rs0
 m31100| Wed Jun 13 10:35:09 [initandlisten] connection accepted from 10.66.9.105:58739 #32 (15 connections now open)
 m31200| Wed Jun 13 10:35:09 [migrateThread] changing hosts to { 0: "ip-0A420969:31100", 1: "ip-0A420969:31101" } from remove2-rs0/
 m31200| Wed Jun 13 10:35:09 [migrateThread] trying to add new host ip-0A420969:31100 to replica set remove2-rs0
 m31200| Wed Jun 13 10:35:09 [migrateThread] successfully connected to new host ip-0A420969:31100 in replica set remove2-rs0
 m31200| Wed Jun 13 10:35:09 [migrateThread] trying to add new host ip-0A420969:31101 to replica set remove2-rs0
 m31100| Wed Jun 13 10:35:09 [initandlisten] connection accepted from 10.66.9.105:58740 #33 (16 connections now open)
 m31200| Wed Jun 13 10:35:09 [migrateThread] successfully connected to new host ip-0A420969:31101 in replica set remove2-rs0
 m31101| Wed Jun 13 10:35:09 [initandlisten] connection accepted from 10.66.9.105:58741 #19 (9 connections now open)
 m31100| Wed Jun 13 10:35:09 [initandlisten] connection accepted from 10.66.9.105:58742 #34 (17 connections now open)
 m31100| Wed Jun 13 10:35:09 [conn32] end connection 10.66.9.105:58739 (16 connections now open)
 m31200| Wed Jun 13 10:35:09 [migrateThread] Primary for replica set remove2-rs0 changed to ip-0A420969:31100
 m31101| Wed Jun 13 10:35:09 [initandlisten] connection accepted from 10.66.9.105:58743 #20 (10 connections now open)
 m31200| Wed Jun 13 10:35:09 [migrateThread] replica set monitor for replica set remove2-rs0 started, address is remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:35:09 [ReplicaSetMonitorWatcher] starting
 m31100| Wed Jun 13 10:35:09 [initandlisten] connection accepted from 10.66.9.105:58744 #35 (17 connections now open)
 m31200| Wed Jun 13 10:35:09 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/test.ns, filling with zeroes...
 m31200| Wed Jun 13 10:35:09 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/test.ns, size: 16MB,  took 0.051 secs
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31200| Wed Jun 13 10:35:09 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/test.0, filling with zeroes...
 m31200| Wed Jun 13 10:35:09 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/test.0, size: 16MB,  took 0.05 secs
 m31200| Wed Jun 13 10:35:09 [migrateThread] build index test.remove2 { _id: 1 }
 m31200| Wed Jun 13 10:35:09 [migrateThread] build index done.  scanned 0 total records. 0 secs
 m31200| Wed Jun 13 10:35:09 [migrateThread] info: creating collection test.remove2 on add index
 m31200| Wed Jun 13 10:35:09 [migrateThread] build index test.remove2 { i: 1.0 }
 m31201| Wed Jun 13 10:35:09 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/test.ns, filling with zeroes...
 m31200| Wed Jun 13 10:35:09 [migrateThread] build index done.  scanned 0 total records. 0 secs
 m31200| Wed Jun 13 10:35:09 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
 m31201| Wed Jun 13 10:35:09 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/test.ns, size: 16MB,  took 0.051 secs
 m31201| Wed Jun 13 10:35:09 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/test.0, filling with zeroes...
 m31201| Wed Jun 13 10:35:09 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/test.0, size: 16MB,  took 0.051 secs
 m31201| Wed Jun 13 10:35:09 [rsSync] build index test.remove2 { _id: 1 }
 m31201| Wed Jun 13 10:35:09 [rsSync] build index done.  scanned 0 total records. 0 secs
 m31201| Wed Jun 13 10:35:09 [rsSync] info: creating collection test.remove2 on add index
 m31201| Wed Jun 13 10:35:09 [rsSync] build index test.remove2 { i: 1.0 }
 m31201| Wed Jun 13 10:35:09 [rsSync] build index done.  scanned 0 total records. 0 secs
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31100| Wed Jun 13 10:35:10 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:35:10 [conn18] moveChunk setting version to: 18|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:35:10 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
 m31200| Wed Jun 13 10:35:10 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:10-0", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598110530), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 5: 114, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 890 } }
 m31100| Wed Jun 13 10:35:10 [conn18] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:35:10 [conn18] moveChunk updating self version to: 18|1||4fd8a477d1d821664bf17408 through { i: 0.0 } -> { i: 1.0 } for collection 'test.remove2'
 m29000| Wed Jun 13 10:35:10 [initandlisten] connection accepted from 10.66.9.105:58745 #13 (11 connections now open)
 m31100| Wed Jun 13 10:35:10 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:10-42", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598110531), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:35:10 [conn18] doing delete inline
 m31100| Wed Jun 13 10:35:10 [conn18] moveChunk deleted: 0
 m31100| Wed Jun 13 10:35:10 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:35:10 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:10-43", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598110532), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 999, step5 of 6: 7, step6 of 6: 0 } }
 m31100| Wed Jun 13 10:35:10 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:8806 w:184035 reslen:37 1012ms
 m30999| Wed Jun 13 10:35:10 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 26 version: 18|1||4fd8a477d1d821664bf17408 based on: 17|0||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:35:10 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
 m31200| Wed Jun 13 10:35:13 [initandlisten] connection accepted from 10.66.9.105:58746 #20 (14 connections now open)
 m31201| Wed Jun 13 10:35:13 [initandlisten] connection accepted from 10.66.9.105:58747 #13 (10 connections now open)
 m30999| Wed Jun 13 10:35:15 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a523d1d821664bf17420
 m30999| Wed Jun 13 10:35:15 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_0.0", lastmod: Timestamp 18000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 0.0 }, max: { i: 1.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:35:15 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 18|1||000000000000000000000000 min: { i: 0.0 } max: { i: 1.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:35:15 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:35:15 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:35:15 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a5231402b5052316d785
 m31100| Wed Jun 13 10:35:15 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:15-44", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598115539), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:35:15 [conn18] moveChunk request accepted at version 18|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:35:15 [conn18] moveChunk number of documents: 30
 m31200| Wed Jun 13 10:35:15 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
 m31100| Wed Jun 13 10:35:16 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:35:16 [conn18] moveChunk setting version to: 19|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:35:16 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
 m31200| Wed Jun 13 10:35:16 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:16-1", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598116550), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 999 } }
 m31100| Wed Jun 13 10:35:16 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:35:16 [conn18] moveChunk updating self version to: 19|1||4fd8a477d1d821664bf17408 through { i: 1.0 } -> { i: 3.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:35:16 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:16-45", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598116551), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:35:16 [conn18] doing delete inline
 m31100| Wed Jun 13 10:35:16 [conn18] moveChunk deleted: 30
 m31100| Wed Jun 13 10:35:16 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:35:16 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:16-46", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598116559), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 11, step6 of 6: 6 } }
 m31100| Wed Jun 13 10:35:16 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:8973 w:190447 reslen:37 1021ms
 m30999| Wed Jun 13 10:35:16 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 27 version: 19|1||4fd8a477d1d821664bf17408 based on: 18|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:35:16 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
 m30999| Wed Jun 13 10:35:21 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a529d1d821664bf17421
 m30999| Wed Jun 13 10:35:21 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_1.0", lastmod: Timestamp 19000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 1.0 }, max: { i: 3.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:35:21 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 19|1||000000000000000000000000 min: { i: 1.0 } max: { i: 3.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:35:21 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:35:21 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:35:21 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a5291402b5052316d786
 m31100| Wed Jun 13 10:35:21 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:21-47", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598121566), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:35:21 [conn18] moveChunk request accepted at version 19|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:35:21 [conn18] moveChunk number of documents: 60
 m31200| Wed Jun 13 10:35:21 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
 m31100| Wed Jun 13 10:35:22 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:35:22 [conn18] moveChunk setting version to: 20|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:35:22 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
 m31200| Wed Jun 13 10:35:22 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:22-2", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598122576), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 16, step4 of 5: 0, step5 of 5: 990 } }
 m31100| Wed Jun 13 10:35:22 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:35:22 [conn18] moveChunk updating self version to: 20|1||4fd8a477d1d821664bf17408 through { i: 3.0 } -> { i: 5.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:35:22 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:22-48", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598122577), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:35:22 [conn18] doing delete inline
 m31100| Wed Jun 13 10:35:22 [conn18] moveChunk deleted: 60
 m31100| Wed Jun 13 10:35:22 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:35:22 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:22-49", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598122587), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 9, step6 of 6: 8 } }
 m31100| Wed Jun 13 10:35:22 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:9260 w:199068 reslen:37 1022ms
 m30999| Wed Jun 13 10:35:22 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 28 version: 20|1||4fd8a477d1d821664bf17408 based on: 19|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:35:22 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
 m31101| Wed Jun 13 10:35:23 [conn18] end connection 10.66.9.105:58719 (9 connections now open)
 m31101| Wed Jun 13 10:35:23 [initandlisten] connection accepted from 10.66.9.105:58749 #21 (10 connections now open)
 m31200| Wed Jun 13 10:35:25 [clientcursormon] mem (MB) res:55 virt:207 mapped:112
 m31201| Wed Jun 13 10:35:26 [clientcursormon] mem (MB) res:56 virt:217 mapped:128
 m31100| Wed Jun 13 10:35:26 [conn31] end connection 10.66.9.105:58721 (16 connections now open)
 m31100| Wed Jun 13 10:35:26 [initandlisten] connection accepted from 10.66.9.105:58750 #36 (17 connections now open)
 m30999| Wed Jun 13 10:35:27 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a52fd1d821664bf17422
 m30999| Wed Jun 13 10:35:27 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_3.0", lastmod: Timestamp 20000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 3.0 }, max: { i: 5.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:35:27 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 20|1||000000000000000000000000 min: { i: 3.0 } max: { i: 5.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31100| Wed Jun 13 10:35:27 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:35:27 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:35:27 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a52f1402b5052316d787
 m31100| Wed Jun 13 10:35:27 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:27-50", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598127593), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:35:27 [conn18] moveChunk request accepted at version 20|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:35:27 [conn18] moveChunk number of documents: 60
 m31200| Wed Jun 13 10:35:27 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
 m31100| Wed Jun 13 10:35:28 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:35:28 [conn18] moveChunk setting version to: 21|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:35:28 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
 m31200| Wed Jun 13 10:35:28 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:28-3", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598128597), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 17, step4 of 5: 0, step5 of 5: 983 } }
 m31100| Wed Jun 13 10:35:28 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:35:28 [conn18] moveChunk updating self version to: 21|1||4fd8a477d1d821664bf17408 through { i: 5.0 } -> { i: 6.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:35:28 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:28-51", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598128598), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:35:28 [conn18] doing delete inline
 m31100| Wed Jun 13 10:35:28 [conn18] moveChunk deleted: 60
 m31100| Wed Jun 13 10:35:28 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:35:28 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:28-52", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598128608), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 4, step6 of 6: 8 } }
 m31100| Wed Jun 13 10:35:28 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:9459 w:207464 reslen:37 1016ms
 m30999| Wed Jun 13 10:35:28 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 29 version: 21|1||4fd8a477d1d821664bf17408 based on: 20|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:35:28 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 4, "remove2-rs1" : 4 } min: 4 max: 4
chunk diff: 0
 m30999| Wed Jun 13 10:35:28 [conn2] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
 m30999| Wed Jun 13 10:35:28 [conn2] warning: socket exception when initializing on remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201, current connection state is { state: { conn: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", vinfo: "test.remove2 @ 21|1||4fd8a477d1d821664bf17408", cursor: "(none)", count: 0, done: false }, retryNext: false, init: false, finish: false, errored: false } :: caused by :: 9001 socket exception [2] server [10.66.9.105:31200] 
 m30999| Wed Jun 13 10:35:28 [conn2] DBException in process: socket exception
"error: { \"$err\" : \"socket exception\", \"code\" : 9001 }"
 m31200| Wed Jun 13 10:35:28 [initandlisten] connection accepted from 10.66.9.105:58751 #21 (15 connections now open)
 m31200| Wed Jun 13 10:35:28 [conn21] no current chunk manager found for this shard, will initialize
Shard added successfully


----
Attempt removing shard and adding a new shard with the same Replica Set name
----


Removing shard with name: remove2-rs1
 m30999| Wed Jun 13 10:35:28 [conn1] going to start draining shard: remove2-rs1
 m30999| primaryLocalDoc: { _id: "local", primary: "remove2-rs1" }
{
	"msg" : "draining started successfully",
	"state" : "started",
	"shard" : "remove2-rs1",
	"ok" : 1
}
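The `removeShard` replies above go from state `started` to repeated `ongoing` responses while the drained shard's chunks are migrated away. Below is a hedged sketch of the polling loop a test or driver might run; the responses are simulated (no live cluster), and the function name `poll_until_drained` is illustrative, not part of any MongoDB API. In a real deployment each status would come from re-issuing `db.adminCommand({removeShard: "remove2-rs1"})`.

```python
# Simulated removeShard drain polling; no live cluster involved.
# `poll_until_drained` is a hypothetical helper for illustration.

def poll_until_drained(next_status):
    """Call next_status() until the reply reports state 'completed';
    return the sequence of states observed along the way."""
    states = []
    while True:
        reply = next_status()
        states.append(reply["state"])
        if reply["state"] == "completed":
            return states

# Fake reply sequence mirroring the log: started, ongoing..., completed.
replies = iter([
    {"msg": "draining started successfully", "state": "started", "ok": 1},
    {"msg": "draining ongoing", "state": "ongoing",
     "remaining": {"chunks": 4, "dbs": 0}, "ok": 1},
    {"msg": "removeshard completed successfully", "state": "completed", "ok": 1},
])
print(poll_until_drained(lambda: next(replies)))
# -> ['started', 'ongoing', 'completed']
```

Note the `remaining.chunks` count stays at 4 for many polls in this log: draining only progresses when the balancer round (every ~10 seconds here) moves another chunk off the draining shard.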
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(4),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m30999| Wed Jun 13 10:35:33 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a535d1d821664bf17423
 m30999| Wed Jun 13 10:35:33 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_3.0", lastmod: Timestamp 21000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 3.0 }, max: { i: 5.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:35:33 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 21|0||000000000000000000000000 min: { i: 3.0 } max: { i: 5.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m29000| Wed Jun 13 10:35:33 [initandlisten] connection accepted from 10.66.9.105:58752 #14 (12 connections now open)
 m31200| Wed Jun 13 10:35:33 [conn13] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:35:33 [conn13] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:35:33 [conn13] distributed lock 'test.remove2/ip-0A420969:31200:1339598133:41' acquired, ts : 4fd8a535ddbff0b83f154dc4
 m31200| Wed Jun 13 10:35:33 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:33-4", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598133616), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:35:33 [LockPinger] creating distributed lock ping thread for ip-0A420969:29000 and process ip-0A420969:31200:1339598133:41 (sleeping for 30000ms)
 m31200| Wed Jun 13 10:35:33 [conn13] moveChunk request accepted at version 21|0||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:35:33 [conn13] moveChunk number of documents: 60
 m29000| Wed Jun 13 10:35:33 [initandlisten] connection accepted from 10.66.9.105:58753 #15 (13 connections now open)
 m31100| Wed Jun 13 10:35:33 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(4),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31201| Wed Jun 13 10:35:33 [conn11] end connection 10.66.9.105:58735 (9 connections now open)
 m31201| Wed Jun 13 10:35:33 [initandlisten] connection accepted from 10.66.9.105:58754 #14 (10 connections now open)
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(4),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:35:34 [conn13] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:35:34 [conn13] moveChunk setting version to: 22|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:35:34 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
 m31100| Wed Jun 13 10:35:34 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:34-53", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598134619), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 17, step4 of 5: 0, step5 of 5: 981 } }
 m31200| Wed Jun 13 10:35:34 [conn13] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:35:34 [conn13] moveChunk updating self version to: 22|1||4fd8a477d1d821664bf17408 through { i: MinKey } -> { i: 0.0 } for collection 'test.remove2'
 m31200| Wed Jun 13 10:35:34 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:34-5", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598134620), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:35:34 [conn13] doing delete inline
 m31200| Wed Jun 13 10:35:34 [conn13] moveChunk deleted: 60
 m31200| Wed Jun 13 10:35:34 [conn13] distributed lock 'test.remove2/ip-0A420969:31200:1339598133:41' unlocked. 
 m31200| Wed Jun 13 10:35:34 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:34-6", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598134632), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 6: 1, step2 of 6: 3, step3 of 6: 0, step4 of 6: 999, step5 of 6: 2, step6 of 6: 10 } }
 m31200| Wed Jun 13 10:35:34 [conn13] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:393 w:11160 reslen:37 1019ms
 m30999| Wed Jun 13 10:35:34 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 30 version: 22|1||4fd8a477d1d821664bf17408 based on: 21|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:35:34 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(3),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:35:36 [conn18] end connection 10.66.9.105:58736 (14 connections now open)
 m31200| Wed Jun 13 10:35:36 [initandlisten] connection accepted from 10.66.9.105:58755 #22 (15 connections now open)
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(3),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m30999| Wed Jun 13 10:35:39 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a53bd1d821664bf17424
 m30999| Wed Jun 13 10:35:39 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_1.0", lastmod: Timestamp 20000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 1.0 }, max: { i: 3.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:35:39 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 20|0||000000000000000000000000 min: { i: 1.0 } max: { i: 3.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:35:39 [conn13] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:35:39 [conn13] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:35:39 [conn13] distributed lock 'test.remove2/ip-0A420969:31200:1339598133:41' acquired, ts : 4fd8a53bddbff0b83f154dc5
 m31200| Wed Jun 13 10:35:39 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:39-7", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598139638), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:35:39 [conn13] moveChunk request accepted at version 22|1||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:35:39 [conn13] moveChunk number of documents: 60
 m31100| Wed Jun 13 10:35:39 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(3),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:35:40 [conn13] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:35:40 [conn13] moveChunk setting version to: 23|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:35:40 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
 m31100| Wed Jun 13 10:35:40 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:40-54", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598140649), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 17, step4 of 5: 0, step5 of 5: 989 } }
 m31200| Wed Jun 13 10:35:40 [conn13] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:35:40 [conn13] moveChunk updating self version to: 23|1||4fd8a477d1d821664bf17408 through { i: MinKey } -> { i: 0.0 } for collection 'test.remove2'
 m31200| Wed Jun 13 10:35:40 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:40-8", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598140650), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:35:40 [conn13] doing delete inline
 m31200| Wed Jun 13 10:35:40 [conn13] moveChunk deleted: 60
 m31200| Wed Jun 13 10:35:40 [conn13] distributed lock 'test.remove2/ip-0A420969:31200:1339598133:41' unlocked. 
 m31200| Wed Jun 13 10:35:40 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:40-9", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598140660), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 10, step6 of 6: 9 } }
 m31200| Wed Jun 13 10:35:40 [conn13] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:609 w:19673 reslen:37 1023ms
 m30999| Wed Jun 13 10:35:40 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 31 version: 23|1||4fd8a477d1d821664bf17408 based on: 22|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:35:40 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(2),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m30999| Wed Jun 13 10:35:45 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a541d1d821664bf17425
 m30999| Wed Jun 13 10:35:45 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_0.0", lastmod: Timestamp 19000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 0.0 }, max: { i: 1.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:35:45 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 19|0||000000000000000000000000 min: { i: 0.0 } max: { i: 1.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:35:45 [conn13] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:35:45 [conn13] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:35:45 [conn13] distributed lock 'test.remove2/ip-0A420969:31200:1339598133:41' acquired, ts : 4fd8a541ddbff0b83f154dc6
 m31200| Wed Jun 13 10:35:45 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:45-10", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598145667), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:35:45 [conn13] moveChunk request accepted at version 23|1||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:35:45 [conn13] moveChunk number of documents: 30
 m31100| Wed Jun 13 10:35:45 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(2),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:35:46 [conn13] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:35:46 [conn13] moveChunk setting version to: 24|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:35:46 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
 m31100| Wed Jun 13 10:35:46 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:46-55", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598146678), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 1000 } }
 m31200| Wed Jun 13 10:35:46 [conn13] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:35:46 [conn13] moveChunk updating self version to: 24|1||4fd8a477d1d821664bf17408 through { i: MinKey } -> { i: 0.0 } for collection 'test.remove2'
 m31200| Wed Jun 13 10:35:46 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:46-11", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598146680), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:35:46 [conn13] doing delete inline
 m31200| Wed Jun 13 10:35:46 [conn13] moveChunk deleted: 30
 m31200| Wed Jun 13 10:35:46 [conn13] distributed lock 'test.remove2/ip-0A420969:31200:1339598133:41' unlocked. 
 m31200| Wed Jun 13 10:35:46 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:46-12", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598146687), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 11, step6 of 6: 6 } }
 m31200| Wed Jun 13 10:35:46 [conn13] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:783 w:25883 reslen:37 1021ms
 m30999| Wed Jun 13 10:35:46 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 32 version: 24|1||4fd8a477d1d821664bf17408 based on: 23|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:35:46 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(1),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m30999| Wed Jun 13 10:35:51 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a547d1d821664bf17426
 m30999| Wed Jun 13 10:35:51 [Balancer] chose [remove2-rs1] to [remove2-rs0] { _id: "test.remove2-i_MinKey", lastmod: Timestamp 24000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs1" }
 m30999| Wed Jun 13 10:35:51 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 lastmod: 24|1||000000000000000000000000 min: { i: MinKey } max: { i: 0.0 }) remove2-rs1:remove2-rs1/ip-0A420969:31200,ip-0A420969:31201 -> remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m31200| Wed Jun 13 10:35:51 [conn13] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" }
 m31200| Wed Jun 13 10:35:51 [conn13] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31200| Wed Jun 13 10:35:51 [conn13] distributed lock 'test.remove2/ip-0A420969:31200:1339598133:41' acquired, ts : 4fd8a547ddbff0b83f154dc7
 m31200| Wed Jun 13 10:35:51 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:51-13", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598151693), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:35:51 [conn13] moveChunk request accepted at version 24|1||4fd8a477d1d821664bf17408
 m31200| Wed Jun 13 10:35:51 [conn13] moveChunk number of documents: 0
 m31100| Wed Jun 13 10:35:51 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
{
	"msg" : "draining ongoing",
	"state" : "ongoing",
	"remaining" : {
		"chunks" : NumberLong(1),
		"dbs" : NumberLong(0)
	},
	"ok" : 1
}
 m31200| Wed Jun 13 10:35:52 [conn13] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31200| Wed Jun 13 10:35:52 [conn13] moveChunk setting version to: 25|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:35:52 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
 m31100| Wed Jun 13 10:35:52 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:52-56", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598152697), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 1001 } }
 m31200| Wed Jun 13 10:35:52 [conn13] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
 m31200| Wed Jun 13 10:35:52 [conn13] moveChunk moved last chunk out for collection 'test.remove2'
 m31200| Wed Jun 13 10:35:52 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:52-14", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598152698), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs1", to: "remove2-rs0" } }
 m31200| Wed Jun 13 10:35:52 [conn13] doing delete inline
 m31200| Wed Jun 13 10:35:52 [conn13] moveChunk deleted: 0
 m31200| Wed Jun 13 10:35:52 [conn13] distributed lock 'test.remove2/ip-0A420969:31200:1339598133:41' unlocked. 
 m31200| Wed Jun 13 10:35:52 [conn13] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:35:52-15", server: "ip-0A420969", clientAddr: "10.66.9.105:58727", time: new Date(1339598152699), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 4, step6 of 6: 0 } }
 m31200| Wed Jun 13 10:35:52 [conn13] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs1/ip-0A420969:31200,ip-0A420969:31201", to: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", fromShard: "remove2-rs1", toShard: "remove2-rs0", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:6 r:904 w:25923 reslen:37 1007ms
 m30999| Wed Jun 13 10:35:52 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 33 version: 25|0||4fd8a477d1d821664bf17408 based on: 24|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:35:52 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
 m30999| Wed Jun 13 10:35:52 [conn1] going to remove shard: remove2-rs1
 m30999| Wed Jun 13 10:35:52 [conn1] deleting replica set monitor for: remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
 m31201| Wed Jun 13 10:35:52 [conn8] end connection 10.66.9.105:58724 (9 connections now open)
 m31200| Wed Jun 13 10:35:52 [conn10] end connection 10.66.9.105:58723 (14 connections now open)
{
	"msg" : "removeshard completed successfully",
	"state" : "completed",
	"shard" : "remove2-rs1",
	"ok" : 1
}
 m31200| Wed Jun 13 10:35:52 [conn4] dropDatabase test
Shard removed successfully
ReplSetTest n: 0 ports: [ 31200, 31201 ]	31200 number
ReplSetTest stop *** Shutting down mongod in port 31200 ***
 m31201| Wed Jun 13 10:35:52 [rsSync] dropDatabase test
 m31200| Wed Jun 13 10:35:52 [initandlisten] connection accepted from 127.0.0.1:58756 #23 (15 connections now open)
 m31200| Wed Jun 13 10:35:52 [conn23] terminating, shutdown command received
 m31200| Wed Jun 13 10:35:52 dbexit: shutdown called
 m31200| Wed Jun 13 10:35:52 [conn23] shutdown: going to close listening sockets...
 m31200| Wed Jun 13 10:35:52 [conn23] closing listening socket: 504
 m31200| Wed Jun 13 10:35:52 [conn23] closing listening socket: 524
 m31200| Wed Jun 13 10:35:52 [conn23] shutdown: going to flush diaglog...
 m31200| Wed Jun 13 10:35:52 [conn23] shutdown: going to close sockets...
 m31200| Wed Jun 13 10:35:52 [conn23] shutdown: waiting for fs preallocator...
 m31200| Wed Jun 13 10:35:52 [conn23] shutdown: closing all files...
 m31200| Wed Jun 13 10:35:52 [conn23] closeAllFiles() finished
 m31200| Wed Jun 13 10:35:52 [conn23] shutdown: removing fs lock...
 m31200| Wed Jun 13 10:35:52 dbexit: really exiting now
 m30999| Wed Jun 13 10:35:52 [WriteBackListener-ip-0A420969:31200] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
 m30999| Wed Jun 13 10:35:52 [WriteBackListener-ip-0A420969:31200] SocketException: remote: 10.66.9.105:31200 error: 9001 socket exception [1] server [10.66.9.105:31200] 
 m30999| Wed Jun 13 10:35:52 [WriteBackListener-ip-0A420969:31200] DBClientCursor::init call() failed
 m30999| Wed Jun 13 10:35:52 [WriteBackListener-ip-0A420969:31200] WriteBackListener exception : DBClientBase::findN: transport error: ip-0A420969:31200 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd8a477d1d821664bf17406') }
 m31201| Wed Jun 13 10:35:52 [rsBackgroundSync] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
 m31201| Wed Jun 13 10:35:52 [rsSyncNotifier] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
 m31201| Wed Jun 13 10:35:52 [rsBackgroundSync] SocketException: remote: 10.66.9.105:31200 error: 9001 socket exception [1] server [10.66.9.105:31200] 
 m31201| Wed Jun 13 10:35:52 [rsSyncNotifier] SocketException: remote: 10.66.9.105:31200 error: 9001 socket exception [1] server [10.66.9.105:31200] 
 m31201| Wed Jun 13 10:35:52 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: ip-0A420969:31200
 m31201| Wed Jun 13 10:35:52 [rsBackgroundSync] Socket flush send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
 m31201| Wed Jun 13 10:35:52 [rsBackgroundSync]   caught exception (socket exception) in destructor (mongo::PiggyBackData::~PiggyBackData)
 m31201| Wed Jun 13 10:35:52 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: ip-0A420969:31200
 m31201| Wed Jun 13 10:35:52 [conn14] end connection 10.66.9.105:58754 (8 connections now open)
Wed Jun 13 10:35:52 Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:31200
Wed Jun 13 10:35:52 SocketException: remote: 127.0.0.1:31200 error: 9001 socket exception [1] server [127.0.0.1:31200] 
Wed Jun 13 10:35:52 DBClientCursor::init call() failed
 m29000| Wed Jun 13 10:35:52 [conn14] end connection 10.66.9.105:58752 (12 connections now open)
 m29000| Wed Jun 13 10:35:52 [conn13] end connection 10.66.9.105:58745 (11 connections now open)
 m29000| Wed Jun 13 10:35:52 [conn15] end connection 10.66.9.105:58753 (10 connections now open)
Wed Jun 13 10:35:52 shell: stopped mongo program on port 31200
ReplSetTest n: 1 ports: [ 31200, 31201 ]	31201 number
ReplSetTest stop *** Shutting down mongod in port 31201 ***
 m31100| Wed Jun 13 10:35:52 [conn34] end connection 10.66.9.105:58742 (16 connections now open)
 m31100| Wed Jun 13 10:35:52 [conn33] end connection 10.66.9.105:58740 (15 connections now open)
 m31100| Wed Jun 13 10:35:52 [conn35] end connection 10.66.9.105:58744 (14 connections now open)
 m31101| Wed Jun 13 10:35:52 [conn19] end connection 10.66.9.105:58741 (9 connections now open)
 m31101| Wed Jun 13 10:35:52 [conn20] end connection 10.66.9.105:58743 (8 connections now open)
 m31201| Wed Jun 13 10:35:52 [rsHealthPoll] replSet info ip-0A420969:31200 is down (or slow to respond): socket exception
 m31201| Wed Jun 13 10:35:52 [rsHealthPoll] replSet member ip-0A420969:31200 is now in state DOWN
 m31201| Wed Jun 13 10:35:52 [rsMgr] replSet can't see a majority, will not try to elect self
 m31201| Wed Jun 13 10:35:52 [initandlisten] connection accepted from 127.0.0.1:58757 #15 (9 connections now open)
 m31201| Wed Jun 13 10:35:52 [conn15] terminating, shutdown command received
 m31201| Wed Jun 13 10:35:52 dbexit: shutdown called
 m31201| Wed Jun 13 10:35:52 [conn15] shutdown: going to close listening sockets...
 m31201| Wed Jun 13 10:35:52 [conn15] closing listening socket: 520
 m31201| Wed Jun 13 10:35:52 [conn15] closing listening socket: 540
 m31201| Wed Jun 13 10:35:52 [conn15] shutdown: going to flush diaglog...
 m31201| Wed Jun 13 10:35:52 [conn15] shutdown: going to close sockets...
 m31201| Wed Jun 13 10:35:52 [conn15] shutdown: waiting for fs preallocator...
 m31201| Wed Jun 13 10:35:52 [conn15] shutdown: closing all files...
 m31201| Wed Jun 13 10:35:52 [conn15] closeAllFiles() finished
 m31201| Wed Jun 13 10:35:52 [conn15] shutdown: removing fs lock...
 m31201| Wed Jun 13 10:35:52 dbexit: really exiting now
Wed Jun 13 10:35:52 Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:31201
Wed Jun 13 10:35:52 SocketException: remote: 127.0.0.1:31201 error: 9001 socket exception [1] server [127.0.0.1:31201] 
Wed Jun 13 10:35:52 DBClientCursor::init call() failed
Wed Jun 13 10:35:52 shell: stopped mongo program on port 31201
ReplSetTest stopSet deleting all dbpaths
 m30999| Wed Jun 13 10:35:52 [WriteBackListener-ip-0A420969:31201] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31201
 m30999| Wed Jun 13 10:35:52 [WriteBackListener-ip-0A420969:31201] SocketException: remote: 10.66.9.105:31201 error: 9001 socket exception [1] server [10.66.9.105:31201] 
 m30999| Wed Jun 13 10:35:52 [WriteBackListener-ip-0A420969:31201] DBClientCursor::init call() failed
 m30999| Wed Jun 13 10:35:52 [WriteBackListener-ip-0A420969:31201] WriteBackListener exception : DBClientBase::findN: transport error: ip-0A420969:31201 ns: admin.$cmd query: { writebacklisten: ObjectId('4fd8a477d1d821664bf17406') }
ReplSetTest stopSet *** Shut down repl set - test worked ****
Sleeping for 20 seconds to let the other shard's ReplicaSetMonitor time out
 m31101| Wed Jun 13 10:35:53 [conn21] end connection 10.66.9.105:58749 (7 connections now open)
 m31101| Wed Jun 13 10:35:53 [initandlisten] connection accepted from 10.66.9.105:58758 #22 (8 connections now open)
 m31100| Wed Jun 13 10:35:56 [conn36] end connection 10.66.9.105:58750 (13 connections now open)
 m31100| Wed Jun 13 10:35:56 [initandlisten] connection accepted from 10.66.9.105:58759 #37 (14 connections now open)
 m30999| Wed Jun 13 10:35:57 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a54dd1d821664bf17427
 m30999| Wed Jun 13 10:35:57 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
 m31100| Wed Jun 13 10:35:58 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
 m31100| Wed Jun 13 10:35:58 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
 m31100| Wed Jun 13 10:35:59 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
 m31100| Wed Jun 13 10:35:59 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31201
Wed Jun 13 10:35:59 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31200
Wed Jun 13 10:35:59 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
 m31100| Wed Jun 13 10:36:00 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
Wed Jun 13 10:36:00 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:36:00 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31201
 m31100| Wed Jun 13 10:36:01 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
 m31100| Wed Jun 13 10:36:01 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31200 socket exception
 m31100| Wed Jun 13 10:36:01 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
Wed Jun 13 10:36:01 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
 m31100| Wed Jun 13 10:36:02 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
 m31100| Wed Jun 13 10:36:02 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31201 socket exception
Wed Jun 13 10:36:02 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:36:02 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31200 socket exception
Wed Jun 13 10:36:02 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
 m31100| Wed Jun 13 10:36:03 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
 m31100| Wed Jun 13 10:36:03 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 1 checks in a row. Polling will stop after 0 more failed checks
 m31100| Wed Jun 13 10:36:03 [ReplicaSetMonitorWatcher] Replica set remove2-rs1 was down for 1 checks in a row. Stopping polled monitoring of the set.
 m31100| Wed Jun 13 10:36:03 [ReplicaSetMonitorWatcher] deleting replica set monitor for: remove2-rs1/ip-0A420969:31200,ip-0A420969:31201
Wed Jun 13 10:36:03 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
Wed Jun 13 10:36:03 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31201 socket exception
Wed Jun 13 10:36:04 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
Wed Jun 13 10:36:04 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 4 checks in a row. Polling will stop after 26 more failed checks
 m30999| Wed Jun 13 10:36:07 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a557d1d821664bf17428
 m30999| Wed Jun 13 10:36:07 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 32700, 32701 ]	32700 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 32700,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "remove2-rs1",
	"dbpath" : "$set-$node",
	"restart" : undefined,
	"pathOpts" : {
		"node" : 0,
		"set" : "remove2-rs1"
	}
}
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs1-0'
Wed Jun 13 10:36:12 shell: started program mongod.exe --oplogSize 40 --port 32700 --noprealloc --smallfiles --rest --replSet remove2-rs1 --dbpath /data/db/remove2-rs1-0
 m32700| note: noprealloc may hurt performance in many applications
 m32700| Wed Jun 13 10:36:12 
 m32700| Wed Jun 13 10:36:12 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
 m32700| Wed Jun 13 10:36:12 
 m32700| Wed Jun 13 10:36:13 [initandlisten] MongoDB starting : pid=720 port=32700 dbpath=/data/db/remove2-rs1-0 32-bit host=ip-0A420969
 m32700| Wed Jun 13 10:36:13 [initandlisten] 
 m32700| Wed Jun 13 10:36:13 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
 m32700| Wed Jun 13 10:36:13 [initandlisten] **       Not recommended for production.
 m32700| Wed Jun 13 10:36:13 [initandlisten] 
 m32700| Wed Jun 13 10:36:13 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
 m32700| Wed Jun 13 10:36:13 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
 m32700| Wed Jun 13 10:36:13 [initandlisten] **       with --journal, the limit is lower
 m32700| Wed Jun 13 10:36:13 [initandlisten] 
 m32700| Wed Jun 13 10:36:13 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
 m32700| Wed Jun 13 10:36:13 [initandlisten] git version: 163a2d64ee88f7a4efb604f6208578ef117c4bc3
 m32700| Wed Jun 13 10:36:13 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_49
 m32700| Wed Jun 13 10:36:13 [initandlisten] options: { dbpath: "/data/db/remove2-rs1-0", noprealloc: true, oplogSize: 40, port: 32700, replSet: "remove2-rs1", rest: true, smallfiles: true }
 m32700| Wed Jun 13 10:36:13 [websvr] admin web console waiting for connections on port 33700
 m32700| Wed Jun 13 10:36:13 [initandlisten] waiting for connections on port 32700
 m32700| Wed Jun 13 10:36:13 [initandlisten] connection accepted from 127.0.0.1:58767 #1 (1 connection now open)
 m32700| Wed Jun 13 10:36:13 [conn1] end connection 127.0.0.1:58767 (0 connections now open)
 m32700| Wed Jun 13 10:36:13 [initandlisten] connection accepted from 127.0.0.1:58768 #2 (1 connection now open)
 m32700| Wed Jun 13 10:36:13 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
 m32700| Wed Jun 13 10:36:13 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
[ connection to ip-0A420969:32700 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 32700, 32701 ]	32701 number
 m32700| Wed Jun 13 10:36:13 [initandlisten] connection accepted from 127.0.0.1:58766 #3 (2 connections now open)
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 32701,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "remove2-rs1",
	"dbpath" : "$set-$node",
	"restart" : undefined,
	"pathOpts" : {
		"node" : 1,
		"set" : "remove2-rs1"
	}
}
ReplSetTest Starting....
Resetting db path '/data/db/remove2-rs1-1'
Wed Jun 13 10:36:13 shell: started program mongod.exe --oplogSize 40 --port 32701 --noprealloc --smallfiles --rest --replSet remove2-rs1 --dbpath /data/db/remove2-rs1-1
 m32701| note: noprealloc may hurt performance in many applications
 m32701| Wed Jun 13 10:36:13 
 m32701| Wed Jun 13 10:36:13 warning: 32-bit servers don't have journaling enabled by default. Please use --journal if you want durability.
 m32701| Wed Jun 13 10:36:13 
 m32701| Wed Jun 13 10:36:13 [initandlisten] MongoDB starting : pid=2320 port=32701 dbpath=/data/db/remove2-rs1-1 32-bit host=ip-0A420969
 m32701| Wed Jun 13 10:36:13 [initandlisten] 
 m32701| Wed Jun 13 10:36:13 [initandlisten] ** NOTE: This is a development version (2.1.2-pre-) of MongoDB.
 m32701| Wed Jun 13 10:36:13 [initandlisten] **       Not recommended for production.
 m32701| Wed Jun 13 10:36:13 [initandlisten] 
 m32701| Wed Jun 13 10:36:13 [initandlisten] ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
 m32701| Wed Jun 13 10:36:13 [initandlisten] **       see http://blog.mongodb.org/post/137788967/32-bit-limitations
 m32701| Wed Jun 13 10:36:13 [initandlisten] **       with --journal, the limit is lower
 m32701| Wed Jun 13 10:36:13 [initandlisten] 
 m32701| Wed Jun 13 10:36:13 [initandlisten] db version v2.1.2-pre-, pdfile version 4.5
 m32701| Wed Jun 13 10:36:13 [initandlisten] git version: 163a2d64ee88f7a4efb604f6208578ef117c4bc3
 m32701| Wed Jun 13 10:36:13 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=0, build=6002, platform=2, service_pack='Service Pack 2') BOOST_LIB_VERSION=1_49
 m32701| Wed Jun 13 10:36:13 [initandlisten] options: { dbpath: "/data/db/remove2-rs1-1", noprealloc: true, oplogSize: 40, port: 32701, replSet: "remove2-rs1", rest: true, smallfiles: true }
 m32701| Wed Jun 13 10:36:13 [initandlisten] waiting for connections on port 32701
 m32701| Wed Jun 13 10:36:13 [websvr] admin web console waiting for connections on port 33701
 m32701| Wed Jun 13 10:36:13 [initandlisten] connection accepted from 127.0.0.1:58770 #1 (1 connection now open)
 m32701| Wed Jun 13 10:36:13 [conn1] end connection 127.0.0.1:58770 (0 connections now open)
 m32701| Wed Jun 13 10:36:13 [initandlisten] connection accepted from 127.0.0.1:58771 #2 (1 connection now open)
 m32701| Wed Jun 13 10:36:13 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
 m32701| Wed Jun 13 10:36:13 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
 m32701| Wed Jun 13 10:36:13 [initandlisten] connection accepted from 127.0.0.1:58769 #3 (2 connections now open)
[ connection to ip-0A420969:32700, connection to ip-0A420969:32701 ]
{
	"replSetInitiate" : {
		"_id" : "remove2-rs1",
		"members" : [
			{
				"_id" : 0,
				"host" : "ip-0A420969:32700"
			},
			{
				"_id" : 1,
				"host" : "ip-0A420969:32701"
			}
		]
	}
}
 m32700| Wed Jun 13 10:36:13 [conn3] replSet replSetInitiate admin command received from client
 m32700| Wed Jun 13 10:36:13 [conn3] replSet replSetInitiate config object parses ok, 2 members specified
 m32700| Wed Jun 13 10:36:13 [initandlisten] connection accepted from 10.66.9.105:58772 #4 (3 connections now open)
 m32700| Wed Jun 13 10:36:13 [conn4] end connection 10.66.9.105:58772 (2 connections now open)
 m32701| Wed Jun 13 10:36:13 [initandlisten] connection accepted from 10.66.9.105:58773 #4 (3 connections now open)
 m32700| Wed Jun 13 10:36:13 [conn3] replSet replSetInitiate all members seem up
 m32700| Wed Jun 13 10:36:13 [conn3] ******
 m32700| Wed Jun 13 10:36:13 [conn3] creating replication oplog of size: 40MB...
 m32700| Wed Jun 13 10:36:13 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/local.ns, filling with zeroes...
 m32700| Wed Jun 13 10:36:13 [FileAllocator] creating directory /data/db/remove2-rs1-0/_tmp
 m32700| Wed Jun 13 10:36:14 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/local.ns, size: 16MB,  took 0.049 secs
 m32700| Wed Jun 13 10:36:14 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/local.0, filling with zeroes...
 m32700| Wed Jun 13 10:36:14 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/local.0, size: 64MB,  took 0.198 secs
Wed Jun 13 10:36:14 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
Wed Jun 13 10:36:15 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:36:15 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
Wed Jun 13 10:36:16 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
 m32700| Wed Jun 13 10:36:17 [conn3] ******
 m32700| Wed Jun 13 10:36:17 [conn3] replSet info saving a newer config version to local.system.replset
 m32700| Wed Jun 13 10:36:17 [conn3] replSet saveConfigLocally done
 m32700| Wed Jun 13 10:36:17 [conn3] replSet replSetInitiate config now saved locally.  Should come online in about a minute.
 m32700| Wed Jun 13 10:36:17 [conn3] command admin.$cmd command: { replSetInitiate: { _id: "remove2-rs1", members: [ { _id: 0.0, host: "ip-0A420969:32700" }, { _id: 1.0, host: "ip-0A420969:32701" } ] } } ntoreturn:1 keyUpdates:0 locks(micros) W:3085480 w:35 reslen:112 3083ms
{
	"info" : "Config now saved locally.  Should come online in about a minute.",
	"ok" : 1
}
 m30999| Wed Jun 13 10:36:17 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a561d1d821664bf17429
 m30999| Wed Jun 13 10:36:17 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
Wed Jun 13 10:36:17 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
Wed Jun 13 10:36:18 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:36:18 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31200 socket exception
Wed Jun 13 10:36:18 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
Wed Jun 13 10:36:19 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
Wed Jun 13 10:36:19 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31201 socket exception
Wed Jun 13 10:36:20 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
Wed Jun 13 10:36:20 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 5 checks in a row. Polling will stop after 25 more failed checks
 m32700| Wed Jun 13 10:36:23 [rsStart] replSet I am ip-0A420969:32700
 m32700| Wed Jun 13 10:36:23 [rsStart] replSet STARTUP2
 m32700| Wed Jun 13 10:36:23 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
 m32700| Wed Jun 13 10:36:23 [rsHealthPoll] replSet member ip-0A420969:32701 is up
 m32700| Wed Jun 13 10:36:23 [rsSync] replSet SECONDARY
 m32701| Wed Jun 13 10:36:23 [rsStart] trying to contact ip-0A420969:32700
 m32700| Wed Jun 13 10:36:23 [initandlisten] connection accepted from 10.66.9.105:58779 #5 (3 connections now open)
 m31101| Wed Jun 13 10:36:23 [conn22] end connection 10.66.9.105:58758 (7 connections now open)
 m31101| Wed Jun 13 10:36:23 [initandlisten] connection accepted from 10.66.9.105:58780 #23 (8 connections now open)
 m32701| Wed Jun 13 10:36:24 [initandlisten] connection accepted from 10.66.9.105:58781 #5 (4 connections now open)
 m32701| Wed Jun 13 10:36:24 [rsStart] replSet I am ip-0A420969:32701
 m32701| Wed Jun 13 10:36:24 [conn5] end connection 10.66.9.105:58781 (3 connections now open)
 m32701| Wed Jun 13 10:36:24 [rsStart] replSet got config version 1 from a remote, saving locally
 m32701| Wed Jun 13 10:36:24 [rsStart] replSet info saving a newer config version to local.system.replset
 m32701| Wed Jun 13 10:36:24 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/local.ns, filling with zeroes...
 m32701| Wed Jun 13 10:36:24 [FileAllocator] creating directory /data/db/remove2-rs1-1/_tmp
 m32701| Wed Jun 13 10:36:24 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/local.ns, size: 16MB,  took 0.05 secs
 m32701| Wed Jun 13 10:36:24 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/local.0, filling with zeroes...
 m32701| Wed Jun 13 10:36:24 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/local.0, size: 16MB,  took 0.05 secs
 m32701| Wed Jun 13 10:36:24 [rsStart] replSet saveConfigLocally done
 m32701| Wed Jun 13 10:36:24 [rsStart] replSet STARTUP2
 m32701| Wed Jun 13 10:36:24 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
 m32701| Wed Jun 13 10:36:24 [rsSync] ******
 m32701| Wed Jun 13 10:36:24 [rsSync] creating replication oplog of size: 40MB...
 m32701| Wed Jun 13 10:36:24 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/local.1, filling with zeroes...
 m32701| Wed Jun 13 10:36:24 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/local.1, size: 64MB,  took 0.202 secs
 m32700| Wed Jun 13 10:36:25 [rsHealthPoll] replSet member ip-0A420969:32701 is now in state STARTUP2
 m32700| Wed Jun 13 10:36:25 [rsMgr] not electing self, ip-0A420969:32701 would veto
 m32701| Wed Jun 13 10:36:26 [rsHealthPoll] replSet member ip-0A420969:32700 is up
 m32701| Wed Jun 13 10:36:26 [rsHealthPoll] replSet member ip-0A420969:32700 is now in state SECONDARY
 m31100| Wed Jun 13 10:36:26 [conn37] end connection 10.66.9.105:58759 (13 connections now open)
 m31100| Wed Jun 13 10:36:26 [initandlisten] connection accepted from 10.66.9.105:58782 #38 (14 connections now open)
 m32701| Wed Jun 13 10:36:27 [rsSync] ******
 m32701| Wed Jun 13 10:36:27 [rsSync] replSet initial sync pending
 m32701| Wed Jun 13 10:36:27 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
 m30999| Wed Jun 13 10:36:27 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a56bd1d821664bf1742a
 m30999| Wed Jun 13 10:36:27 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
Wed Jun 13 10:36:30 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
 m32700| Wed Jun 13 10:36:31 [rsMgr] replSet info electSelf 0
 m32701| Wed Jun 13 10:36:31 [conn4] replSet RECOVERING
 m32701| Wed Jun 13 10:36:31 [conn4] replSet info voting yea for ip-0A420969:32700 (0)
 m32700| Wed Jun 13 10:36:31 [rsMgr] replSet PRIMARY
ReplSetTest Timestamp(1339598177000, 1)
ReplSetTest waiting for connection to ip-0A420969:32701 to have an oplog built.
Wed Jun 13 10:36:31 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:36:31 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
 m32701| Wed Jun 13 10:36:32 [rsHealthPoll] replSet member ip-0A420969:32700 is now in state PRIMARY
Wed Jun 13 10:36:32 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
 m32700| Wed Jun 13 10:36:33 [rsHealthPoll] replSet member ip-0A420969:32701 is now in state RECOVERING
ReplSetTest waiting for connection to ip-0A420969:32701 to have an oplog built.
Wed Jun 13 10:36:33 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
Wed Jun 13 10:36:34 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:36:34 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31200 socket exception
Wed Jun 13 10:36:34 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
ReplSetTest waiting for connection to ip-0A420969:32701 to have an oplog built.
Wed Jun 13 10:36:35 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
Wed Jun 13 10:36:35 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31201 socket exception
Wed Jun 13 10:36:36 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
Wed Jun 13 10:36:36 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 6 checks in a row. Polling will stop after 24 more failed checks
ReplSetTest waiting for connection to ip-0A420969:32701 to have an oplog built.
 m30999| Wed Jun 13 10:36:37 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a575d1d821664bf1742b
 m30999| Wed Jun 13 10:36:37 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ReplSetTest waiting for connection to ip-0A420969:32701 to have an oplog built.
ReplSetTest waiting for connection to ip-0A420969:32701 to have an oplog built.
ReplSetTest waiting for connection to ip-0A420969:32701 to have an oplog built.
 m32701| Wed Jun 13 10:36:43 [rsSync] replSet initial sync pending
 m32701| Wed Jun 13 10:36:43 [rsSync] replSet syncing to: ip-0A420969:32700
 m32700| Wed Jun 13 10:36:43 [initandlisten] connection accepted from 10.66.9.105:58787 #6 (4 connections now open)
 m32701| Wed Jun 13 10:36:43 [rsSync] build index local.me { _id: 1 }
 m32701| Wed Jun 13 10:36:43 [rsSync] build index done.  scanned 0 total records. 0.001 secs
 m32701| Wed Jun 13 10:36:43 [rsSync] replSet initial sync drop all databases
 m32701| Wed Jun 13 10:36:43 [rsSync] dropAllDatabasesExceptLocal 1
 m32701| Wed Jun 13 10:36:43 [rsSync] replSet initial sync clone all databases
 m32701| Wed Jun 13 10:36:43 [rsSync] replSet initial sync data copy, starting syncup
 m32701| Wed Jun 13 10:36:43 [rsSync] replSet initial sync building indexes
 m32701| Wed Jun 13 10:36:43 [rsSync] replSet initial sync query minValid
 m32701| Wed Jun 13 10:36:43 [rsSync] replSet initial sync finishing up
 m32701| Wed Jun 13 10:36:43 [rsSync] replSet set minValid=4fd8a561:1
 m32701| Wed Jun 13 10:36:43 [rsSync] build index local.replset.minvalid { _id: 1 }
 m32701| Wed Jun 13 10:36:43 [rsSync] build index done.  scanned 0 total records. 0 secs
 m32701| Wed Jun 13 10:36:43 [rsSync] replSet initial sync done
 m32700| Wed Jun 13 10:36:43 [conn6] end connection 10.66.9.105:58787 (3 connections now open)
 m32701| Wed Jun 13 10:36:44 [rsSync] replSet SECONDARY
 m32701| Wed Jun 13 10:36:44 [rsBackgroundSync] replSet syncing to: ip-0A420969:32700
 m32700| Wed Jun 13 10:36:44 [initandlisten] connection accepted from 10.66.9.105:58788 #7 (4 connections now open)
 m32700| Wed Jun 13 10:36:45 [rsHealthPoll] replSet member ip-0A420969:32701 is now in state SECONDARY
{
	"ts" : Timestamp(1339598177000, 1),
	"h" : NumberLong(0),
	"op" : "n",
	"ns" : "",
	"o" : {
		"msg" : "initiating set"
	}
}
ReplSetTest await TS for connection to ip-0A420969:32701 is 1339598177000:1 and latest is 1339598177000:1
ReplSetTest await oplog size for connection to ip-0A420969:32701 is 1
ReplSetTest await synced=true
Adding shard with seed: remove2-rs1/ip-0A420969:32700,ip-0A420969:32701
 m30999| Wed Jun 13 10:36:45 [conn1] warning: scoped connection to Trying to get server address for DBClientReplicaSet, but no ReplicaSetMonitor exists for remove2-rs1
 m30999| Wed Jun 13 10:36:45 [conn1] remove2-rs1/ not being returned to the pool
 m30999| Wed Jun 13 10:36:45 [conn1] addshard request { addshard: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701" } failed: couldn't connect to new shard No replica set monitor active and no cached seed found for set: remove2-rs1
First attempt to addShard failed, trying again
 m30999| Wed Jun 13 10:36:45 [conn1] starting new replica set monitor for replica set remove2-rs1 with seed of ip-0A420969:32700,ip-0A420969:32701
 m32700| Wed Jun 13 10:36:45 [initandlisten] connection accepted from 10.66.9.105:58789 #8 (5 connections now open)
 m30999| Wed Jun 13 10:36:45 [conn1] successfully connected to seed ip-0A420969:32700 for replica set remove2-rs1
 m30999| Wed Jun 13 10:36:45 [conn1] changing hosts to { 0: "ip-0A420969:32700", 1: "ip-0A420969:32701" } from remove2-rs1/
 m30999| Wed Jun 13 10:36:45 [conn1] trying to add new host ip-0A420969:32700 to replica set remove2-rs1
 m30999| Wed Jun 13 10:36:45 [conn1] successfully connected to new host ip-0A420969:32700 in replica set remove2-rs1
 m30999| Wed Jun 13 10:36:45 [conn1] trying to add new host ip-0A420969:32701 to replica set remove2-rs1
 m32700| Wed Jun 13 10:36:45 [initandlisten] connection accepted from 10.66.9.105:58790 #9 (6 connections now open)
 m32701| Wed Jun 13 10:36:45 [initandlisten] connection accepted from 10.66.9.105:58791 #6 (4 connections now open)
 m30999| Wed Jun 13 10:36:45 [conn1] successfully connected to new host ip-0A420969:32701 in replica set remove2-rs1
 m32700| Wed Jun 13 10:36:45 [initandlisten] connection accepted from 10.66.9.105:58792 #10 (7 connections now open)
 m32700| Wed Jun 13 10:36:45 [conn8] end connection 10.66.9.105:58789 (6 connections now open)
 m30999| Wed Jun 13 10:36:45 [conn1] Primary for replica set remove2-rs1 changed to ip-0A420969:32700
 m32701| Wed Jun 13 10:36:45 [initandlisten] connection accepted from 10.66.9.105:58793 #7 (5 connections now open)
 m30999| Wed Jun 13 10:36:45 [conn1] replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/ip-0A420969:32700,ip-0A420969:32701
 m32700| Wed Jun 13 10:36:45 [initandlisten] connection accepted from 10.66.9.105:58794 #11 (7 connections now open)
 m30999| Wed Jun 13 10:36:45 [conn1] going to add shard: { _id: "remove2-rs1", host: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701" }
 m30999| Wed Jun 13 10:36:45 [mongosMain] connection accepted from 10.66.9.105:58795 #5 (5 connections now open)
Awaiting ip-0A420969:32701 to be { "ok" : true, "secondary" : true } for connection to ip-0A420969:30999 (rs: undefined)
{
	"remove2-rs0" : {
		"hosts" : [
			{
				"addr" : "ip-0A420969:31100",
				"ok" : true,
				"ismaster" : true,
				"hidden" : false,
				"secondary" : false,
				"pingTimeMillis" : 0
			},
			{
				"addr" : "ip-0A420969:31101",
				"ok" : true,
				"ismaster" : false,
				"hidden" : false,
				"secondary" : true,
				"pingTimeMillis" : 0
			}
		],
		"master" : 0,
		"nextSlave" : 0
	},
	"remove2-rs1" : {
		"hosts" : [
			{
				"addr" : "ip-0A420969:32700",
				"ok" : true,
				"ismaster" : true,
				"hidden" : false,
				"secondary" : false,
				"pingTimeMillis" : 0
			},
			{
				"addr" : "ip-0A420969:32701",
				"ok" : true,
				"ismaster" : false,
				"hidden" : false,
				"secondary" : true,
				"pingTimeMillis" : 0
			}
		],
		"master" : 0,
		"nextSlave" : 0
	}
}
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m32701| Wed Jun 13 10:36:45 [rsSyncNotifier] replset setting oplog notifier to ip-0A420969:32700
 m32700| Wed Jun 13 10:36:45 [initandlisten] connection accepted from 10.66.9.105:58796 #12 (8 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31101| Wed Jun 13 10:36:46 [clientcursormon] mem (MB) res:87 virt:252 mapped:160
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m32700| Wed Jun 13 10:36:46 [slaveTracking] build index local.slaves { _id: 1 }
 m32700| Wed Jun 13 10:36:46 [slaveTracking] build index done.  scanned 0 total records. 0.001 secs
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
Wed Jun 13 10:36:46 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m30999| Wed Jun 13 10:36:47 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a57fd1d821664bf1742c
 m30999| Wed Jun 13 10:36:47 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_MinKey", lastmod: Timestamp 25000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:36:47 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 25|0||000000000000000000000000 min: { i: MinKey } max: { i: 0.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:32700,ip-0A420969:32701
 m31100| Wed Jun 13 10:36:47 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:36:47 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:36:47 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a57f1402b5052316d788
 m31100| Wed Jun 13 10:36:47 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:36:47-57", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598207711), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:36:47 [conn18] moveChunk request accepted at version 25|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:36:47 [conn18] moveChunk number of documents: 0
 m31100| Wed Jun 13 10:36:47 [conn18] starting new replica set monitor for replica set remove2-rs1 with seed of ip-0A420969:31200,ip-0A420969:31201
Wed Jun 13 10:36:47 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:36:47 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31100| Wed Jun 13 10:36:48 [conn18] error connecting to seed ip-0A420969:31200 :: caused by :: 15928 couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:36:48 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31100| Wed Jun 13 10:36:49 [conn18] error connecting to seed ip-0A420969:31201 :: caused by :: 15928 couldn't connect to server ip-0A420969:31201
Wed Jun 13 10:36:49 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
Wed Jun 13 10:36:50 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:36:50 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31200 socket exception
Wed Jun 13 10:36:50 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m32701| Wed Jun 13 10:36:51 [conn4] end connection 10.66.9.105:58773 (4 connections now open)
 m32701| Wed Jun 13 10:36:51 [initandlisten] connection accepted from 10.66.9.105:58803 #8 (5 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31100| Wed Jun 13 10:36:51 [conn18] warning: No primary detected for set remove2-rs1
 m31100| Wed Jun 13 10:36:51 [conn18] All nodes for set remove2-rs1 are down. This has happened for 1 checks in a row. Polling will stop after 0 more failed checks
 m31100| Wed Jun 13 10:36:51 [conn18] replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
Wed Jun 13 10:36:51 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
Wed Jun 13 10:36:51 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31201 socket exception
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
Wed Jun 13 10:36:52 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
Wed Jun 13 10:36:52 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 7 checks in a row. Polling will stop after 23 more failed checks
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m30999| Wed Jun 13 10:36:53 [LockPinger] cluster ip-0A420969:29000 pinged successfully at Wed Jun 13 10:36:53 2012 by distributed lock pinger 'ip-0A420969:29000/ip-0A420969:30999:1339597943:41', sleeping for 30000ms
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31100| Wed Jun 13 10:36:53 [LockPinger] cluster ip-0A420969:29000 pinged successfully at Wed Jun 13 10:36:53 2012 by distributed lock pinger 'ip-0A420969:29000/ip-0A420969:31100:1339597943:15724', sleeping for 30000ms
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31101| Wed Jun 13 10:36:53 [conn23] end connection 10.66.9.105:58780 (7 connections now open)
 m31101| Wed Jun 13 10:36:53 [initandlisten] connection accepted from 10.66.9.105:58804 #24 (8 connections now open)
 m31100| Wed Jun 13 10:36:53 [conn18] warning: No primary detected for set remove2-rs1
 m31100| Wed Jun 13 10:36:53 [conn18] warning: moveChunk could not contact to: shard remove2-rs1 to start transfer :: caused by :: 10009 ReplicaSetMonitor no master found for set: remove2-rs1
 m31100| Wed Jun 13 10:36:53 [conn18] scoped connection to remove2-rs1/ not being returned to the pool
 m31100| Wed Jun 13 10:36:53 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:36:53 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:36:53-58", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598213710), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 6: 0, step2 of 6: 2, note: "aborted" } }
 m31100| Wed Jun 13 10:36:53 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:9748 w:207464 reslen:194 5999ms
 m30999| Wed Jun 13 10:36:53 [Balancer] moveChunk result: { errmsg: "moveChunk could not contact to: shard remove2-rs1 to start transfer :: caused by :: 10009 ReplicaSetMonitor no master found for set: remove2-rs1", ok: 0.0 }
 m30999| Wed Jun 13 10:36:53 [Balancer] balancer move failed: { errmsg: "moveChunk could not contact to: shard remove2-rs1 to start transfer :: caused by :: 10009 ReplicaSetMonitor no master found for set: remove2-rs1", ok: 0.0 } from: remove2-rs0 to: remove2-rs1 chunk: { _id: "test.remove2-i_MinKey", lastmod: Timestamp 25000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:36:53 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m32700| Wed Jun 13 10:36:54 [conn5] end connection 10.66.9.105:58779 (7 connections now open)
 m32700| Wed Jun 13 10:36:54 [initandlisten] connection accepted from 10.66.9.105:58805 #13 (8 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31100| Wed Jun 13 10:36:55 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
 m31100| Wed Jun 13 10:36:55 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 2 checks in a row. Polling will stop after -1 more failed checks
 m31100| Wed Jun 13 10:36:55 [ReplicaSetMonitorWatcher] Replica set remove2-rs1 was down for 2 checks in a row. Stopping polled monitoring of the set.
 m31100| Wed Jun 13 10:36:55 [ReplicaSetMonitorWatcher] deleting replica set monitor for: remove2-rs1/
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31100| Wed Jun 13 10:36:56 [conn38] end connection 10.66.9.105:58782 (13 connections now open)
 m31100| Wed Jun 13 10:36:56 [initandlisten] connection accepted from 10.66.9.105:58806 #39 (14 connections now open)
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
Wed Jun 13 10:37:02 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m30999| Wed Jun 13 10:37:03 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a58fd1d821664bf1742d
 m30999| Wed Jun 13 10:37:03 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_MinKey", lastmod: Timestamp 25000|0, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: MinKey }, max: { i: 0.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:37:03 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 25|0||000000000000000000000000 min: { i: MinKey } max: { i: 0.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:32700,ip-0A420969:32701
 m31100| Wed Jun 13 10:37:03 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:37:03 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:37:03 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a58f1402b5052316d789
 m31100| Wed Jun 13 10:37:03 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:03-59", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598223714), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:37:03 [conn18] moveChunk request accepted at version 25|0||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:37:03 [conn18] moveChunk number of documents: 0
 m31100| Wed Jun 13 10:37:03 [conn18] starting new replica set monitor for replica set remove2-rs1 with seed of ip-0A420969:32700,ip-0A420969:32701
 m31100| Wed Jun 13 10:37:03 [conn18] successfully connected to seed ip-0A420969:32700 for replica set remove2-rs1
 m32700| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58808 #14 (9 connections now open)
 m31100| Wed Jun 13 10:37:03 [conn18] changing hosts to { 0: "ip-0A420969:32700", 1: "ip-0A420969:32701" } from remove2-rs1/
 m31100| Wed Jun 13 10:37:03 [conn18] trying to add new host ip-0A420969:32700 to replica set remove2-rs1
 m31100| Wed Jun 13 10:37:03 [conn18] successfully connected to new host ip-0A420969:32700 in replica set remove2-rs1
 m31100| Wed Jun 13 10:37:03 [conn18] trying to add new host ip-0A420969:32701 to replica set remove2-rs1
 m32700| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58809 #15 (10 connections now open)
 m32701| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58810 #9 (6 connections now open)
 m31100| Wed Jun 13 10:37:03 [conn18] successfully connected to new host ip-0A420969:32701 in replica set remove2-rs1
 m32700| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58811 #16 (11 connections now open)
 m32700| Wed Jun 13 10:37:03 [conn14] end connection 10.66.9.105:58808 (10 connections now open)
 m31100| Wed Jun 13 10:37:03 [conn18] Primary for replica set remove2-rs1 changed to ip-0A420969:32700
 m32701| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58812 #10 (7 connections now open)
 m31100| Wed Jun 13 10:37:03 [conn18] replica set monitor for replica set remove2-rs1 started, address is remove2-rs1/ip-0A420969:32700,ip-0A420969:32701
 m32700| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58813 #17 (11 connections now open)
 m32700| Wed Jun 13 10:37:03 [migrateThread] starting new replica set monitor for replica set remove2-rs0 with seed of ip-0A420969:31100,ip-0A420969:31101
 m32700| Wed Jun 13 10:37:03 [migrateThread] successfully connected to seed ip-0A420969:31100 for replica set remove2-rs0
 m32700| Wed Jun 13 10:37:03 [migrateThread] changing hosts to { 0: "ip-0A420969:31100", 1: "ip-0A420969:31101" } from remove2-rs0/
 m32700| Wed Jun 13 10:37:03 [migrateThread] trying to add new host ip-0A420969:31100 to replica set remove2-rs0
 m32700| Wed Jun 13 10:37:03 [migrateThread] successfully connected to new host ip-0A420969:31100 in replica set remove2-rs0
 m32700| Wed Jun 13 10:37:03 [migrateThread] trying to add new host ip-0A420969:31101 to replica set remove2-rs0
 m32700| Wed Jun 13 10:37:03 [migrateThread] successfully connected to new host ip-0A420969:31101 in replica set remove2-rs0
 m32700| Wed Jun 13 10:37:03 [migrateThread] Primary for replica set remove2-rs0 changed to ip-0A420969:31100
 m32700| Wed Jun 13 10:37:03 [migrateThread] replica set monitor for replica set remove2-rs0 started, address is remove2-rs0/ip-0A420969:31100,ip-0A420969:31101
 m32700| Wed Jun 13 10:37:03 [ReplicaSetMonitorWatcher] starting
 m32700| Wed Jun 13 10:37:03 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/test.ns, filling with zeroes...
 m31100| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58814 #40 (15 connections now open)
 m31100| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58815 #41 (16 connections now open)
 m31100| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58817 #42 (17 connections now open)
 m31100| Wed Jun 13 10:37:03 [conn40] end connection 10.66.9.105:58814 (16 connections now open)
 m31100| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58819 #43 (17 connections now open)
 m31101| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58816 #25 (9 connections now open)
 m31101| Wed Jun 13 10:37:03 [initandlisten] connection accepted from 10.66.9.105:58818 #26 (10 connections now open)
 m32700| Wed Jun 13 10:37:03 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/test.ns, size: 16MB,  took 0.05 secs
 m32700| Wed Jun 13 10:37:03 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/test.0, filling with zeroes...
Wed Jun 13 10:37:03 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:37:03 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m32700| Wed Jun 13 10:37:03 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/test.0, size: 16MB,  took 0.05 secs
 m32700| Wed Jun 13 10:37:03 [migrateThread] build index test.remove2 { _id: 1 }
 m32700| Wed Jun 13 10:37:03 [migrateThread] build index done.  scanned 0 total records. 0 secs
 m32700| Wed Jun 13 10:37:03 [migrateThread] info: creating collection test.remove2 on add index
 m32700| Wed Jun 13 10:37:03 [migrateThread] build index test.remove2 { i: 1.0 }
 m32700| Wed Jun 13 10:37:03 [migrateThread] build index done.  scanned 0 total records. 0 secs
 m32700| Wed Jun 13 10:37:03 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m32701| Wed Jun 13 10:37:04 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/test.ns, filling with zeroes...
 m32701| Wed Jun 13 10:37:04 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/test.ns, size: 16MB,  took 0.05 secs
 m32701| Wed Jun 13 10:37:04 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/test.0, filling with zeroes...
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m32701| Wed Jun 13 10:37:04 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/test.0, size: 16MB,  took 0.051 secs
 m32701| Wed Jun 13 10:37:04 [rsSync] build index test.remove2 { _id: 1 }
 m32701| Wed Jun 13 10:37:04 [rsSync] build index done.  scanned 0 total records. 0 secs
 m32701| Wed Jun 13 10:37:04 [rsSync] info: creating collection test.remove2 on add index
 m32701| Wed Jun 13 10:37:04 [rsSync] build index test.remove2 { i: 1.0 }
 m32701| Wed Jun 13 10:37:04 [rsSync] build index done.  scanned 0 total records. 0 secs
ShardingTest input: { "remove2-rs0" : 8, "remove2-rs1" : 0 } min: 0 max: 8
chunk diff: 8
 m31100| Wed Jun 13 10:37:04 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:37:04 [conn18] moveChunk setting version to: 26|0||4fd8a477d1d821664bf17408
 m32700| Wed Jun 13 10:37:04 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: MinKey } -> { i: 0.0 }
 m32700| Wed Jun 13 10:37:04 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:04-0", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598224728), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 5: 114, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 890 } }
 m31100| Wed Jun 13 10:37:04 [conn18] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: MinKey }, max: { i: 0.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:37:04 [conn18] moveChunk updating self version to: 26|1||4fd8a477d1d821664bf17408 through { i: 0.0 } -> { i: 1.0 } for collection 'test.remove2'
 m29000| Wed Jun 13 10:37:04 [initandlisten] connection accepted from 10.66.9.105:58821 #16 (11 connections now open)
 m31100| Wed Jun 13 10:37:04 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:04-60", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598224730), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:37:04 [conn18] doing delete inline
 m31100| Wed Jun 13 10:37:04 [conn18] moveChunk deleted: 0
 m31100| Wed Jun 13 10:37:04 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:37:04 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:04-61", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598224731), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: MinKey }, max: { i: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 7, step4 of 6: 1000, step5 of 6: 6, step6 of 6: 0 } }
 m31100| Wed Jun 13 10:37:04 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: MinKey }, max: { i: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_MinKey", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:9873 w:207539 reslen:37 1017ms
 m30999| Wed Jun 13 10:37:04 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 34 version: 26|1||4fd8a477d1d821664bf17408 based on: 25|0||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:37:04 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
Wed Jun 13 10:37:04 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
Wed Jun 13 10:37:05 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
Wed Jun 13 10:37:06 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:37:06 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31200 socket exception
Wed Jun 13 10:37:06 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
Wed Jun 13 10:37:07 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
Wed Jun 13 10:37:07 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31201 socket exception
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
Wed Jun 13 10:37:08 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
Wed Jun 13 10:37:08 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 8 checks in a row. Polling will stop after 22 more failed checks
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
 m30999| Wed Jun 13 10:37:09 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a595d1d821664bf1742e
 m30999| Wed Jun 13 10:37:09 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_0.0", lastmod: Timestamp 26000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 0.0 }, max: { i: 1.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:37:09 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 26|1||000000000000000000000000 min: { i: 0.0 } max: { i: 1.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:32700,ip-0A420969:32701
 m31100| Wed Jun 13 10:37:09 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:37:09 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:37:09 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a5951402b5052316d78a
 m31100| Wed Jun 13 10:37:09 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:09-62", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598229737), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:37:09 [conn18] moveChunk request accepted at version 26|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:37:09 [conn18] moveChunk number of documents: 30
 m32700| Wed Jun 13 10:37:09 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
ShardingTest input: { "remove2-rs0" : 7, "remove2-rs1" : 1 } min: 1 max: 7
chunk diff: 6
 m31100| Wed Jun 13 10:37:10 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:37:10 [conn18] moveChunk setting version to: 27|0||4fd8a477d1d821664bf17408
 m32700| Wed Jun 13 10:37:10 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 0.0 } -> { i: 1.0 }
 m32700| Wed Jun 13 10:37:10 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:10-1", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598230740), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 990 } }
 m31100| Wed Jun 13 10:37:10 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 0.0 }, max: { i: 1.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 30, clonedBytes: 492810, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:37:10 [conn18] moveChunk updating self version to: 27|1||4fd8a477d1d821664bf17408 through { i: 1.0 } -> { i: 3.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:37:10 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:10-63", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598230741), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:37:10 [conn18] doing delete inline
 m31100| Wed Jun 13 10:37:10 [conn18] moveChunk deleted: 30
 m31100| Wed Jun 13 10:37:10 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:37:10 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:10-64", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598230747), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 0.0 }, max: { i: 1.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 2, step6 of 6: 4 } }
 m31100| Wed Jun 13 10:37:10 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 0.0 }, max: { i: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_0.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:10356 w:212339 reslen:37 1011ms
 m30999| Wed Jun 13 10:37:10 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 35 version: 27|1||4fd8a477d1d821664bf17408 based on: 26|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:37:10 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
 m32700| Wed Jun 13 10:37:13 [clientcursormon] mem (MB) res:53 virt:204 mapped:112
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
 m32701| Wed Jun 13 10:37:13 [clientcursormon] mem (MB) res:53 virt:214 mapped:128
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
 m30999| Wed Jun 13 10:37:15 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a59bd1d821664bf1742f
 m30999| Wed Jun 13 10:37:15 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_1.0", lastmod: Timestamp 27000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 1.0 }, max: { i: 3.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:37:15 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 27|1||000000000000000000000000 min: { i: 1.0 } max: { i: 3.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:32700,ip-0A420969:32701
 m31100| Wed Jun 13 10:37:15 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:37:15 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:37:15 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a59b1402b5052316d78b
 m31100| Wed Jun 13 10:37:15 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:15-65", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598235754), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:37:15 [conn18] moveChunk request accepted at version 27|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:37:15 [conn18] moveChunk number of documents: 60
 m32700| Wed Jun 13 10:37:15 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
ShardingTest input: { "remove2-rs0" : 6, "remove2-rs1" : 2 } min: 2 max: 6
chunk diff: 4
 m31100| Wed Jun 13 10:37:16 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:37:16 [conn18] moveChunk setting version to: 28|0||4fd8a477d1d821664bf17408
 m32700| Wed Jun 13 10:37:16 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 1.0 } -> { i: 3.0 }
 m32700| Wed Jun 13 10:37:16 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:16-2", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598236764), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 16, step4 of 5: 0, step5 of 5: 990 } }
 m31100| Wed Jun 13 10:37:16 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 1.0 }, max: { i: 3.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:37:16 [conn18] moveChunk updating self version to: 28|1||4fd8a477d1d821664bf17408 through { i: 3.0 } -> { i: 5.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:37:16 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:16-66", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598236765), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:37:16 [conn18] doing delete inline
 m31100| Wed Jun 13 10:37:16 [conn18] moveChunk deleted: 60
 m31100| Wed Jun 13 10:37:16 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:37:16 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:16-67", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598236776), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 1.0 }, max: { i: 3.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 9, step6 of 6: 9 } }
 m31100| Wed Jun 13 10:37:16 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 1.0 }, max: { i: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_1.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:10558 w:221261 reslen:37 1022ms
 m30999| Wed Jun 13 10:37:16 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 36 version: 28|1||4fd8a477d1d821664bf17408 based on: 27|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:37:16 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
Wed Jun 13 10:37:18 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
Wed Jun 13 10:37:19 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:37:19 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
Wed Jun 13 10:37:20 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
 m32701| Wed Jun 13 10:37:21 [conn8] end connection 10.66.9.105:58803 (6 connections now open)
 m32701| Wed Jun 13 10:37:21 [initandlisten] connection accepted from 10.66.9.105:58827 #11 (7 connections now open)
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
 m30999| Wed Jun 13 10:37:21 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' acquired, ts : 4fd8a5a1d1d821664bf17430
 m30999| Wed Jun 13 10:37:21 [Balancer] chose [remove2-rs0] to [remove2-rs1] { _id: "test.remove2-i_3.0", lastmod: Timestamp 28000|1, lastmodEpoch: ObjectId('4fd8a477d1d821664bf17408'), ns: "test.remove2", min: { i: 3.0 }, max: { i: 5.0 }, shard: "remove2-rs0" }
 m30999| Wed Jun 13 10:37:21 [Balancer] moving chunk ns: test.remove2 moving ( ns:test.remove2 at: remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 lastmod: 28|1||000000000000000000000000 min: { i: 3.0 } max: { i: 5.0 }) remove2-rs0:remove2-rs0/ip-0A420969:31100,ip-0A420969:31101 -> remove2-rs1:remove2-rs1/ip-0A420969:32700,ip-0A420969:32701
 m31100| Wed Jun 13 10:37:21 [conn18] received moveChunk request: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" }
 m31100| Wed Jun 13 10:37:21 [conn18] created new distributed lock for test.remove2 on ip-0A420969:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m31100| Wed Jun 13 10:37:21 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' acquired, ts : 4fd8a5a11402b5052316d78c
 m31100| Wed Jun 13 10:37:21 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:21-68", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598241782), what: "moveChunk.start", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:37:21 [conn18] moveChunk request accepted at version 28|1||4fd8a477d1d821664bf17408
 m31100| Wed Jun 13 10:37:21 [conn18] moveChunk number of documents: 60
Wed Jun 13 10:37:21 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31200
 m32700| Wed Jun 13 10:37:21 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
ShardingTest input: { "remove2-rs0" : 5, "remove2-rs1" : 3 } min: 3 max: 5
chunk diff: 2
 m31100| Wed Jun 13 10:37:22 [conn18] moveChunk data transfer progress: { active: true, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "steady", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m31100| Wed Jun 13 10:37:22 [conn18] moveChunk setting version to: 29|0||4fd8a477d1d821664bf17408
Wed Jun 13 10:37:22 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31200 failed couldn't connect to server ip-0A420969:31200
Wed Jun 13 10:37:22 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31200 socket exception
Wed Jun 13 10:37:22 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31201
 m32700| Wed Jun 13 10:37:22 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.remove2' { i: 3.0 } -> { i: 5.0 }
 m32700| Wed Jun 13 10:37:22 [migrateThread] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:22-3", server: "ip-0A420969", clientAddr: ":27017", time: new Date(1339598242786), what: "moveChunk.to", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 16, step4 of 5: 0, step5 of 5: 983 } }
 m31100| Wed Jun 13 10:37:22 [conn18] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", min: { i: 3.0 }, max: { i: 5.0 }, shardKeyPattern: { i: 1 }, state: "done", counts: { cloned: 60, clonedBytes: 985620, catchup: 0, steady: 0 }, ok: 1.0 }
 m31100| Wed Jun 13 10:37:22 [conn18] moveChunk updating self version to: 29|1||4fd8a477d1d821664bf17408 through { i: 5.0 } -> { i: 6.0 } for collection 'test.remove2'
 m31100| Wed Jun 13 10:37:22 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:22-69", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598242787), what: "moveChunk.commit", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, from: "remove2-rs0", to: "remove2-rs1" } }
 m31100| Wed Jun 13 10:37:22 [conn18] doing delete inline
 m31100| Wed Jun 13 10:37:22 [conn18] moveChunk deleted: 60
 m31100| Wed Jun 13 10:37:22 [conn18] distributed lock 'test.remove2/ip-0A420969:31100:1339597943:15724' unlocked. 
 m31100| Wed Jun 13 10:37:22 [conn18] about to log metadata event: { _id: "ip-0A420969-2012-06-13T14:37:22-70", server: "ip-0A420969", clientAddr: "10.66.9.105:58636", time: new Date(1339598242802), what: "moveChunk.from", ns: "test.remove2", details: { min: { i: 3.0 }, max: { i: 5.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 999, step5 of 6: 3, step6 of 6: 13 } }
 m31100| Wed Jun 13 10:37:22 [conn18] command admin.$cmd command: { moveChunk: "test.remove2", from: "remove2-rs0/ip-0A420969:31100,ip-0A420969:31101", to: "remove2-rs1/ip-0A420969:32700,ip-0A420969:32701", fromShard: "remove2-rs0", toShard: "remove2-rs1", min: { i: 3.0 }, max: { i: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.remove2-i_3.0", configdb: "ip-0A420969:29000" } ntoreturn:1 keyUpdates:0 locks(micros) R:8 r:10777 w:233279 reslen:37 1020ms
 m30999| Wed Jun 13 10:37:22 [Balancer] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 37 version: 29|1||4fd8a477d1d821664bf17408 based on: 28|1||4fd8a477d1d821664bf17408
 m30999| Wed Jun 13 10:37:22 [Balancer] distributed lock 'balancer/ip-0A420969:30999:1339597943:41' unlocked. 
ShardingTest input: { "remove2-rs0" : 4, "remove2-rs1" : 4 } min: 4 max: 4
chunk diff: 0
 m30999| Wed Jun 13 10:37:22 [conn2] creating WriteBackListener for: ip-0A420969:32700 serverID: 4fd8a477d1d821664bf17406
 m30999| Wed Jun 13 10:37:22 [conn2] creating WriteBackListener for: ip-0A420969:32701 serverID: 4fd8a477d1d821664bf17406
 m32700| Wed Jun 13 10:37:22 [initandlisten] connection accepted from 10.66.9.105:58830 #18 (12 connections now open)
 m32700| Wed Jun 13 10:37:22 [conn18] no current chunk manager found for this shard, will initialize
 m32700| Wed Jun 13 10:37:22 [initandlisten] connection accepted from 10.66.9.105:58831 #19 (13 connections now open)
Shard added successfully
{ "ok" : 0, "errmsg" : "can't find db!" }
 m30999| Wed Jun 13 10:37:23 [conn2] ChunkManager: time to load chunks for test.remove2: 0ms sequenceNumber: 38 version: 29|1||4fd8a477d1d821664bf17408 based on: (empty)
 m30999| Wed Jun 13 10:37:23 [conn2] couldn't find database [test2] in config db
 m30999| Wed Jun 13 10:37:23 [conn2] 	 put [test2] on: remove2-rs1:remove2-rs1/ip-0A420969:32700,ip-0A420969:32701
 m32700| Wed Jun 13 10:37:23 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/test2.ns, filling with zeroes...
 m31100| Wed Jun 13 10:37:23 [conn17] command admin.$cmd command: { writebacklisten: ObjectId('4fd8a477d1d821664bf17406') } ntoreturn:1 keyUpdates:0  reslen:44 299970ms
 m31101| Wed Jun 13 10:37:23 [conn9] command admin.$cmd command: { writebacklisten: ObjectId('4fd8a477d1d821664bf17406') } ntoreturn:1 keyUpdates:0  reslen:44 299970ms
 m32701| Wed Jun 13 10:37:23 [initandlisten] connection accepted from 10.66.9.105:58832 #12 (8 connections now open)
 m32700| Wed Jun 13 10:37:23 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/test2.ns, size: 16MB,  took 0.057 secs
 m32700| Wed Jun 13 10:37:23 [FileAllocator] allocating new datafile /data/db/remove2-rs1-0/test2.0, filling with zeroes...
 m32700| Wed Jun 13 10:37:23 [FileAllocator] done allocating datafile /data/db/remove2-rs1-0/test2.0, size: 16MB,  took 0.056 secs
 m32700| Wed Jun 13 10:37:23 [conn18] build index test2.foo { _id: 1 }
 m32700| Wed Jun 13 10:37:23 [conn18] build index done.  scanned 0 total records. 0 secs
 m32700| Wed Jun 13 10:37:23 [conn18] insert test2.foo keyUpdates:0 locks(micros) W:867 r:8461 w:118970 118ms


----
finishing!
----


 m32701| Wed Jun 13 10:37:23 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/test2.ns, filling with zeroes...
 m30999| Wed Jun 13 10:37:23 [mongosMain] connection accepted from 127.0.0.1:58833 #6 (6 connections now open)
 m30999| Wed Jun 13 10:37:23 [conn6] terminating, shutdown command received
 m30999| Wed Jun 13 10:37:23 [conn6] dbexit: shutdown called rc:0 shutdown called
 m31100| Wed Jun 13 10:37:23 [conn18] end connection 10.66.9.105:58636 (16 connections now open)
 m31100| Wed Jun 13 10:37:23 [conn16] end connection 10.66.9.105:58632 (15 connections now open)
 m31100| Wed Jun 13 10:37:23 [conn20] end connection 10.66.9.105:58645 (14 connections now open)
 m31100| Wed Jun 13 10:37:23 [conn19] end connection 10.66.9.105:58644 (13 connections now open)
 m31101| Wed Jun 13 10:37:23 [conn8] end connection 10.66.9.105:58633 (9 connections now open)
 m31101| Wed Jun 13 10:37:23 [conn13] end connection 10.66.9.105:58669 (8 connections now open)
 m29000| Wed Jun 13 10:37:23 [conn4] end connection 10.66.9.105:58628 (10 connections now open)
 m29000| Wed Jun 13 10:37:23 [conn6] end connection 10.66.9.105:58630 (9 connections now open)
 m29000| Wed Jun 13 10:37:23 [conn10] end connection 10.66.9.105:58652 (8 connections now open)
 m29000| Wed Jun 13 10:37:23 [conn3] end connection 10.66.9.105:58627 (8 connections now open)
 m29000| Wed Jun 13 10:37:23 [conn5] end connection 10.66.9.105:58629 (7 connections now open)
 m31100| Wed Jun 13 10:37:23 [conn26] end connection 10.66.9.105:58668 (12 connections now open)
Wed Jun 13 10:37:23 Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:30999
Wed Jun 13 10:37:23 SocketException: remote: 127.0.0.1:30999 error: 9001 socket exception [1] server [127.0.0.1:30999] 
Wed Jun 13 10:37:23 DBClientCursor::init call() failed
 m32700| Wed Jun 13 10:37:23 [conn18] end connection 10.66.9.105:58830 (12 connections now open)
Wed Jun 13 10:37:23 shell: stopped mongo program on port 30999
 m32700| Wed Jun 13 10:37:23 [conn9] end connection 10.66.9.105:58790 (11 connections now open)
 m32700| Wed Jun 13 10:37:23 [conn11] end connection 10.66.9.105:58794 (10 connections now open)
 m32700| Wed Jun 13 10:37:23 [conn19] end connection 10.66.9.105:58831 (9 connections now open)
 m32701| Wed Jun 13 10:37:23 [conn6] end connection 10.66.9.105:58791 (7 connections now open)
 m32701| Wed Jun 13 10:37:23 [conn12] end connection 10.66.9.105:58832 (7 connections now open)
Wed Jun 13 10:37:23 No db started on port: 30000
Wed Jun 13 10:37:23 shell: stopped mongo program on port 30000
Wed Jun 13 10:37:23 No db started on port: 30001
Wed Jun 13 10:37:23 shell: stopped mongo program on port 30001
ReplSetTest n: 0 ports: [ 31100, 31101 ]	31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
 m31100| Wed Jun 13 10:37:23 [initandlisten] connection accepted from 127.0.0.1:58834 #44 (13 connections now open)
 m31100| Wed Jun 13 10:37:23 [conn44] terminating, shutdown command received
 m31100| Wed Jun 13 10:37:23 dbexit: shutdown called
 m31100| Wed Jun 13 10:37:23 [conn44] shutdown: going to close listening sockets...
 m31100| Wed Jun 13 10:37:23 [conn44] closing listening socket: 420
 m31100| Wed Jun 13 10:37:23 [conn44] closing listening socket: 460
 m31100| Wed Jun 13 10:37:23 [conn44] shutdown: going to flush diaglog...
 m31100| Wed Jun 13 10:37:23 [conn44] shutdown: going to close sockets...
 m31100| Wed Jun 13 10:37:23 [conn44] shutdown: waiting for fs preallocator...
 m31100| Wed Jun 13 10:37:23 [conn44] shutdown: closing all files...
 m31100| Wed Jun 13 10:37:23 [conn44] closeAllFiles() finished
 m31100| Wed Jun 13 10:37:23 [conn44] shutdown: removing fs lock...
 m31100| Wed Jun 13 10:37:23 dbexit: really exiting now
 m32701| Wed Jun 13 10:37:23 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/test2.ns, size: 16MB,  took 0.065 secs
 m32701| Wed Jun 13 10:37:23 [FileAllocator] allocating new datafile /data/db/remove2-rs1-1/test2.0, filling with zeroes...
 m29000| Wed Jun 13 10:37:23 [conn7] end connection 10.66.9.105:58646 (5 connections now open)
 m31101| Wed Jun 13 10:37:23 [rsBackgroundSync] Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31100
 m31101| Wed Jun 13 10:37:23 [rsBackgroundSync] SocketException: remote: 10.66.9.105:31100 error: 9001 socket exception [1] server [10.66.9.105:31100] 
 m31101| Wed Jun 13 10:37:23 [rsBackgroundSync] Socket flush send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31100
 m31101| Wed Jun 13 10:37:23 [rsBackgroundSync]   caught exception (socket exception) in destructor (mongo::PiggyBackData::~PiggyBackData)
 m31101| Wed Jun 13 10:37:23 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: ip-0A420969:31100
 m29000| Wed Jun 13 10:37:23 [conn8] end connection 10.66.9.105:58649 (4 connections now open)
 m29000| Wed Jun 13 10:37:23 [conn9] end connection 10.66.9.105:58650 (3 connections now open)
 m31101| Wed Jun 13 10:37:23 [conn24] end connection 10.66.9.105:58804 (7 connections now open)
 m32701| Wed Jun 13 10:37:23 [conn10] end connection 10.66.9.105:58812 (5 connections now open)
 m32700| Wed Jun 13 10:37:23 [conn15] end connection 10.66.9.105:58809 (8 connections now open)
 m32700| Wed Jun 13 10:37:23 [conn17] end connection 10.66.9.105:58813 (7 connections now open)
 m32700| Wed Jun 13 10:37:23 [conn16] end connection 10.66.9.105:58811 (7 connections now open)
 m32701| Wed Jun 13 10:37:23 [conn9] end connection 10.66.9.105:58810 (4 connections now open)
Wed Jun 13 10:37:23 Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:31100
Wed Jun 13 10:37:23 SocketException: remote: 127.0.0.1:31100 error: 9001 socket exception [1] server [127.0.0.1:31100] 
Wed Jun 13 10:37:23 DBClientCursor::init call() failed
Wed Jun 13 10:37:23 shell: stopped mongo program on port 31100
ReplSetTest n: 1 ports: [ 31100, 31101 ]	31101 number
ReplSetTest stop *** Shutting down mongod in port 31101 ***
 m31101| Wed Jun 13 10:37:23 [initandlisten] connection accepted from 127.0.0.1:58835 #27 (8 connections now open)
 m32701| Wed Jun 13 10:37:23 [FileAllocator] done allocating datafile /data/db/remove2-rs1-1/test2.0, size: 16MB,  took 0.058 secs
 m32701| Wed Jun 13 10:37:23 [rsSync] build index test2.foo { _id: 1 }
 m32701| Wed Jun 13 10:37:23 [rsSync] build index done.  scanned 0 total records. 0 secs
 m32700| Wed Jun 13 10:37:23 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31100
 m32700| Wed Jun 13 10:37:23 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31100
Wed Jun 13 10:37:23 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31201 failed couldn't connect to server ip-0A420969:31201
Wed Jun 13 10:37:23 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31201 socket exception
 m31101| Wed Jun 13 10:37:23 [conn27] terminating, shutdown command received
 m31101| Wed Jun 13 10:37:23 dbexit: shutdown called
 m31101| Wed Jun 13 10:37:23 [conn27] shutdown: going to close listening sockets...
 m31101| Wed Jun 13 10:37:23 [conn27] closing listening socket: 424
 m31101| Wed Jun 13 10:37:23 [conn27] closing listening socket: 432
 m31101| Wed Jun 13 10:37:23 [conn27] shutdown: going to flush diaglog...
 m31101| Wed Jun 13 10:37:23 [conn27] shutdown: going to close sockets...
 m31101| Wed Jun 13 10:37:23 [conn27] shutdown: waiting for fs preallocator...
 m31101| Wed Jun 13 10:37:23 [conn27] shutdown: closing all files...
Wed Jun 13 10:37:23 DBClientCursor::init call() failed
 m31101| Wed Jun 13 10:37:23 [conn2] end connection 127.0.0.1:58586 (7 connections now open)
 m31101| Wed Jun 13 10:37:23 [conn25] end connection 10.66.9.105:58816 (6 connections now open)
 m31101| Wed Jun 13 10:37:23 [conn3] end connection 127.0.0.1:58584 (6 connections now open)
 m31101| Wed Jun 13 10:37:23 [conn6] end connection 10.66.9.105:58606 (4 connections now open)
 m31101| Wed Jun 13 10:37:23 [conn7] end connection 10.66.9.105:58608 (4 connections now open)
 m31101| Wed Jun 13 10:37:23 [conn26] end connection 10.66.9.105:58818 (2 connections now open)
 m31101| Wed Jun 13 10:37:23 [conn27] closeAllFiles() finished
 m31101| Wed Jun 13 10:37:23 [conn27] shutdown: removing fs lock...
 m31101| Wed Jun 13 10:37:23 dbexit: really exiting now
 m32700| Wed Jun 13 10:37:24 [conn13] end connection 10.66.9.105:58805 (5 connections now open)
 m32700| Wed Jun 13 10:37:24 [initandlisten] connection accepted from 10.66.9.105:58837 #20 (6 connections now open)
 m32700| Wed Jun 13 10:37:24 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31100 failed couldn't connect to server ip-0A420969:31100
 m32700| Wed Jun 13 10:37:24 [ReplicaSetMonitorWatcher] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 10.66.9.105:31101
Wed Jun 13 10:37:24 [ReplicaSetMonitorWatcher] warning: No primary detected for set remove2-rs1
Wed Jun 13 10:37:24 [ReplicaSetMonitorWatcher] All nodes for set remove2-rs1 are down. This has happened for 9 checks in a row. Polling will stop after 21 more failed checks
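The watcher lines above show the give-up logic at work: 9 consecutive failed checks so far, polling to stop after 21 more, i.e. 30 in a row. A minimal sketch of that counter logic (a hypothetical illustration, not MongoDB's actual ReplicaSetMonitor implementation):

```python
MAX_CONSECUTIVE_FAILURES = 30  # 9 observed + 21 remaining, per the log line above

def poll_until_up_or_give_up(check, max_failures=MAX_CONSECUTIVE_FAILURES):
    """Return True as soon as check() succeeds; return False (stop
    polling) after max_failures consecutive failures."""
    failures = 0
    while failures < max_failures:
        if check():
            return True
        failures += 1
    return False

# With every node down, polling stops after the failure budget is spent.
print(poll_until_up_or_give_up(lambda: False, max_failures=3))  # False
```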
Wed Jun 13 10:37:24 shell: stopped mongo program on port 31101
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
ReplSetTest n: 0 ports: [ 31200, 31201 ]	31200 number
ReplSetTest stop *** Shutting down mongod in port 31200 ***
Wed Jun 13 10:37:25 No db started on port: 31200
Wed Jun 13 10:37:25 shell: stopped mongo program on port 31200
ReplSetTest n: 1 ports: [ 31200, 31201 ]	31201 number
ReplSetTest stop *** Shutting down mongod in port 31201 ***
Wed Jun 13 10:37:25 No db started on port: 31201
Wed Jun 13 10:37:25 shell: stopped mongo program on port 31201
ReplSetTest stopSet deleting all dbpaths
Wed Jun 13 10:37:25 Error: boost::filesystem::remove: The process cannot access the file because it is being used by another process: "\data\db\remove2-rs1-0\local.0" (anon):1
failed to load: C:\10gen\buildslaves\mongo\Windows_32bit\mongo\jstests\sharding\remove2.js
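The boost::filesystem::remove error above is the proximate cause of the test failure: on Windows a file cannot be deleted while another process still holds an open handle to it, and here a mongod that has not fully exited still holds `local.0` when ReplSetTest deletes the dbpaths. A hedged sketch of a retry-on-sharing-violation workaround (a hypothetical helper; this is not what remove2.js or ReplSetTest actually do):

```python
import errno
import os
import shutil
import tempfile
import time

def remove_dbpath_with_retry(path, attempts=5, delay=0.5):
    """Remove a dbpath directory, retrying on transient 'file in use'
    errors such as Windows sharing violations (seen as EACCES)."""
    for _ in range(attempts):
        try:
            shutil.rmtree(path)
            return True
        except OSError as exc:
            if exc.errno not in (errno.EACCES, errno.EBUSY):
                raise  # a real error, not a transient lock
            time.sleep(delay)  # let the exiting process close its handles
    return False

# Usage: a throwaway dbpath with a dummy local.0 removes cleanly
# once no process holds the file open.
dbpath = tempfile.mkdtemp(prefix="remove2-rs1-0-")
open(os.path.join(dbpath, "local.0"), "wb").close()
print(remove_dbpath_with_retry(dbpath))
```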
 m29000| Wed Jun 13 10:37:25 [initandlisten] connection accepted from 127.0.0.1:58838 #17 (4 connections now open)
 m29000| Wed Jun 13 10:37:25 [conn17] terminating, shutdown command received
 m29000| Wed Jun 13 10:37:25 dbexit: shutdown called
 m29000| Wed Jun 13 10:37:25 [conn17] shutdown: going to close listening sockets...
 m29000| Wed Jun 13 10:37:25 [conn17] closing listening socket: 452
 m29000| Wed Jun 13 10:37:25 [conn17] closing listening socket: 460
 m29000| Wed Jun 13 10:37:25 [conn17] shutdown: going to flush diaglog...
 m29000| Wed Jun 13 10:37:25 [conn17] shutdown: going to close sockets...
 m29000| Wed Jun 13 10:37:25 [conn17] shutdown: waiting for fs preallocator...
 m29000| Wed Jun 13 10:37:25 [conn17] shutdown: closing all files...
Wed Jun 13 10:37:25 DBClientCursor::init call() failed
 m29000| Wed Jun 13 10:37:25 [conn17] closeAllFiles() finished
 m29000| Wed Jun 13 10:37:25 [conn17] shutdown: removing fs lock...
 m29000| Wed Jun 13 10:37:25 [conn1] end connection 127.0.0.1:58623 (3 connections now open)
 m29000| Wed Jun 13 10:37:25 [conn2] end connection 10.66.9.105:58625 (2 connections now open)
 m29000| Wed Jun 13 10:37:25 dbexit: really exiting now
 m32700| Wed Jun 13 10:37:25 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31100
 m32700| Wed Jun 13 10:37:26 [initandlisten] connection accepted from 127.0.0.1:58840 #21 (7 connections now open)
 m32700| Wed Jun 13 10:37:26 [conn21] terminating, shutdown command received
 m32700| Wed Jun 13 10:37:26 dbexit: shutdown called
 m32700| Wed Jun 13 10:37:26 [conn21] shutdown: going to close listening sockets...
 m32700| Wed Jun 13 10:37:26 [conn21] closing listening socket: 504
 m32700| Wed Jun 13 10:37:26 [conn21] closing listening socket: 512
 m32700| Wed Jun 13 10:37:26 [conn21] shutdown: going to flush diaglog...
 m32700| Wed Jun 13 10:37:26 [conn21] shutdown: going to close sockets...
 m32700| Wed Jun 13 10:37:26 [conn21] shutdown: waiting for fs preallocator...
 m32700| Wed Jun 13 10:37:26 [conn21] shutdown: closing all files...
 m32701| Wed Jun 13 10:37:26 [conn11] end connection 10.66.9.105:58827 (3 connections now open)
 m32701| Wed Jun 13 10:37:26 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: ip-0A420969:32700
 m32701| Wed Jun 13 10:37:26 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: ip-0A420969:32700
Wed Jun 13 10:37:26 DBClientCursor::init call() failed
 m32700| Wed Jun 13 10:37:26 [conn2] end connection 127.0.0.1:58768 (6 connections now open)
 m32700| Wed Jun 13 10:37:26 [conn3] end connection 127.0.0.1:58766 (5 connections now open)
 m32700| Wed Jun 13 10:37:26 [ReplicaSetMonitorWatcher] reconnect ip-0A420969:31100 failed couldn't connect to server ip-0A420969:31100
 m32700| Wed Jun 13 10:37:26 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception ip-0A420969:31100 socket exception
 m32700| Wed Jun 13 10:37:26 [ReplicaSetMonitorWatcher] trying reconnect to ip-0A420969:31101
 m32700| Wed Jun 13 10:37:26 [conn20] end connection 10.66.9.105:58837 (4 connections now open)
 m32700| Wed Jun 13 10:37:26 [conn21] closeAllFiles() finished
 m32700| Wed Jun 13 10:37:26 [conn21] shutdown: removing fs lock...
 m32700| Wed Jun 13 10:37:26 dbexit: really exiting now
 m32701| Wed Jun 13 10:37:27 [initandlisten] connection accepted from 127.0.0.1:58841 #13 (4 connections now open)
 m32701| Wed Jun 13 10:37:27 [conn13] terminating, shutdown command received
 m32701| Wed Jun 13 10:37:27 dbexit: shutdown called
 m32701| Wed Jun 13 10:37:27 [conn13] shutdown: going to close listening sockets...
 m32701| Wed Jun 13 10:37:27 [conn13] closing listening socket: 512
 m32701| Wed Jun 13 10:37:27 [conn13] closing listening socket: 540
 m32701| Wed Jun 13 10:37:27 [conn13] shutdown: going to flush diaglog...
 m32701| Wed Jun 13 10:37:27 [conn13] shutdown: going to close sockets...
 m32701| Wed Jun 13 10:37:27 [conn13] shutdown: waiting for fs preallocator...
 m32701| Wed Jun 13 10:37:27 [conn13] shutdown: closing all files...
Wed Jun 13 10:37:27 DBClientCursor::init call() failed
                343061.000109ms
test C:\10gen\buildslaves\mongo\Windows_32bit\mongo\jstests\sharding\remove2.js exited with status -3
0 tests succeeded
The following tests failed (with exit code):
C:\10gen\buildslaves\mongo\Windows_32bit\mongo\jstests\sharding\remove2.js	-3
Traceback (most recent call last):
  File "C:\10gen\buildslaves\mongo\Windows_32bit\mongo\buildscripts\smoke.py", line 782, in <module>
    main()
  File "C:\10gen\buildslaves\mongo\Windows_32bit\mongo\buildscripts\smoke.py", line 755, in main
    run_old_fails()
  File "C:\10gen\buildslaves\mongo\Windows_32bit\mongo\buildscripts\smoke.py", line 651, in run_old_fails
    report() # exits with failure code if there is an error
  File "C:\10gen\buildslaves\mongo\Windows_32bit\mongo\buildscripts\smoke.py", line 490, in report
    raise Exception("Test failures")
Exception: Test failures
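The traceback above shows smoke.py's `report()` raising so that the failure propagates out of the test runner and scons records an error. A simplified sketch of that summarize-then-raise pattern (a hypothetical reconstruction, not smoke.py's actual code):

```python
def report(winners, losers):
    """Print a pass/fail summary; raise if anything failed so the
    caller (and ultimately scons) exits with a failure code."""
    print("%d tests succeeded" % len(winners))
    if losers:
        print("The following tests failed (with exit code):")
        for test, code in losers.items():
            print("%s\t%d" % (test, code))
        raise Exception("Test failures")

# The failing test and exit code from this run:
failed = {r"jstests\sharding\remove2.js": -3}
```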
scons: building terminated because of errors.
scons: *** [smokeFailingTests] Error 1
program finished with exit code 2
elapsedTime=539.194000