[SERVER-7738] jstests/sharding/gle_error_message.js failing in all Windows builds Created: 21/Nov/12  Updated: 11/Jul/16  Resolved: 23/Nov/12

Status: Closed
Project: Core Server
Component/s: Testing Infrastructure
Affects Version/s: None
Fix Version/s: 2.3.1

Type: Bug
Priority: Major - P3
Reporter: Tad Marshall
Assignee: Tad Marshall
Resolution: Done
Votes: 0
Labels: buildbot
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment: All Windows build slaves (32-bit and 64-bit, debug and release)

Operating System: Windows

Description

Failure on the Windows 64-bit builder:

http://buildlogs.mongodb.org/Windows%2064-bit/builds/5256/test/sharding/gle_error_message.js
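The full log of the failing run follows. (For local reproduction it should be enough to run the test directly against a shell built at the same revision, e.g. mongo --nodb jstests/sharding/gle_error_message.js, or via buildscripts/smoke.py; the exact invocation is an assumption here, not something taken from the build log.)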

 
Wed Nov 21 00:54:41.672 [conn41] end connection 127.0.0.1:57498 (0 connections now open)
Wed Nov 21 00:55:06.538 [initandlisten] connection accepted from 127.0.0.1:57555 #42 (1 connection now open)
MongoDB shell version: 2.3.1-pre-
null
 
 
----
Starting sharded cluster...
----
 
 
Resetting db path '/data/db/test0'
Wed Nov 21 00:55:06.959 shell: started program mongod.exe --port 30000 --dbpath /data/db/test0
 m30000| Wed Nov 21 00:55:06.991 [initandlisten] MongoDB starting : pid=10904 port=30000 dbpath=/data/db/test0 64-bit host=AMAZONA-JD1A6QA
 m30000| Wed Nov 21 00:55:06.991 [initandlisten] 
 m30000| Wed Nov 21 00:55:06.991 [initandlisten] ** NOTE: This is a development version (2.3.1-pre-) of MongoDB.
 m30000| Wed Nov 21 00:55:06.991 [initandlisten] **       Not recommended for production.
 m30000| Wed Nov 21 00:55:06.991 [initandlisten] 
 m30000| Wed Nov 21 00:55:06.991 [initandlisten] db version v2.3.1-pre-, pdfile version 4.5
 m30000| Wed Nov 21 00:55:06.991 [initandlisten] git version: cb14cf83102cca848e572ffd8b95c6fa35594175
 m30000| Wed Nov 21 00:55:06.991 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
 m30000| Wed Nov 21 00:55:06.991 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
 m30000| Wed Nov 21 00:55:07.006 [initandlisten] journal dir=/data/db/test0\journal
 m30000| Wed Nov 21 00:55:07.006 [initandlisten] recover : no journal files present, no recovery needed
 m30000| Wed Nov 21 00:55:07.147 [websvr] admin web console waiting for connections on port 31000
 m30000| Wed Nov 21 00:55:07.147 [initandlisten] waiting for connections on port 30000
 m30000| Wed Nov 21 00:55:07.474 [initandlisten] connection accepted from 127.0.0.1:57564 #1 (1 connection now open)
Resetting db path '/data/db/test1'
Wed Nov 21 00:55:07.490 shell: started program mongod.exe --port 30001 --dbpath /data/db/test1
 m30001| Wed Nov 21 00:55:07.521 [initandlisten] MongoDB starting : pid=2836 port=30001 dbpath=/data/db/test1 64-bit host=AMAZONA-JD1A6QA
 m30001| Wed Nov 21 00:55:07.521 [initandlisten] 
 m30001| Wed Nov 21 00:55:07.521 [initandlisten] ** NOTE: This is a development version (2.3.1-pre-) of MongoDB.
 m30001| Wed Nov 21 00:55:07.521 [initandlisten] **       Not recommended for production.
 m30001| Wed Nov 21 00:55:07.521 [initandlisten] 
 m30001| Wed Nov 21 00:55:07.521 [initandlisten] db version v2.3.1-pre-, pdfile version 4.5
 m30001| Wed Nov 21 00:55:07.521 [initandlisten] git version: cb14cf83102cca848e572ffd8b95c6fa35594175
 m30001| Wed Nov 21 00:55:07.521 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
 m30001| Wed Nov 21 00:55:07.521 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001 }
 m30001| Wed Nov 21 00:55:07.537 [initandlisten] journal dir=/data/db/test1\journal
 m30001| Wed Nov 21 00:55:07.537 [initandlisten] recover : no journal files present, no recovery needed
 m30001| Wed Nov 21 00:55:07.677 [websvr] admin web console waiting for connections on port 31001
 m30001| Wed Nov 21 00:55:07.677 [initandlisten] waiting for connections on port 30001
 m30001| Wed Nov 21 00:55:08.005 [initandlisten] connection accepted from 127.0.0.1:57565 #1 (1 connection now open)
Resetting db path '/data/db/test2'
Wed Nov 21 00:55:08.005 shell: started program mongod.exe --port 30002 --dbpath /data/db/test2
 m30002| Wed Nov 21 00:55:08.036 [initandlisten] MongoDB starting : pid=10968 port=30002 dbpath=/data/db/test2 64-bit host=AMAZONA-JD1A6QA
 m30002| Wed Nov 21 00:55:08.036 [initandlisten] 
 m30002| Wed Nov 21 00:55:08.036 [initandlisten] ** NOTE: This is a development version (2.3.1-pre-) of MongoDB.
 m30002| Wed Nov 21 00:55:08.036 [initandlisten] **       Not recommended for production.
 m30002| Wed Nov 21 00:55:08.036 [initandlisten] 
 m30002| Wed Nov 21 00:55:08.036 [initandlisten] db version v2.3.1-pre-, pdfile version 4.5
 m30002| Wed Nov 21 00:55:08.036 [initandlisten] git version: cb14cf83102cca848e572ffd8b95c6fa35594175
 m30002| Wed Nov 21 00:55:08.036 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
 m30002| Wed Nov 21 00:55:08.036 [initandlisten] options: { dbpath: "/data/db/test2", port: 30002 }
 m30002| Wed Nov 21 00:55:08.051 [initandlisten] journal dir=/data/db/test2\journal
 m30002| Wed Nov 21 00:55:08.051 [initandlisten] recover : no journal files present, no recovery needed
 m30002| Wed Nov 21 00:55:08.285 [websvr] admin web console waiting for connections on port 31002
 m30002| Wed Nov 21 00:55:08.285 [initandlisten] waiting for connections on port 30002
 m30002| Wed Nov 21 00:55:08.519 [initandlisten] connection accepted from 127.0.0.1:57566 #1 (1 connection now open)
Resetting db path '/data/db/test-config0'
Wed Nov 21 00:55:08.519 shell: started program mongod.exe --port 29000 --dbpath /data/db/test-config0 --configsvr
 m29000| Wed Nov 21 00:55:08.551 [initandlisten] MongoDB starting : pid=2868 port=29000 dbpath=/data/db/test-config0 64-bit host=AMAZONA-JD1A6QA
 m29000| Wed Nov 21 00:55:08.551 [initandlisten] 
 m29000| Wed Nov 21 00:55:08.551 [initandlisten] ** NOTE: This is a development version (2.3.1-pre-) of MongoDB.
 m29000| Wed Nov 21 00:55:08.551 [initandlisten] **       Not recommended for production.
 m29000| Wed Nov 21 00:55:08.551 [initandlisten] 
 m29000| Wed Nov 21 00:55:08.551 [initandlisten] db version v2.3.1-pre-, pdfile version 4.5
 m29000| Wed Nov 21 00:55:08.551 [initandlisten] git version: cb14cf83102cca848e572ffd8b95c6fa35594175
 m29000| Wed Nov 21 00:55:08.551 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
 m29000| Wed Nov 21 00:55:08.551 [initandlisten] options: { configsvr: true, dbpath: "/data/db/test-config0", port: 29000 }
 m29000| Wed Nov 21 00:55:08.551 [initandlisten] journal dir=/data/db/test-config0\journal
 m29000| Wed Nov 21 00:55:08.551 [initandlisten] recover : no journal files present, no recovery needed
 m29000| Wed Nov 21 00:55:08.707 [initandlisten] waiting for connections on port 29000
 m29000| Wed Nov 21 00:55:08.707 [websvr] listen(): bind() failed errno:10048 Only one usage of each socket address (protocol/network address/port) is normally permitted. for socket: 0.0.0.0:30000
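(Note: errno 10048 is WSAEADDRINUSE. The config server was started on port 29000, and the mongod admin web console defaults to the service port plus 1000, so it tries to bind 29000 + 1000 = 30000, which the first shard mongod above already owns. The config server itself still comes up on 29000, so this bind failure is expected noise from the test's port layout and not necessarily the failure under investigation.)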
 m29000| Wed Nov 21 00:55:09.034 [initandlisten] connection accepted from 127.0.0.1:57567 #1 (1 connection now open)
"localhost:29000"
 m29000| Wed Nov 21 00:55:09.034 [initandlisten] connection accepted from 127.0.0.1:57568 #2 (2 connections now open)
ShardingTest test :
{
	"config" : "localhost:29000",
	"shards" : [
		connection to localhost:30000,
		connection to localhost:30001,
		connection to localhost:30002
	]
}
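The cluster above is consistent with a ShardingTest constructed roughly as follows. This is a sketch inferred from the logged options (3 shards, 2 mongos on 30999/30998, one config server, chunkSize 50, verbose logging); the actual invocation in jstests/sharding/gle_error_message.js may differ:

    // Sketch only, inferred from the log above; not the verbatim test source.
    var st = new ShardingTest({ shards : 3,
                                mongos : 2,
                                verbose : 3,
                                other : { chunkSize : 50 } });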
Wed Nov 21 00:55:09.034 shell: started program mongos.exe --port 30999 --configdb localhost:29000 -vvv --chunkSize 50
 m30999| Wed Nov 21 00:55:09.034 running with 1 config server should be done only for testing purposes and is not recommended for production
 m30999| Wed Nov 21 00:55:09.050 [mongosMain] MongoS version 2.3.1-pre- starting: pid=6732 port=30999 64-bit host=AMAZONA-JD1A6QA (--help for usage)
 m30999| Wed Nov 21 00:55:09.050 [mongosMain] git version: cb14cf83102cca848e572ffd8b95c6fa35594175
 m30999| Wed Nov 21 00:55:09.050 [mongosMain] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
 m30999| Wed Nov 21 00:55:09.050 [mongosMain] options: { chunkSize: 50, configdb: "localhost:29000", port: 30999, vvv: true }
 m30999| Wed Nov 21 00:55:09.050 [mongosMain]  config string : localhost:29000
 m30999| Wed Nov 21 00:55:09.050 [mongosMain] creating new connection to:localhost:29000
 m30999| Wed Nov 21 00:55:09.050 BackgroundJob starting: ConnectBG
 m29000| Wed Nov 21 00:55:09.050 [initandlisten] connection accepted from 127.0.0.1:57571 #3 (3 connections now open)
 m30999| Wed Nov 21 00:55:09.065 [mongosMain] connected connection!
 m30999| Wed Nov 21 00:55:09.065 BackgroundJob starting: CheckConfigServers
 m30999| Wed Nov 21 00:55:09.065 [CheckConfigServers] creating new connection to:localhost:29000
 m30999| Wed Nov 21 00:55:09.065 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:09.065 [CheckConfigServers] connected connection!
 m29000| Wed Nov 21 00:55:09.065 [initandlisten] connection accepted from 127.0.0.1:57572 #4 (4 connections now open)
 m30999| Wed Nov 21 00:55:09.065 [mongosMain] creating new connection to:localhost:29000
 m30999| Wed Nov 21 00:55:09.065 BackgroundJob starting: ConnectBG
 m29000| Wed Nov 21 00:55:09.065 [initandlisten] connection accepted from 127.0.0.1:57573 #5 (5 connections now open)
 m30999| Wed Nov 21 00:55:09.065 [mongosMain] connected connection!
 m29000| Wed Nov 21 00:55:09.065 [FileAllocator] allocating new datafile /data/db/test-config0\config.ns, filling with zeroes...
 m29000| Wed Nov 21 00:55:09.065 [FileAllocator] creating directory /data/db/test-config0\_tmp
 m29000| Wed Nov 21 00:55:09.112 [FileAllocator] done allocating datafile /data/db/test-config0\config.ns, size: 16MB,  took 0.047 secs
 m29000| Wed Nov 21 00:55:09.112 [FileAllocator] allocating new datafile /data/db/test-config0\config.0, filling with zeroes...
 m29000| Wed Nov 21 00:55:09.159 [FileAllocator] done allocating datafile /data/db/test-config0\config.0, size: 16MB,  took 0.047 secs
 m29000| Wed Nov 21 00:55:09.159 [FileAllocator] allocating new datafile /data/db/test-config0\config.1, filling with zeroes...
 m29000| Wed Nov 21 00:55:09.159 [conn5] build index config.version { _id: 1 }
 m29000| Wed Nov 21 00:55:09.159 [conn5] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Nov 21 00:55:09.159 [conn3] build index config.settings { _id: 1 }
 m30999| Wed Nov 21 00:55:09.159 BackgroundJob starting: Balancer
 m30999| Wed Nov 21 00:55:09.159 [Balancer] about to contact config servers and shards
 m30999| Wed Nov 21 00:55:09.159 [websvr] admin web console waiting for connections on port 31999
 m30999| Wed Nov 21 00:55:09.159 [mongosMain] waiting for connections on port 30999
 m30999| Wed Nov 21 00:55:09.159 BackgroundJob starting: cursorTimeout
 m30999| Wed Nov 21 00:55:09.159 BackgroundJob starting: PeriodicTask::Runner
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index done.  scanned 0 total records. 0.006 secs
 m30999| Wed Nov 21 00:55:09.175 [Balancer] config servers and shards contacted successfully
 m30999| Wed Nov 21 00:55:09.175 [Balancer] balancer id: AMAZONA-JD1A6QA:30999 started at Nov 21 00:55:09
 m30999| Wed Nov 21 00:55:09.175 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m30999| Wed Nov 21 00:55:09.175 [Balancer] creating new connection to:localhost:29000
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index config.chunks { _id: 1 }
 m30999| Wed Nov 21 00:55:09.175 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:09.175 [Balancer] connected connection!
 m29000| Wed Nov 21 00:55:09.175 [initandlisten] connection accepted from 127.0.0.1:57575 #6 (6 connections now open)
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Nov 21 00:55:09.175 [conn3] info: creating collection config.chunks on add index
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index config.chunks { ns: 1, min: 1 }
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Nov 21 00:55:09.175 [conn5] build index config.mongos { _id: 1 }
 m29000| Wed Nov 21 00:55:09.175 [conn5] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
 m30999| Wed Nov 21 00:55:09.175 [Balancer] Refreshing MaxChunkSize: 50
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index config.shards { _id: 1 }
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Nov 21 00:55:09.175 [conn3] info: creating collection config.shards on add index
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index config.shards { host: 1 }
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index done.  scanned 0 total records. 0 secs
 m30999| Wed Nov 21 00:55:09.175 [Balancer] skew from remote server localhost:29000 found: 0
 m30999| Wed Nov 21 00:55:09.175 [Balancer] skew from remote server localhost:29000 found: 0
 m30999| Wed Nov 21 00:55:09.175 [Balancer] skew from remote server localhost:29000 found: 0
 m30999| Wed Nov 21 00:55:09.175 [Balancer] total clock skew of 0ms for servers localhost:29000 is in 30000ms bounds.
 m30999| Wed Nov 21 00:55:09.175 [LockPinger] creating distributed lock ping thread for localhost:29000 and process AMAZONA-JD1A6QA:30999:1353477309:41 (sleeping for 30000ms)
 m30999| Wed Nov 21 00:55:09.175 [LockPinger] distributed lock pinger 'localhost:29000/AMAZONA-JD1A6QA:30999:1353477309:41' about to ping.
 m30999| Wed Nov 21 00:55:09.175 [Balancer] inserting initial doc in config.locks for lock balancer
 m30999| Wed Nov 21 00:55:09.175 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-JD1A6QA:30999:1353477309:41:
 m30999| { "state" : 1,
 m30999|   "who" : "AMAZONA-JD1A6QA:30999:1353477309:41:Balancer:18467",
 m30999|   "process" : "AMAZONA-JD1A6QA:30999:1353477309:41",
 m30999|   "when" : { "$date" : "Wed Nov 21 00:55:09 2012" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "50ac6cbdfaf2e30ed017368e" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0 }
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index config.lockpings { _id: 1 }
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index done.  scanned 0 total records. 0 secs
 m29000| Wed Nov 21 00:55:09.175 [conn6] build index config.locks { _id: 1 }
 m29000| Wed Nov 21 00:55:09.175 [conn6] build index done.  scanned 0 total records. 0 secs
 m30999| Wed Nov 21 00:55:09.175 [LockPinger] cluster localhost:29000 pinged successfully at Wed Nov 21 00:55:09 2012 by distributed lock pinger 'localhost:29000/AMAZONA-JD1A6QA:30999:1353477309:41', sleeping for 30000ms
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index config.lockpings { ping: new Date(1) }
 m29000| Wed Nov 21 00:55:09.175 [conn3] build index done.  scanned 1 total records. 0 secs
 m30999| Wed Nov 21 00:55:09.175 [Balancer] distributed lock 'balancer/AMAZONA-JD1A6QA:30999:1353477309:41' acquired, ts : 50ac6cbdfaf2e30ed017368e
 m30999| Wed Nov 21 00:55:09.175 [Balancer] *** start balancing round
 m30999| Wed Nov 21 00:55:09.175 [Balancer] no collections to balance
 m30999| Wed Nov 21 00:55:09.175 [Balancer] no need to move any chunk
 m30999| Wed Nov 21 00:55:09.175 [Balancer] *** end of balancing round
 m30999| Wed Nov 21 00:55:09.175 [Balancer] distributed lock 'balancer/AMAZONA-JD1A6QA:30999:1353477309:41' unlocked. 
 m29000| Wed Nov 21 00:55:09.253 [FileAllocator] done allocating datafile /data/db/test-config0\config.1, size: 32MB,  took 0.094 secs
 m30999| Wed Nov 21 00:55:09.549 [mongosMain] connection accepted from 127.0.0.1:57570 #1 (1 connection now open)
 m30999| Wed Nov 21 00:55:09.549 [conn1] couldn't find database [admin] in config db
 m29000| Wed Nov 21 00:55:09.549 [conn3] build index config.databases { _id: 1 }
Wed Nov 21 00:55:09.549 shell: started program mongos.exe --port 30998 --configdb localhost:29000 -vvv --chunkSize 50
 m29000| Wed Nov 21 00:55:09.549 [conn3] build index done.  scanned 0 total records. 0 secs
 m30999| Wed Nov 21 00:55:09.549 [conn1] 	 put [admin] on: config:localhost:29000
 m30999| Wed Nov 21 00:55:09.549 [conn1] scoped connection to localhost:29000 not being returned to the pool
 m29000| Wed Nov 21 00:55:09.549 [conn3] end connection 127.0.0.1:57571 (5 connections now open)
 m30998| Wed Nov 21 00:55:09.549 running with 1 config server should be done only for testing purposes and is not recommended for production
 m30998| Wed Nov 21 00:55:09.549 [mongosMain] MongoS version 2.3.1-pre- starting: pid=7740 port=30998 64-bit host=AMAZONA-JD1A6QA (--help for usage)
 m30998| Wed Nov 21 00:55:09.549 [mongosMain] git version: cb14cf83102cca848e572ffd8b95c6fa35594175
 m30998| Wed Nov 21 00:55:09.549 [mongosMain] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
 m30998| Wed Nov 21 00:55:09.549 [mongosMain] options: { chunkSize: 50, configdb: "localhost:29000", port: 30998, vvv: true }
 m30998| Wed Nov 21 00:55:09.549 [mongosMain]  config string : localhost:29000
 m30998| Wed Nov 21 00:55:09.549 [mongosMain] creating new connection to:localhost:29000
 m30998| Wed Nov 21 00:55:09.549 BackgroundJob starting: ConnectBG
 m29000| Wed Nov 21 00:55:09.549 [initandlisten] connection accepted from 127.0.0.1:57577 #7 (6 connections now open)
 m30998| Wed Nov 21 00:55:09.565 [mongosMain] connected connection!
 m30998| Wed Nov 21 00:55:09.565 BackgroundJob starting: CheckConfigServers
 m30998| Wed Nov 21 00:55:09.565 [CheckConfigServers] creating new connection to:localhost:29000
 m30998| Wed Nov 21 00:55:09.565 BackgroundJob starting: ConnectBG
 m30998| Wed Nov 21 00:55:09.565 [mongosMain] MaxChunkSize: 50
 m29000| Wed Nov 21 00:55:09.565 [initandlisten] connection accepted from 127.0.0.1:57578 #8 (7 connections now open)
 m30998| Wed Nov 21 00:55:09.565 [CheckConfigServers] connected connection!
 m30998| Wed Nov 21 00:55:09.565 BackgroundJob starting: Balancer
 m30998| Wed Nov 21 00:55:09.565 [Balancer] about to contact config servers and shards
 m30998| Wed Nov 21 00:55:09.565 [Balancer] creating new connection to:localhost:29000
 m30998| Wed Nov 21 00:55:09.565 BackgroundJob starting: cursorTimeout
 m30998| Wed Nov 21 00:55:09.565 [mongosMain] waiting for connections on port 30998
 m30998| Wed Nov 21 00:55:09.565 [websvr] admin web console waiting for connections on port 31998
 m30998| Wed Nov 21 00:55:09.565 BackgroundJob starting: PeriodicTask::Runner
 m30998| Wed Nov 21 00:55:09.565 BackgroundJob starting: ConnectBG
 m29000| Wed Nov 21 00:55:09.565 [initandlisten] connection accepted from 127.0.0.1:57579 #9 (8 connections now open)
 m30998| Wed Nov 21 00:55:09.565 [Balancer] connected connection!
 m30998| Wed Nov 21 00:55:09.565 [Balancer] config servers and shards contacted successfully
 m30998| Wed Nov 21 00:55:09.565 [Balancer] balancer id: AMAZONA-JD1A6QA:30998 started at Nov 21 00:55:09
 m30998| Wed Nov 21 00:55:09.565 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m30998| Wed Nov 21 00:55:09.565 [Balancer] creating new connection to:localhost:29000
 m30998| Wed Nov 21 00:55:09.565 BackgroundJob starting: ConnectBG
 m30998| Wed Nov 21 00:55:09.565 [Balancer] connected connection!
 m29000| Wed Nov 21 00:55:09.565 [initandlisten] connection accepted from 127.0.0.1:57580 #10 (9 connections now open)
 m30998| Wed Nov 21 00:55:09.565 [Balancer] Refreshing MaxChunkSize: 50
 m30998| Wed Nov 21 00:55:09.565 [Balancer] skew from remote server localhost:29000 found: 0
 m30998| Wed Nov 21 00:55:09.565 [Balancer] skew from remote server localhost:29000 found: 0
 m30998| Wed Nov 21 00:55:09.565 [Balancer] skew from remote server localhost:29000 found: 0
 m30998| Wed Nov 21 00:55:09.565 [Balancer] total clock skew of 0ms for servers localhost:29000 is in 30000ms bounds.
 m30998| Wed Nov 21 00:55:09.565 [LockPinger] creating distributed lock ping thread for localhost:29000 and process AMAZONA-JD1A6QA:30998:1353477309:41 (sleeping for 30000ms)
 m30998| Wed Nov 21 00:55:09.565 [LockPinger] distributed lock pinger 'localhost:29000/AMAZONA-JD1A6QA:30998:1353477309:41' about to ping.
 m30998| Wed Nov 21 00:55:09.565 [Balancer] about to acquire distributed lock 'balancer/AMAZONA-JD1A6QA:30998:1353477309:41:
 m30998| { "state" : 1,
 m30998|   "who" : "AMAZONA-JD1A6QA:30998:1353477309:41:Balancer:18467",
 m30998|   "process" : "AMAZONA-JD1A6QA:30998:1353477309:41",
 m30998|   "when" : { "$date" : "Wed Nov 21 00:55:09 2012" },
 m30998|   "why" : "doing balance round",
 m30998|   "ts" : { "$oid" : "50ac6cbd7da0aefafa1c6ae0" } }
 m30998| { "_id" : "balancer",
 m30998|   "state" : 0,
 m30998|   "ts" : { "$oid" : "50ac6cbdfaf2e30ed017368e" } }
 m30998| Wed Nov 21 00:55:09.565 [LockPinger] cluster localhost:29000 pinged successfully at Wed Nov 21 00:55:09 2012 by distributed lock pinger 'localhost:29000/AMAZONA-JD1A6QA:30998:1353477309:41', sleeping for 30000ms
 m30998| Wed Nov 21 00:55:09.565 [Balancer] distributed lock 'balancer/AMAZONA-JD1A6QA:30998:1353477309:41' acquired, ts : 50ac6cbd7da0aefafa1c6ae0
 m30998| Wed Nov 21 00:55:09.565 [Balancer] *** start balancing round
 m30998| Wed Nov 21 00:55:09.565 [Balancer] no collections to balance
 m30998| Wed Nov 21 00:55:09.565 [Balancer] no need to move any chunk
 m30998| Wed Nov 21 00:55:09.565 [Balancer] *** end of balancing round
 m30998| Wed Nov 21 00:55:09.565 [Balancer] distributed lock 'balancer/AMAZONA-JD1A6QA:30998:1353477309:41' unlocked. 
 m30998| Wed Nov 21 00:55:10.064 [mongosMain] connection accepted from 127.0.0.1:57576 #1 (1 connection now open)
 m30998| Wed Nov 21 00:55:10.064 [conn1] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
ShardingTest undefined going to add shard : localhost:30000
 m30998| Wed Nov 21 00:55:10.064 [conn1] found 0 dropped collections and 0 sharded collections for database admin
 m30998| Wed Nov 21 00:55:10.064 [conn1] scoped connection to localhost:29000 not being returned to the pool
 m29000| Wed Nov 21 00:55:10.064 [conn8] end connection 127.0.0.1:57578 (8 connections now open)
 m30999| Wed Nov 21 00:55:10.064 [conn1] Request::process begin ns: admin.$cmd msg id: 0 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.064 [conn1] single query: admin.$cmd  { addshard: "localhost:30000" }  ntoreturn: -1 options : 0
 m30999| Wed Nov 21 00:55:10.064 [conn1] creating new connection to:localhost:30000
 m30999| Wed Nov 21 00:55:10.064 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:10.064 [conn1] connected connection!
 m30000| Wed Nov 21 00:55:10.064 [initandlisten] connection accepted from 127.0.0.1:57581 #2 (2 connections now open)
 m30999| Wed Nov 21 00:55:10.064 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
 m30999| Wed Nov 21 00:55:10.064 [conn1] Request::process end ns: admin.$cmd msg id: 0 op: 2004 attempt: 0 2ms
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
 m30999| Wed Nov 21 00:55:10.064 [conn1] Request::process begin ns: admin.$cmd msg id: 1 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.064 [conn1] single query: admin.$cmd  { addshard: "localhost:30001" }  ntoreturn: -1 options : 0
 m30999| Wed Nov 21 00:55:10.064 [conn1] creating new connection to:localhost:30001
 m30999| Wed Nov 21 00:55:10.064 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:10.064 [conn1] connected connection!
 m30001| Wed Nov 21 00:55:10.064 [initandlisten] connection accepted from 127.0.0.1:57582 #2 (2 connections now open)
 m30999| Wed Nov 21 00:55:10.064 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
 m30999| Wed Nov 21 00:55:10.064 [conn1] Request::process end ns: admin.$cmd msg id: 1 op: 2004 attempt: 0 2ms
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
 m30999| Wed Nov 21 00:55:10.064 [conn1] Request::process begin ns: admin.$cmd msg id: 2 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.064 [conn1] single query: admin.$cmd  { addshard: "localhost:30002" }  ntoreturn: -1 options : 0
 m30999| Wed Nov 21 00:55:10.064 [conn1] creating new connection to:localhost:30002
 m30999| Wed Nov 21 00:55:10.064 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:10.064 [conn1] connected connection!
 m30002| Wed Nov 21 00:55:10.064 [initandlisten] connection accepted from 127.0.0.1:57583 #2 (2 connections now open)
 m30999| Wed Nov 21 00:55:10.064 [conn1] going to add shard: { _id: "shard0002", host: "localhost:30002" }
 m30999| Wed Nov 21 00:55:10.064 [conn1] Request::process end ns: admin.$cmd msg id: 2 op: 2004 attempt: 0 2ms
{ "shardAdded" : "shard0002", "ok" : 1 }
 m30999| Wed Nov 21 00:55:10.064 [conn1] Request::process begin ns: config.settings msg id: 3 op: 2001 attempt: 0
 m30999| Wed Nov 21 00:55:10.064 [conn1] write: config.settings
 m30999| Wed Nov 21 00:55:10.064 [conn1] creating new connection to:localhost:30000
 m30999| Wed Nov 21 00:55:10.064 BackgroundJob starting: ConnectBG
 m30000| Wed Nov 21 00:55:10.064 [initandlisten] connection accepted from 127.0.0.1:57585 #3 (3 connections now open)
 m30999| Wed Nov 21 00:55:10.064 [conn1] connected connection!
 m30999| Wed Nov 21 00:55:10.064 [conn1] creating WriteBackListener for: localhost:30000 serverID: 50ac6cbdfaf2e30ed017368d
 m30999| Wed Nov 21 00:55:10.064 [conn1] initializing shard connection to localhost:30000
 m30999| Wed Nov 21 00:55:10.064 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), authoritative: true }
 m30999| Wed Nov 21 00:55:10.064 BackgroundJob starting: WriteBackListener-localhost:30000
 m30999| Wed Nov 21 00:55:10.064 [conn1] initial sharding result : { initialized: true, ok: 1.0 }
 m30999| Wed Nov 21 00:55:10.064 [conn1] creating new connection to:localhost:30001
 m30999| Wed Nov 21 00:55:10.064 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:10.064 [conn1] connected connection!
 m30999| Wed Nov 21 00:55:10.064 [conn1] creating WriteBackListener for: localhost:30001 serverID: 50ac6cbdfaf2e30ed017368d
 m30001| Wed Nov 21 00:55:10.064 [initandlisten] connection accepted from 127.0.0.1:57586 #3 (3 connections now open)
 m30999| Wed Nov 21 00:55:10.064 [conn1] initializing shard connection to localhost:30001
 m30999| Wed Nov 21 00:55:10.064 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), authoritative: true }
 m30999| Wed Nov 21 00:55:10.064 BackgroundJob starting: WriteBackListener-localhost:30001
 m30999| Wed Nov 21 00:55:10.064 [conn1] initial sharding result : { initialized: true, ok: 1.0 }
 m30999| Wed Nov 21 00:55:10.064 [conn1] creating new connection to:localhost:30002
 m30999| Wed Nov 21 00:55:10.064 BackgroundJob starting: ConnectBG
 m30002| Wed Nov 21 00:55:10.064 [initandlisten] connection accepted from 127.0.0.1:57587 #3 (3 connections now open)
 m30999| Wed Nov 21 00:55:10.064 [conn1] connected connection!
 m30999| Wed Nov 21 00:55:10.064 [conn1] creating WriteBackListener for: localhost:30002 serverID: 50ac6cbdfaf2e30ed017368d
 m30999| Wed Nov 21 00:55:10.064 [conn1] initializing shard connection to localhost:30002
 m30999| Wed Nov 21 00:55:10.064 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), authoritative: true }
 m30999| Wed Nov 21 00:55:10.064 BackgroundJob starting: WriteBackListener-localhost:30002
 m30999| Wed Nov 21 00:55:10.064 [conn1] initial sharding result : { initialized: true, ok: 1.0 }
 m30999| Wed Nov 21 00:55:10.064 [conn1] creating new connection to:localhost:29000
 m30999| Wed Nov 21 00:55:10.064 BackgroundJob starting: ConnectBG
 m29000| Wed Nov 21 00:55:10.064 [initandlisten] connection accepted from 127.0.0.1:57588 #11 (9 connections now open)
 m30999| Wed Nov 21 00:55:10.064 [conn1] connected connection!
 m30999| Wed Nov 21 00:55:10.079 [conn1] creating WriteBackListener for: localhost:29000 serverID: 50ac6cbdfaf2e30ed017368d
Waiting for active hosts... m30999| Wed Nov 21 00:55:10.079 [conn1] initializing shard connection to localhost:29000
 
 m30999| Wed Nov 21 00:55:10.079 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), authoritative: true }
 m30999| Wed Nov 21 00:55:10.079 BackgroundJob starting: WriteBackListener-localhost:29000
 m30999| Wed Nov 21 00:55:10.079 [WriteBackListener-localhost:29000] localhost:29000 is not a shard node
 m30999| Wed Nov 21 00:55:10.079 [conn1] initial sharding result : { initialized: true, ok: 1.0 }
 m30999| Wed Nov 21 00:55:10.079 [conn1] Request::process end ns: config.settings msg id: 3 op: 2001 attempt: 0 4ms
 m30999| Wed Nov 21 00:55:10.079 [conn1] Request::process begin ns: config.mongos msg id: 4 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.079 [conn1] shard query: config.mongos  {}
 m30999| Wed Nov 21 00:55:10.079 [conn1] creating pcursor over QSpec { ns: "config.mongos", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
 m30999| Wed Nov 21 00:55:10.079 [conn1] initializing over 1 shards required by [unsharded @ config:localhost:29000]
 m30999| Wed Nov 21 00:55:10.079 [conn1] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.079 [conn1] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.079 [conn1] finishing over 1 shards
 m30999| Wed Nov 21 00:55:10.079 [conn1] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.079 [conn1] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "AMAZONA-JD1A6QA:30999", ping: new Date(1353477309175), up: 0, waiting: true }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
 m30999| Wed Nov 21 00:55:10.079 [conn1] Request::process end ns: config.mongos msg id: 4 op: 2004 attempt: 0 0ms
Waiting for the balancer lock...
 m30999| Wed Nov 21 00:55:10.142 [conn1] Request::process begin ns: config.locks msg id: 5 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.142 [conn1] shard query: config.locks  { _id: "balancer" }
 m30999| Wed Nov 21 00:55:10.142 [conn1] creating pcursor over QSpec { ns: "config.locks", n2skip: 0, n2return: -1, options: 0, query: { _id: "balancer" }, fields: {} } and CInfo { v_ns: "", filter: {} }
 m30999| Wed Nov 21 00:55:10.142 [conn1] initializing over 1 shards required by [unsharded @ config:localhost:29000]
 m30999| Wed Nov 21 00:55:10.142 [conn1] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.142 [conn1] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.142 [conn1] finishing over 1 shards
 m30999| Wed Nov 21 00:55:10.142 [conn1] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.142 [conn1] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "balancer", process: "AMAZONA-JD1A6QA:30998:1353477309:41", state: 0, ts: ObjectId('50ac6cbd7da0aefafa1c6ae0'), when: new Date(1353477309565), who: "AMAZONA-JD1A6QA:30998:1353477309:41:Balancer:18467", why: "doing balance round" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
Waiting again for active hosts after balancer is off...
 m30999| Wed Nov 21 00:55:10.142 [conn1] Request::process end ns: config.locks msg id: 5 op: 2004 attempt: 0 0ms
 m30999| Wed Nov 21 00:55:10.142 [conn1] Request::process begin ns: config.shards msg id: 6 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.142 [conn1] shard query: config.shards  {}
 m30999| Wed Nov 21 00:55:10.142 [conn1] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
 m30999| Wed Nov 21 00:55:10.142 [conn1] initializing over 1 shards required by [unsharded @ config:localhost:29000]
 m30999| Wed Nov 21 00:55:10.142 [conn1] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.142 [conn1] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.142 [conn1] finishing over 1 shards
 m30999| Wed Nov 21 00:55:10.142 [conn1] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.142 [conn1] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
 m30999| Wed Nov 21 00:55:10.142 [conn1] Request::process end ns: config.shards msg id: 6 op: 2004 attempt: 0 0ms
 
 
----
Enabling sharding...
----
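The setup that follows corresponds to admin commands along these lines, reconstructed from the single queries visible in the log below (st.s being the first mongos from the sketch above; the test's own helpers may wrap these calls differently):

    // Reconstructed from the log below; not verbatim test source.
    var admin = st.s.getDB( "admin" );
    admin.runCommand({ enableSharding : "gle_error_message" });
    admin.runCommand({ movePrimary : "gle_error_message", to : "shard0000" }); // fails benignly below: already primary
    admin.runCommand({ shardCollection : "gle_error_message.coll", key : { _id : 1 } });
    admin.runCommand({ split : "gle_error_message.coll", middle : { _id : 0 } });
    admin.runCommand({ moveChunk : "gle_error_message.coll", find : { _id : 0 }, to : "shard0001" });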
 
 
 m30999| Wed Nov 21 00:55:10.142 [conn1] Request::process begin ns: admin.$cmd msg id: 7 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.142 [conn1] single query: admin.$cmd  { enableSharding: "gle_error_message" }  ntoreturn: -1 options : 0
 m30999| Wed Nov 21 00:55:10.142 [conn1] couldn't find database [gle_error_message] in config db
 m30999| Wed Nov 21 00:55:10.142 [conn1] creating new connection to:localhost:30000
 m30999| Wed Nov 21 00:55:10.142 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:10.142 [conn1] connected connection!
 m30000| Wed Nov 21 00:55:10.142 [initandlisten] connection accepted from 127.0.0.1:57590 #4 (4 connections now open)
 m30999| Wed Nov 21 00:55:10.142 [conn1] creating new connection to:localhost:30001
 m30999| Wed Nov 21 00:55:10.142 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:10.142 [conn1] connected connection!
 m30001| Wed Nov 21 00:55:10.142 [initandlisten] connection accepted from 127.0.0.1:57591 #4 (4 connections now open)
 m30999| Wed Nov 21 00:55:10.142 [conn1] creating new connection to:localhost:30002
 m30999| Wed Nov 21 00:55:10.142 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:10.142 [conn1] connected connection!
 m30002| Wed Nov 21 00:55:10.142 [initandlisten] connection accepted from 127.0.0.1:57592 #4 (4 connections now open)
 m30999| Wed Nov 21 00:55:10.142 [conn1] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 0 writeLock: 0
 m30999| Wed Nov 21 00:55:10.142 [conn1] 	 put [gle_error_message] on: shard0000:localhost:30000
 m30999| Wed Nov 21 00:55:10.142 [conn1] enabling sharding on: gle_error_message
 m30999| Wed Nov 21 00:55:10.142 [conn1] Request::process end ns: admin.$cmd msg id: 7 op: 2004 attempt: 0 4ms
{ "ok" : 1 }
 m30999| Wed Nov 21 00:55:10.142 [conn1] Request::process begin ns: admin.$cmd msg id: 8 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.142 [conn1] single query: admin.$cmd  { movePrimary: "gle_error_message", to: "shard0000" }  ntoreturn: -1 options : 0
 m30999| Wed Nov 21 00:55:10.142 [conn1] DBConfig unserialize: gle_error_message { _id: "gle_error_message", partitioned: true, primary: "shard0000" }
 m30999| Wed Nov 21 00:55:10.142 [conn1] found 0 dropped collections and 0 sharded collections for database gle_error_message
 m30999| Wed Nov 21 00:55:10.142 [conn1] Request::process end ns: admin.$cmd msg id: 8 op: 2004 attempt: 0 1ms
{ "ok" : 0, "errmsg" : "it is already the primary" }
 m30999| Wed Nov 21 00:55:10.142 [conn1] DBConfig unserialize: admin { _id: "admin", partitioned: false, primary: "config" }
 m30999| Wed Nov 21 00:55:10.142 [conn1] found 0 dropped collections and 0 sharded collections for database admin
 m30999| Wed Nov 21 00:55:10.142 [conn1] Request::process begin ns: admin.$cmd msg id: 9 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.142 [conn1] single query: admin.$cmd  { shardCollection: "gle_error_message.coll", key: { _id: 1.0 } }  ntoreturn: -1 options : 0
 m30000| Wed Nov 21 00:55:10.142 [FileAllocator] allocating new datafile /data/db/test0\gle_error_message.ns, filling with zeroes...
 m30000| Wed Nov 21 00:55:10.142 [FileAllocator] creating directory /data/db/test0\_tmp
 m30000| Wed Nov 21 00:55:10.204 [FileAllocator] done allocating datafile /data/db/test0\gle_error_message.ns, size: 16MB,  took 0.047 secs
 m30000| Wed Nov 21 00:55:10.204 [FileAllocator] allocating new datafile /data/db/test0\gle_error_message.0, filling with zeroes...
 m30000| Wed Nov 21 00:55:10.391 [FileAllocator] done allocating datafile /data/db/test0\gle_error_message.0, size: 64MB,  took 0.191 secs
 m30000| Wed Nov 21 00:55:10.391 [FileAllocator] allocating new datafile /data/db/test0\gle_error_message.1, filling with zeroes...
 m30000| Wed Nov 21 00:55:10.391 [conn4] build index gle_error_message.coll { _id: 1 }
 m30000| Wed Nov 21 00:55:10.391 [conn4] build index done.  scanned 0 total records. 0.001 secs
 m30000| Wed Nov 21 00:55:10.391 [conn4] info: creating collection gle_error_message.coll on add index
 m30000| Wed Nov 21 00:55:10.391 [conn4] insert gle_error_message.system.indexes keyUpdates:0 locks(micros) w:243814 243ms
 m30999| Wed Nov 21 00:55:10.391 [conn1] CMD: shardcollection: { shardCollection: "gle_error_message.coll", key: { _id: 1.0 } }
 m30999| Wed Nov 21 00:55:10.391 [conn1] enable sharding on: gle_error_message.coll with shard key: { _id: 1.0 }
 m30999| Wed Nov 21 00:55:10.391 [conn1] going to create 1 chunk(s) for: gle_error_message.coll using new epoch 50ac6cbefaf2e30ed017368f
 m30999| Wed Nov 21 00:55:10.391 [conn1] major version query from 0|0||50ac6cbefaf2e30ed017368f and over 0 shards is { ns: "gle_error_message.coll", $or: [ { lastmod: { $gte: Timestamp 0|0 } } ] }
 m30999| Wed Nov 21 00:55:10.391 [conn1] found 1 new chunks for collection gle_error_message.coll (tracking 1), new version is 0000000001DFAA00
 m30999| Wed Nov 21 00:55:10.391 [conn1] loaded 1 chunks into new chunk manager for gle_error_message.coll with version 1|0||50ac6cbefaf2e30ed017368f
 m30999| Wed Nov 21 00:55:10.391 [conn1] ChunkManager: time to load chunks for gle_error_message.coll: 0ms sequenceNumber: 2 version: 1|0||50ac6cbefaf2e30ed017368f based on: (empty)
 m29000| Wed Nov 21 00:55:10.391 [conn4] build index config.collections { _id: 1 }
 m29000| Wed Nov 21 00:55:10.391 [conn4] build index done.  scanned 0 total records. 0 secs
 m30999| Wed Nov 21 00:55:10.391 [conn1]  have to set shard version for conn: localhost:30000 ns:gle_error_message.coll my last seq: 0  current: 2 version: 1|0||50ac6cbefaf2e30ed017368f manager: 0000000001DFA8F0
 m30999| Wed Nov 21 00:55:10.391 [conn1]     setShardVersion  shard0000 localhost:30000  gle_error_message.coll  { setShardVersion: "gle_error_message.coll", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('50ac6cbefaf2e30ed017368f'), serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), shard: "shard0000", shardHost: "localhost:30000" } 0000000001DF6160 2
 m30999| Wed Nov 21 00:55:10.391 [conn1]        setShardVersion failed!
 m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "gle_error_message.coll", need_authoritative: true, errmsg: "first time for collection 'gle_error_message.coll'", ok: 0.0 }
 m30999| Wed Nov 21 00:55:10.391 [conn1]  have to set shard version for conn: localhost:30000 ns:gle_error_message.coll my last seq: 0  current: 2 version: 1|0||50ac6cbefaf2e30ed017368f manager: 0000000001DFA8F0
 m30999| Wed Nov 21 00:55:10.391 [conn1]     setShardVersion  shard0000 localhost:30000  gle_error_message.coll  { setShardVersion: "gle_error_message.coll", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('50ac6cbefaf2e30ed017368f'), serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0000000001DF6160 2
 m30000| Wed Nov 21 00:55:10.391 [conn3] no current chunk manager found for this shard, will initialize
 m29000| Wed Nov 21 00:55:10.391 [initandlisten] connection accepted from 127.0.0.1:57593 #12 (10 connections now open)
 m30999| Wed Nov 21 00:55:10.407 [conn1]       setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
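(The "setShardVersion failed!" a few lines up is the normal first-contact handshake, not the test failure: the shard reports need_authoritative: true because it has no chunk manager for the collection yet, and mongos immediately retries with authoritative: true, which succeeds here.)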
 m30999| Wed Nov 21 00:55:10.407 [conn1] resetting shard version of gle_error_message.coll on localhost:30001, version is zero
 m30999| Wed Nov 21 00:55:10.407 [conn1]  have to set shard version for conn: localhost:30001 ns:gle_error_message.coll my last seq: 0  current: 2 version: 0|0||000000000000000000000000 manager: 0000000001DFA8F0
 m30999| Wed Nov 21 00:55:10.407 [conn1]     setShardVersion  shard0001 localhost:30001  gle_error_message.coll  { setShardVersion: "gle_error_message.coll", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), shard: "shard0001", shardHost: "localhost:30001" } 0000000001DF6A60 2
 m30999| Wed Nov 21 00:55:10.407 [conn1]       setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
 m30999| Wed Nov 21 00:55:10.407 [conn1] resetting shard version of gle_error_message.coll on localhost:30002, version is zero
 m30999| Wed Nov 21 00:55:10.407 [conn1]  have to set shard version for conn: localhost:30002 ns:gle_error_message.coll my last seq: 0  current: 2 version: 0|0||000000000000000000000000 manager: 0000000001DFA8F0
 m30999| Wed Nov 21 00:55:10.407 [conn1]     setShardVersion  shard0002 localhost:30002  gle_error_message.coll  { setShardVersion: "gle_error_message.coll", configdb: "localhost:29000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), shard: "shard0002", shardHost: "localhost:30002" } 0000000001DF9840 2
 m30999| Wed Nov 21 00:55:10.407 [conn1]       setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
 m30999| Wed Nov 21 00:55:10.407 [conn1] Request::process end ns: admin.$cmd msg id: 9 op: 2004 attempt: 0 252ms
{ "collectionsharded" : "gle_error_message.coll", "ok" : 1 }
 m30999| Wed Nov 21 00:55:10.407 [conn1] Request::process begin ns: admin.$cmd msg id: 10 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.407 [conn1] single query: admin.$cmd  { split: "gle_error_message.coll", middle: { _id: 0.0 } }  ntoreturn: -1 options : 0
 m30999| Wed Nov 21 00:55:10.407 [conn1] splitting: gle_error_message.coll  shard: ns:gle_error_message.collshard: shard0000:localhost:30000lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey }
 m29000| Wed Nov 21 00:55:10.407 [initandlisten] connection accepted from 127.0.0.1:57594 #13 (11 connections now open)
 m30000| Wed Nov 21 00:55:10.407 [conn4] received splitChunk request: { splitChunk: "gle_error_message.coll", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], shardId: "gle_error_message.coll-_id_MinKey", configdb: "localhost:29000" }
 m30000| Wed Nov 21 00:55:10.407 [conn4] created new distributed lock for gle_error_message.coll on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m30000| Wed Nov 21 00:55:10.407 [LockPinger] creating distributed lock ping thread for localhost:29000 and process AMAZONA-JD1A6QA:30000:1353477310:41 (sleeping for 30000ms)
 m30000| Wed Nov 21 00:55:10.407 [conn4] distributed lock 'gle_error_message.coll/AMAZONA-JD1A6QA:30000:1353477310:41' acquired, ts : 50ac6cbecd24b37a7591f578
 m30000| Wed Nov 21 00:55:10.407 [conn4] splitChunk accepted at version 1|0||50ac6cbefaf2e30ed017368f
 m30000| Wed Nov 21 00:55:10.407 [conn4] about to log metadata event: { _id: "AMAZONA-JD1A6QA-2012-11-21T05:55:10-0", server: "AMAZONA-JD1A6QA", clientAddr: "127.0.0.1:57590", time: new Date(1353477310407), what: "split", ns: "gle_error_message.coll", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('50ac6cbefaf2e30ed017368f') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('50ac6cbefaf2e30ed017368f') } } }
 m29000| Wed Nov 21 00:55:10.407 [conn12] build index config.changelog { _id: 1 }
 m29000| Wed Nov 21 00:55:10.407 [conn12] build index done.  scanned 0 total records. 0 secs
 m30000| Wed Nov 21 00:55:10.407 [conn4] distributed lock 'gle_error_message.coll/AMAZONA-JD1A6QA:30000:1353477310:41' unlocked. 
 m30999| Wed Nov 21 00:55:10.407 [conn1] loading chunk manager for collection gle_error_message.coll using old chunk manager w/ version 1|0||50ac6cbefaf2e30ed017368f and 1 chunks
 m30999| Wed Nov 21 00:55:10.407 [conn1] major version query from 1|0||50ac6cbefaf2e30ed017368f and over 1 shards is { ns: "gle_error_message.coll", $or: [ { lastmod: { $gte: Timestamp 1000|0 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 1000|0 } } ] }
 m30999| Wed Nov 21 00:55:10.407 [conn1] found 2 new chunks for collection gle_error_message.coll (tracking 2), new version is 0000000001DFF8D0
 m30999| Wed Nov 21 00:55:10.407 [conn1] loaded 2 chunks into new chunk manager for gle_error_message.coll with version 1|2||50ac6cbefaf2e30ed017368f
 m30999| Wed Nov 21 00:55:10.407 [conn1] ChunkManager: time to load chunks for gle_error_message.coll: 0ms sequenceNumber: 3 version: 1|2||50ac6cbefaf2e30ed017368f based on: 1|0||50ac6cbefaf2e30ed017368f
 m30999| Wed Nov 21 00:55:10.407 [conn1] Request::process end ns: admin.$cmd msg id: 10 op: 2004 attempt: 0 9ms
{ "ok" : 1 }
 m30999| Wed Nov 21 00:55:10.407 [conn1] Request::process begin ns: admin.$cmd msg id: 11 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.407 [conn1] single query: admin.$cmd  { moveChunk: "gle_error_message.coll", find: { _id: 0.0 }, to: "shard0001" }  ntoreturn: -1 options : 0
 m30999| Wed Nov 21 00:55:10.407 [conn1] CMD: movechunk: { moveChunk: "gle_error_message.coll", find: { _id: 0.0 }, to: "shard0001" }
 m30999| Wed Nov 21 00:55:10.407 [conn1] moving chunk ns: gle_error_message.coll moving ( ns:gle_error_message.collshard: shard0000:localhost:30000lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey }) shard0000:localhost:30000 -> shard0001:localhost:30001
 m30000| Wed Nov 21 00:55:10.407 [conn4] received moveChunk request: { moveChunk: "gle_error_message.coll", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "gle_error_message.coll-_id_0.0", configdb: "localhost:29000", secondaryThrottle: false }
 m30000| Wed Nov 21 00:55:10.407 [conn4] created new distributed lock for gle_error_message.coll on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m30000| Wed Nov 21 00:55:10.407 [conn4] distributed lock 'gle_error_message.coll/AMAZONA-JD1A6QA:30000:1353477310:41' acquired, ts : 50ac6cbecd24b37a7591f579
 m30000| Wed Nov 21 00:55:10.407 [conn4] about to log metadata event: { _id: "AMAZONA-JD1A6QA-2012-11-21T05:55:10-1", server: "AMAZONA-JD1A6QA", clientAddr: "127.0.0.1:57590", time: new Date(1353477310407), what: "moveChunk.start", ns: "gle_error_message.coll", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
 m30000| Wed Nov 21 00:55:10.407 [conn4] moveChunk request accepted at version 1|2||50ac6cbefaf2e30ed017368f
 m30000| Wed Nov 21 00:55:10.407 [conn4] moveChunk number of documents: 0
 m30001| Wed Nov 21 00:55:10.407 [initandlisten] connection accepted from 127.0.0.1:57595 #5 (5 connections now open)
 m30000| Wed Nov 21 00:55:10.407 [initandlisten] connection accepted from 127.0.0.1:57596 #5 (5 connections now open)
 m30000| Wed Nov 21 00:55:10.423 [conn4] moveChunk data transfer progress: { active: true, ns: "gle_error_message.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Wed Nov 21 00:55:10.423 [FileAllocator] allocating new datafile /data/db/test1\gle_error_message.ns, filling with zeroes...
 m30001| Wed Nov 21 00:55:10.423 [FileAllocator] creating directory /data/db/test1\_tmp
 m30000| Wed Nov 21 00:55:10.438 [conn4] moveChunk data transfer progress: { active: true, ns: "gle_error_message.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Wed Nov 21 00:55:10.454 [conn4] moveChunk data transfer progress: { active: true, ns: "gle_error_message.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Wed Nov 21 00:55:10.469 [conn4] moveChunk data transfer progress: { active: true, ns: "gle_error_message.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Wed Nov 21 00:55:10.469 [FileAllocator] done allocating datafile /data/db/test1\gle_error_message.ns, size: 16MB,  took 0.053 secs
 m30001| Wed Nov 21 00:55:10.469 [FileAllocator] allocating new datafile /data/db/test1\gle_error_message.0, filling with zeroes...
 m30000| Wed Nov 21 00:55:10.501 [conn4] moveChunk data transfer progress: { active: true, ns: "gle_error_message.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Wed Nov 21 00:55:10.547 [conn4] moveChunk data transfer progress: { active: true, ns: "gle_error_message.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Wed Nov 21 00:55:10.625 [conn4] moveChunk data transfer progress: { active: true, ns: "gle_error_message.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Wed Nov 21 00:55:10.688 [FileAllocator] done allocating datafile /data/db/test1\gle_error_message.0, size: 64MB,  took 0.212 secs
 m30001| Wed Nov 21 00:55:10.688 [FileAllocator] allocating new datafile /data/db/test1\gle_error_message.1, filling with zeroes...
 m30001| Wed Nov 21 00:55:10.688 [migrateThread] build index gle_error_message.coll { _id: 1 }
 m30001| Wed Nov 21 00:55:10.688 [migrateThread] build index done.  scanned 0 total records. 0.001 secs
 m30001| Wed Nov 21 00:55:10.688 [migrateThread] info: creating collection gle_error_message.coll on add index
 m30001| Wed Nov 21 00:55:10.688 [migrateThread] Waiting for replication to catch up before entering critical section
 m30001| Wed Nov 21 00:55:10.688 [migrateThread] migrate commit succeeded flushing to secondaries for 'gle_error_message.coll' { _id: 0.0 } -> { _id: MaxKey }
 m30001| Wed Nov 21 00:55:10.688 [migrateThread] migrate commit flushed to journal for 'gle_error_message.coll' { _id: 0.0 } -> { _id: MaxKey }
 m30000| Wed Nov 21 00:55:10.766 [conn4] moveChunk data transfer progress: { active: true, ns: "gle_error_message.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Wed Nov 21 00:55:10.766 [conn4] moveChunk setting version to: 2|0||50ac6cbefaf2e30ed017368f
 m30001| Wed Nov 21 00:55:10.766 [initandlisten] connection accepted from 127.0.0.1:57598 #6 (6 connections now open)
 m30001| Wed Nov 21 00:55:10.766 [conn6] Waiting for commit to finish
 m30001| Wed Nov 21 00:55:10.781 [conn6] Waiting for commit to finish
 m30001| Wed Nov 21 00:55:10.781 [migrateThread] migrate commit succeeded flushing to secondaries for 'gle_error_message.coll' { _id: 0.0 } -> { _id: MaxKey }
 m30001| Wed Nov 21 00:55:10.781 [migrateThread] migrate commit flushed to journal for 'gle_error_message.coll' { _id: 0.0 } -> { _id: MaxKey }
 m30001| Wed Nov 21 00:55:10.781 [migrateThread] about to log metadata event: { _id: "AMAZONA-JD1A6QA-2012-11-21T05:55:10-0", server: "AMAZONA-JD1A6QA", clientAddr: ":27017", time: new Date(1353477310781), what: "moveChunk.to", ns: "gle_error_message.coll", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, step1 of 5: 272, step2 of 5: 5, step3 of 5: 0, step4 of 5: 0, step5 of 5: 80 } }
 m29000| Wed Nov 21 00:55:10.781 [initandlisten] connection accepted from 127.0.0.1:57600 #14 (12 connections now open)
 m30000| Wed Nov 21 00:55:10.797 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "gle_error_message.coll", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
 m30000| Wed Nov 21 00:55:10.797 [conn4] moveChunk updating self version to: 2|1||50ac6cbefaf2e30ed017368f through { _id: MinKey } -> { _id: 0.0 } for collection 'gle_error_message.coll'
 m29000| Wed Nov 21 00:55:10.797 [initandlisten] connection accepted from 127.0.0.1:57601 #15 (13 connections now open)
 m30000| Wed Nov 21 00:55:10.797 [conn4] about to log metadata event: { _id: "AMAZONA-JD1A6QA-2012-11-21T05:55:10-2", server: "AMAZONA-JD1A6QA", clientAddr: "127.0.0.1:57590", time: new Date(1353477310797), what: "moveChunk.commit", ns: "gle_error_message.coll", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", to: "shard0001" } }
 m30000| Wed Nov 21 00:55:10.797 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30000| Wed Nov 21 00:55:10.797 [conn4] MigrateFromStatus::done Global lock acquired
 m30000| Wed Nov 21 00:55:10.797 [conn4] doing delete inline
 m30000| Wed Nov 21 00:55:10.797 [conn4] moveChunk starting delete for: gle_error_message.coll from { _id: 0.0 } -> { _id: MaxKey }
 m30000| Wed Nov 21 00:55:10.797 [conn4] moveChunk deleted 0 documents for gle_error_message.coll from { _id: 0.0 } -> { _id: MaxKey }
 m30000| Wed Nov 21 00:55:10.797 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30000| Wed Nov 21 00:55:10.797 [conn4] MigrateFromStatus::done Global lock acquired
 m30000| Wed Nov 21 00:55:10.797 [conn4] distributed lock 'gle_error_message.coll/AMAZONA-JD1A6QA:30000:1353477310:41' unlocked. 
 m30000| Wed Nov 21 00:55:10.797 [conn4] about to log metadata event: { _id: "AMAZONA-JD1A6QA-2012-11-21T05:55:10-3", server: "AMAZONA-JD1A6QA", clientAddr: "127.0.0.1:57590", time: new Date(1353477310797), what: "moveChunk.from", ns: "gle_error_message.coll", details: { min: { _id: 0.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 344, step5 of 6: 32, step6 of 6: 0 } }
{ "millis" : 382, "ok" : 1 }
 m30000| Wed Nov 21 00:55:10.797 [conn4] command admin.$cmd command: { moveChunk: "gle_error_message.coll", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "gle_error_message.coll-_id_0.0", configdb: "localhost:29000", secondaryThrottle: false } ntoreturn:1 keyUpdates:0 locks(micros) W:20 r:85 w:45 reslen:37 380ms
 m30000| Wed Nov 21 00:55:10.813 [FileAllocator] done allocating datafile /data/db/test0\gle_error_message.1, size: 128MB,  took 0.42 secs
 m30999| Wed Nov 21 00:55:10.797 [conn1] moveChunk result: { ok: 1.0 }
 m30999| Wed Nov 21 00:55:10.797 [conn1] loading chunk manager for collection gle_error_message.coll using old chunk manager w/ version 1|2||50ac6cbefaf2e30ed017368f and 2 chunks
 m30999| Wed Nov 21 00:55:10.797 [conn1] major version query from 1|2||50ac6cbefaf2e30ed017368f and over 1 shards is { ns: "gle_error_message.coll", $or: [ { lastmod: { $gte: Timestamp 1000|2 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 1000|2 } } ] }
 m30999| Wed Nov 21 00:55:10.797 [conn1] found 2 new chunks for collection gle_error_message.coll (tracking 2), new version is 0000000001DFA520
 m30999| Wed Nov 21 00:55:10.797 [conn1] loaded 2 chunks into new chunk manager for gle_error_message.coll with version 2|1||50ac6cbefaf2e30ed017368f
 m30999| Wed Nov 21 00:55:10.797 [conn1] ChunkManager: time to load chunks for gle_error_message.coll: 0ms sequenceNumber: 4 version: 2|1||50ac6cbefaf2e30ed017368f based on: 1|2||50ac6cbefaf2e30ed017368f
 m30999| Wed Nov 21 00:55:10.797 [conn1] Request::process end ns: admin.$cmd msg id: 11 op: 2004 attempt: 0 382ms
 m30999| Wed Nov 21 00:55:10.813 [conn1] Request::process begin ns: config.version msg id: 12 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.813 [conn1] shard query: config.version  {}
 m30999| Wed Nov 21 00:55:10.813 [conn1] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
 m30999| Wed Nov 21 00:55:10.813 [conn1] initializing over 1 shards required by [unsharded @ config:localhost:29000]
 m30999| Wed Nov 21 00:55:10.813 [conn1] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] finishing over 1 shards
 m30999| Wed Nov 21 00:55:10.813 [conn1] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] Request::process end ns: config.version msg id: 12 op: 2004 attempt: 0 0ms
 m30999| Wed Nov 21 00:55:10.813 [conn1] Request::process begin ns: config.version msg id: 13 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.813 [conn1] shard query: config.version  {}
 m30999| Wed Nov 21 00:55:10.813 [conn1] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
 m30999| Wed Nov 21 00:55:10.813 [conn1] initializing over 1 shards required by [unsharded @ config:localhost:29000]
 m30999| Wed Nov 21 00:55:10.813 [conn1] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] finishing over 1 shards
 m30999| Wed Nov 21 00:55:10.813 [conn1] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: 1, version: 3 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] Request::process end ns: config.version msg id: 13 op: 2004 attempt: 0 0ms
 m30999| Wed Nov 21 00:55:10.813 [conn1] Request::process begin ns: config.shards msg id: 14 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.813 [conn1] shard query: config.shards  { query: {}, orderby: { _id: 1.0 } }
 m30999| Wed Nov 21 00:55:10.813 [conn1] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
 m30999| Wed Nov 21 00:55:10.813 [conn1] initializing over 1 shards required by [unsharded @ config:localhost:29000]
 m30999| Wed Nov 21 00:55:10.813 [conn1] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] finishing over 1 shards
 m30999| Wed Nov 21 00:55:10.813 [conn1] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] Request::process end ns: config.shards msg id: 14 op: 2004 attempt: 0 0ms
 m30999| Wed Nov 21 00:55:10.813 [conn1] Request::process begin ns: config.databases msg id: 15 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.813 [conn1] shard query: config.databases  { query: {}, orderby: { name: 1.0 } }
 m30999| Wed Nov 21 00:55:10.813 [conn1] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
 m30999| Wed Nov 21 00:55:10.813 [conn1] initializing over 1 shards required by [unsharded @ config:localhost:29000]
 m30999| Wed Nov 21 00:55:10.813 [conn1] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] finishing over 1 shards
 m30999| Wed Nov 21 00:55:10.813 [conn1] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] Request::process end ns: config.databases msg id: 15 op: 2004 attempt: 0 0ms
 m30999| Wed Nov 21 00:55:10.813 [conn1] Request::process begin ns: config.collections msg id: 16 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.813 [conn1] shard query: config.collections  { query: { _id: /^gle_error_message\./ }, orderby: { _id: 1.0 } }
 m30999| Wed Nov 21 00:55:10.813 [conn1] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^gle_error_message\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
 m30999| Wed Nov 21 00:55:10.813 [conn1] initializing over 1 shards required by [unsharded @ config:localhost:29000]
 m30999| Wed Nov 21 00:55:10.813 [conn1] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] finishing over 1 shards
 m30999| Wed Nov 21 00:55:10.813 [conn1] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "gle_error_message.coll", lastmod: new Date(1353477310), dropped: false, key: { _id: 1.0 }, unique: false, lastmodEpoch: ObjectId('50ac6cbefaf2e30ed017368f') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
 m30999| Wed Nov 21 00:55:10.813 [conn1] Request::process end ns: config.collections msg id: 16 op: 2004 attempt: 0 0ms
 m30999| Wed Nov 21 00:55:10.813 [conn1] Request::process begin ns: config.$cmd msg id: 17 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.813 [conn1] single query: config.$cmd  { group: { cond: { ns: "gle_error_message.coll" }, key: { shard: 1.0 }, initial: { nChunks: 0.0 }, ns: "chunks", $reduce: function ( doc , out ){ out.nChunks++; } } }  ntoreturn: -1 options : 0
 m29000| Wed Nov 21 00:55:10.859 [conn11] timeoutMs not support for v8 yet  code: $reduce = function ( doc , out ){ out.nChunks++; }
 m29000| in gc
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process end ns: config.$cmd msg id: 17 op: 2004 attempt: 0 37ms
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process begin ns: config.chunks msg id: 18 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.859 [conn1] shard query: config.chunks  { query: { ns: "gle_error_message.coll" }, orderby: { min: 1.0 } }
 m30999| Wed Nov 21 00:55:10.859 [conn1] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "gle_error_message.coll" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
 m30999| Wed Nov 21 00:55:10.859 [conn1] initializing over 1 shards required by [unsharded @ config:localhost:29000]
 m30999| Wed Nov 21 00:55:10.859 [conn1] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.859 [conn1] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.859 [conn1] finishing over 1 shards
 m30999| Wed Nov 21 00:55:10.859 [conn1] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.859 [conn1] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: { _id: "gle_error_message.coll-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('50ac6cbefaf2e30ed017368f'), ns: "gle_error_message.coll", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process end ns: config.chunks msg id: 18 op: 2004 attempt: 0 0ms
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process begin ns: config.tags msg id: 19 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.859 [conn1] shard query: config.tags  { query: { ns: "gle_error_message.coll" }, orderby: { min: 1.0 } }
 m30999| Wed Nov 21 00:55:10.859 [conn1] creating pcursor over QSpec { ns: "config.tags", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "gle_error_message.coll" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
 m30999| Wed Nov 21 00:55:10.859 [conn1] initializing over 1 shards required by [unsharded @ config:localhost:29000]
 m30999| Wed Nov 21 00:55:10.859 [conn1] initializing on shard config:localhost:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.859 [conn1] initialized query (lazily) on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.859 [conn1] finishing over 1 shards
 m30999| Wed Nov 21 00:55:10.859 [conn1] finishing on shard config:localhost:29000, current connection state is { state: { conn: "localhost:29000", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
 m30999| Wed Nov 21 00:55:10.859 [conn1] finished on shard config:localhost:29000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status --- 
  sharding version: { "_id" : 1, "version" : 3 }
  shards:
	{  "_id" : "shard0000",  "host" : "localhost:30000" }
	{  "_id" : "shard0001",  "host" : "localhost:30001" }
	{  "_id" : "shard0002",  "host" : "localhost:30002" }
  databases:
	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
	{  "_id" : "gle_error_message",  "partitioned" : true,  "primary" : "shard0000" }
		gle_error_message.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0000	1
				shard0001	1
			{ "_id" : { "$MinKey" : true } } -->> { "_id" : 0 } on : shard0000 { "t" : 2000, "i" : 1 } 
			{ "_id" : 0 } -->> { "_id" : { "$MaxKey" : true } } on : shard0001 { "t" : 2000, "i" : 0 } 
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process end ns: config.tags msg id: 19 op: 2004 attempt: 0 0ms
 
 
 
----
Testing GLE...
----
 
 
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process begin ns: gle_error_message.coll msg id: 20 op: 2002 attempt: 0
 m30999| Wed Nov 21 00:55:10.859 [conn1] write: gle_error_message.coll
 m30999| Wed Nov 21 00:55:10.859 [conn1]  have to set shard version for conn: localhost:30000 ns:gle_error_message.coll my last seq: 2  current: 4 version: 2|1||50ac6cbefaf2e30ed017368f manager: 0000000001DFA410
 m30999| Wed Nov 21 00:55:10.859 [conn1]     setShardVersion  shard0000 localhost:30000  gle_error_message.coll  { setShardVersion: "gle_error_message.coll", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('50ac6cbefaf2e30ed017368f'), serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), shard: "shard0000", shardHost: "localhost:30000" } 0000000001DF6160 4
 m30999| Wed Nov 21 00:55:10.859 [conn1]       setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('50ac6cbefaf2e30ed017368f'), ok: 1.0 }
 m30999| Wed Nov 21 00:55:10.859 [conn1] about to initiate autosplit: ns:gle_error_message.collshard: shard0000:localhost:30000lastmod: 2|1||000000000000000000000000min: { _id: MinKey }max: { _id: 0.0 } dataWritten: 5457627 splitThreshold: 471859
 m30999| Wed Nov 21 00:55:10.859 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process end ns: gle_error_message.coll msg id: 20 op: 2002 attempt: 0 1ms
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process begin ns: gle_error_message.coll msg id: 21 op: 2002 attempt: 0
 m30999| Wed Nov 21 00:55:10.859 [conn1] write: gle_error_message.coll
 m30999| Wed Nov 21 00:55:10.859 [conn1]  have to set shard version for conn: localhost:30001 ns:gle_error_message.coll my last seq: 2  current: 4 version: 2|0||50ac6cbefaf2e30ed017368f manager: 0000000001DFA410
 m30999| Wed Nov 21 00:55:10.859 [conn1]     setShardVersion  shard0001 localhost:30001  gle_error_message.coll  { setShardVersion: "gle_error_message.coll", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('50ac6cbefaf2e30ed017368f'), serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), shard: "shard0001", shardHost: "localhost:30001" } 0000000001DF6A60 4
 m30999| Wed Nov 21 00:55:10.859 [conn1]        setShardVersion failed!
 m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "gle_error_message.coll", need_authoritative: true, errmsg: "first time for collection 'gle_error_message.coll'", ok: 0.0 }
 m30999| Wed Nov 21 00:55:10.859 [conn1]  have to set shard version for conn: localhost:30001 ns:gle_error_message.coll my last seq: 2  current: 4 version: 2|0||50ac6cbefaf2e30ed017368f manager: 0000000001DFA410
 m30999| Wed Nov 21 00:55:10.859 [conn1]     setShardVersion  shard0001 localhost:30001  gle_error_message.coll  { setShardVersion: "gle_error_message.coll", configdb: "localhost:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('50ac6cbefaf2e30ed017368f'), serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0000000001DF6A60 4
 m30001| Wed Nov 21 00:55:10.859 [conn3] no current chunk manager found for this shard, will initialize
 m30999| Wed Nov 21 00:55:10.859 [conn1]       setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
 m30999| Wed Nov 21 00:55:10.859 [conn1] about to initiate autosplit: ns:gle_error_message.collshard: shard0001:localhost:30001lastmod: 2|0||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey } dataWritten: 5457627 splitThreshold: 471859
 m30999| Wed Nov 21 00:55:10.859 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process end ns: gle_error_message.coll msg id: 21 op: 2002 attempt: 0 2ms
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process begin ns: gle_error_message.$cmd msg id: 22 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:10.859 [conn1] single query: gle_error_message.$cmd  { getlasterror: 1.0 }  ntoreturn: -1 options : 0
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process end ns: gle_error_message.$cmd msg id: 22 op: 2004 attempt: 0 0ms
 
 
----
GLE : {
	"singleShard" : "localhost:30001",
	"n" : 0,
	"connectionId" : 3,
	"err" : null,
	"ok" : 1
}
----
 
 
 
 
----
Testing GLE when writeback host goes down...
----
 
 
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process begin ns: gle_error_message.coll msg id: 23 op: 2002 attempt: 0
 m30999| Wed Nov 21 00:55:10.859 [conn1] write: gle_error_message.coll
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process end ns: gle_error_message.coll msg id: 23 op: 2002 attempt: 0 0ms
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process begin ns: gle_error_message.coll msg id: 24 op: 2002 attempt: 0
 m30999| Wed Nov 21 00:55:10.859 [conn1] write: gle_error_message.coll
 m30999| Wed Nov 21 00:55:10.859 [conn1] Request::process end ns: gle_error_message.coll msg id: 24 op: 2002 attempt: 0 0ms
 m30000| Wed Nov 21 00:55:10.859 [initandlisten] connection accepted from 127.0.0.1:57602 #6 (6 connections now open)
 m30000| Wed Nov 21 00:55:10.859 [conn6] terminating, shutdown command received
 m30000| Wed Nov 21 00:55:10.859 dbexit: shutdown called
 m30000| Wed Nov 21 00:55:10.859 [conn6] shutdown: going to close listening sockets...
 m30000| Wed Nov 21 00:55:10.859 [conn6] closing listening socket: 468
 m30000| Wed Nov 21 00:55:10.859 [conn6] closing listening socket: 480
 m30000| Wed Nov 21 00:55:10.859 [conn6] shutdown: going to flush diaglog...
 m30000| Wed Nov 21 00:55:10.859 [conn6] shutdown: going to close sockets...
 m30000| Wed Nov 21 00:55:10.859 [conn6] shutdown: waiting for fs preallocator...
 m30000| Wed Nov 21 00:55:10.859 [conn6] shutdown: lock for final commit...
 m30000| Wed Nov 21 00:55:10.859 [conn6] shutdown: final commit...
 m30999| Wed Nov 21 00:55:10.859 [WriteBackListener-localhost:30000] Socket recv() conn closed? 127.0.0.1:30000
 m30999| Wed Nov 21 00:55:10.859 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000] 
 m30999| Wed Nov 21 00:55:10.859 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed
 m30999| Wed Nov 21 00:55:10.859 [WriteBackListener-localhost:30000] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('50ac6cbdfaf2e30ed017368d') }
 m30999| Wed Nov 21 00:55:10.859 [WriteBackListener-localhost:30000] Detecting bad connection created at 0 microSec, clearing pool for localhost:30000
 m30999| Wed Nov 21 00:55:10.859 [WriteBackListener-localhost:30000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('50ac6cbdfaf2e30ed017368d') }
 m30000| Wed Nov 21 00:55:10.859 [conn1] end connection 127.0.0.1:57564 (5 connections now open)
 m30000| Wed Nov 21 00:55:10.859 [conn3] end connection 127.0.0.1:57585 (5 connections now open)
 m30000| Wed Nov 21 00:55:10.859 [conn4] end connection 127.0.0.1:57590 (5 connections now open)
 m30000| Wed Nov 21 00:55:10.859 [conn5] end connection 127.0.0.1:57596 (5 connections now open)
 m29000| Wed Nov 21 00:55:10.859 [conn12] end connection 127.0.0.1:57593 (12 connections now open)
 m29000| Wed Nov 21 00:55:10.859 [conn13] end connection 127.0.0.1:57594 (11 connections now open)
 m30001| Wed Nov 21 00:55:10.875 [conn6] end connection 127.0.0.1:57598 (5 connections now open)
 m30001| Wed Nov 21 00:55:10.875 [conn5] end connection 127.0.0.1:57595 (4 connections now open)
Wed Nov 21 00:55:10.875 DBClientCursor::init call() failed
 m29000| Wed Nov 21 00:55:10.875 [conn15] end connection 127.0.0.1:57601 (10 connections now open)
 m30001| Wed Nov 21 00:55:11.078 [FileAllocator] done allocating datafile /data/db/test1\gle_error_message.1, size: 128MB,  took 0.394 secs
 m30999| Wed Nov 21 00:55:11.889 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
 m30999| Wed Nov 21 00:55:11.889 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:12.919 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
 m30000| Wed Nov 21 00:55:14.057 [conn6] shutdown: closing all files...
 m30000| Wed Nov 21 00:55:14.057 [conn6] closeAllFiles() finished
 m30000| Wed Nov 21 00:55:14.057 [conn6] journalCleanup...
 m30000| Wed Nov 21 00:55:14.073 [conn6] removeJournalFiles
 m30000| Wed Nov 21 00:55:14.073 [conn6] shutdown: removing fs lock...
 m30000| Wed Nov 21 00:55:14.073 dbexit: really exiting now
Wed Nov 21 00:55:14.931 shell: stopped mongo program on port 30000
 m30999| Wed Nov 21 00:55:14.931 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
 m30999| Wed Nov 21 00:55:14.931 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:14.931 [conn1] Request::process begin ns: gle_error_message.$cmd msg id: 26 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:14.931 [conn1] single query: gle_error_message.$cmd  { getlasterror: 1.0 }  ntoreturn: -1 options : 0
 m30999| Wed Nov 21 00:55:14.931 [conn1] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:30000
 m30999| Wed Nov 21 00:55:14.931 [conn1] Detecting bad connection created at 0 microSec, clearing pool for localhost:30000
 m30999| Wed Nov 21 00:55:14.931 [conn1] could not clear last error from shard localhost:30000 :: caused by :: socket exception [SEND_ERROR] for 127.0.0.1:30000
 m30999| Wed Nov 21 00:55:14.931 [conn1] Request::process end ns: gle_error_message.$cmd msg id: 26 op: 2004 attempt: 0 0ms
 
 
----
GLE : {
	"singleShard" : "localhost:30001",
	"n" : 0,
	"connectionId" : 3,
	"err" : null,
	"ok" : 1
}
----
 
 
Wed Nov 21 00:55:14.931 shell: started program mongod.exe --port 30000 --dbpath /data/db/test0
 m30000| Wed Nov 21 00:55:14.962 [initandlisten] MongoDB starting : pid=4000 port=30000 dbpath=/data/db/test0 64-bit host=AMAZONA-JD1A6QA
 m30000| Wed Nov 21 00:55:14.962 [initandlisten] 
 m30000| Wed Nov 21 00:55:14.962 [initandlisten] ** NOTE: This is a development version (2.3.1-pre-) of MongoDB.
 m30000| Wed Nov 21 00:55:14.962 [initandlisten] **       Not recommended for production.
 m30000| Wed Nov 21 00:55:14.962 [initandlisten] 
 m30000| Wed Nov 21 00:55:14.962 [initandlisten] db version v2.3.1-pre-, pdfile version 4.5
 m30000| Wed Nov 21 00:55:14.962 [initandlisten] git version: cb14cf83102cca848e572ffd8b95c6fa35594175
 m30000| Wed Nov 21 00:55:14.962 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49
 m30000| Wed Nov 21 00:55:14.962 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000 }
 m30000| Wed Nov 21 00:55:15.025 [initandlisten] journal dir=/data/db/test0\journal
 m30000| Wed Nov 21 00:55:15.025 [initandlisten] recover : no journal files present, no recovery needed
 m30999| Wed Nov 21 00:55:15.181 [Balancer] Refreshing MaxChunkSize: 50
 m30999| Wed Nov 21 00:55:15.181 [Balancer] skipping balancing round because balancing is disabled
 m30998| Wed Nov 21 00:55:15.571 [Balancer] Refreshing MaxChunkSize: 50
 m30998| Wed Nov 21 00:55:15.571 [Balancer] skipping balancing round because balancing is disabled
 m29000| Wed Nov 21 00:55:15.883 [conn9] update config.mongos query: { _id: "AMAZONA-JD1A6QA:30998" } update: { $set: { ping: new Date(1353477315571), up: 6, waiting: false } } idhack:1 nupdated:1 keyUpdates:0 locks(micros) w:88 321ms
 m30999| Wed Nov 21 00:55:15.945 [WriteBackListener-localhost:30000] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30000
 m30000| Wed Nov 21 00:55:16.039 [initandlisten] preallocateIsFaster=true 16.7
 m30000| Wed Nov 21 00:55:16.195 [initandlisten] waiting for connections on port 30000
 m30000| Wed Nov 21 00:55:16.195 [websvr] admin web console waiting for connections on port 31000
 m30000| Wed Nov 21 00:55:16.663 [initandlisten] connection accepted from 127.0.0.1:57610 #1 (1 connection now open)
 
 
----
Testing GLE when main host goes down...
----
 
 
 m30999| Wed Nov 21 00:55:16.663 [conn1] Request::process begin ns: gle_error_message.coll msg id: 27 op: 2002 attempt: 0
 m30999| Wed Nov 21 00:55:16.663 [conn1] write: gle_error_message.coll
 m30999| Wed Nov 21 00:55:16.663 [conn1] creating new connection to:localhost:30000
 m30999| Wed Nov 21 00:55:16.663 BackgroundJob starting: ConnectBG
 m30000| Wed Nov 21 00:55:16.663 [initandlisten] connection accepted from 127.0.0.1:57612 #2 (2 connections now open)
 m30001| Wed Nov 21 00:55:16.663 [initandlisten] connection accepted from 127.0.0.1:57611 #7 (5 connections now open)
 m30999| Wed Nov 21 00:55:16.663 [conn1] connected connection!
 m30999| Wed Nov 21 00:55:16.663 [conn1] initializing shard connection to localhost:30000
 m30999| Wed Nov 21 00:55:16.663 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:29000", serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), authoritative: true }
 m30001| Wed Nov 21 00:55:16.663 [conn7] terminating, shutdown command received
 m30001| Wed Nov 21 00:55:16.663 dbexit: shutdown called
 m30001| Wed Nov 21 00:55:16.663 [conn7] shutdown: going to close listening sockets...
 m30001| Wed Nov 21 00:55:16.663 [conn7] closing listening socket: 480
 m30001| Wed Nov 21 00:55:16.663 [conn7] closing listening socket: 488
 m30001| Wed Nov 21 00:55:16.663 [conn7] shutdown: going to flush diaglog...
 m30001| Wed Nov 21 00:55:16.663 [conn7] shutdown: going to close sockets...
 m30001| Wed Nov 21 00:55:16.663 [conn7] shutdown: waiting for fs preallocator...
 m30001| Wed Nov 21 00:55:16.663 [conn7] shutdown: lock for final commit...
 m30001| Wed Nov 21 00:55:16.663 [conn7] shutdown: final commit...
 m30001| Wed Nov 21 00:55:16.663 [conn1] end connection 127.0.0.1:57565 (4 connections now open)
 m30999| Wed Nov 21 00:55:16.663 [WriteBackListener-localhost:30001] Socket recv() conn closed? 127.0.0.1:30001
 m30999| Wed Nov 21 00:55:16.663 [conn1] initial sharding result : { initialized: true, ok: 1.0 }
 m30999| Wed Nov 21 00:55:16.663 [WriteBackListener-localhost:30001] SocketException: remote: 127.0.0.1:30001 error: 9001 socket exception [0] server [127.0.0.1:30001] 
 m30999| Wed Nov 21 00:55:16.663 [conn1]  have to set shard version for conn: localhost:30000 ns:gle_error_message.coll my last seq: 0  current: 4 version: 2|1||50ac6cbefaf2e30ed017368f manager: 0000000001DFA410
 m30999| Wed Nov 21 00:55:16.663 [WriteBackListener-localhost:30001] DBClientCursor::init call() failed
 m30999| Wed Nov 21 00:55:16.663 [WriteBackListener-localhost:30001] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30001 ns: admin.$cmd query: { writebacklisten: ObjectId('50ac6cbdfaf2e30ed017368d') }
 m30999| Wed Nov 21 00:55:16.663 [conn1]     setShardVersion  shard0000 localhost:30000  gle_error_message.coll  { setShardVersion: "gle_error_message.coll", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('50ac6cbefaf2e30ed017368f'), serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), shard: "shard0000", shardHost: "localhost:30000" } 0000000001E00F00 4
 m30999| Wed Nov 21 00:55:16.663 [WriteBackListener-localhost:30001] Detecting bad connection created at 0 microSec, clearing pool for localhost:30001
 m30999| Wed Nov 21 00:55:16.663 [WriteBackListener-localhost:30001] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30001 ns: admin.$cmd query: { writebacklisten: ObjectId('50ac6cbdfaf2e30ed017368d') }
 m30999| Wed Nov 21 00:55:16.663 [conn1]        setShardVersion failed!
 m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "gle_error_message.coll", need_authoritative: true, errmsg: "first time for collection 'gle_error_message.coll'", ok: 0.0 }
Wed Nov 21 00:55:16.663 DBClientCursor::init call() failed
 m30001| Wed Nov 21 00:55:16.663 [conn4] end connection 127.0.0.1:57591 (3 connections now open)
 m30001| Wed Nov 21 00:55:16.663 [conn3] end connection 127.0.0.1:57586 (2 connections now open)
 m29000| Wed Nov 21 00:55:16.663 [conn14] end connection 127.0.0.1:57600 (9 connections now open)
 m30999| Wed Nov 21 00:55:16.663 [conn1]  have to set shard version for conn: localhost:30000 ns:gle_error_message.coll my last seq: 0  current: 4 version: 2|1||50ac6cbefaf2e30ed017368f manager: 0000000001DFA410
 m30999| Wed Nov 21 00:55:16.663 [conn1]     setShardVersion  shard0000 localhost:30000  gle_error_message.coll  { setShardVersion: "gle_error_message.coll", configdb: "localhost:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('50ac6cbefaf2e30ed017368f'), serverID: ObjectId('50ac6cbdfaf2e30ed017368d'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0000000001E00F00 4
 m30000| Wed Nov 21 00:55:16.663 [conn2] no current chunk manager found for this shard, will initialize
 m29000| Wed Nov 21 00:55:16.663 [initandlisten] connection accepted from 127.0.0.1:57613 #16 (10 connections now open)
 m30999| Wed Nov 21 00:55:16.663 [conn1]       setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
 m30999| Wed Nov 21 00:55:16.663 [conn1] Request::process end ns: gle_error_message.coll msg id: 27 op: 2002 attempt: 0 3ms
 m30999| Wed Nov 21 00:55:16.663 [conn1] Request::process begin ns: gle_error_message.coll msg id: 28 op: 2002 attempt: 0
 m30999| Wed Nov 21 00:55:16.663 [conn1] write: gle_error_message.coll
 m30999| Wed Nov 21 00:55:16.663 [conn1] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:30001
 m30999| Wed Nov 21 00:55:16.663 [conn1] User Assertion: 16460:error inserting 1 documents to shard shard0001:localhost:30001 at version 2|1||50ac6cbefaf2e30ed017368f :: caused by :: socket exception [SEND_ERROR] for 127.0.0.1:30001
 m30999| Wed Nov 21 00:55:16.663 [conn1] exception during insert :: caused by :: 16460 error inserting 1 documents to shard shard0001:localhost:30001 at version 2|1||50ac6cbefaf2e30ed017368f :: caused by :: socket exception [SEND_ERROR] for 127.0.0.1:30001
 m30999| Wed Nov 21 00:55:16.663 [conn1] AssertionException while processing op type : 2002 to : gle_error_message.coll :: caused by :: 16460 error inserting 1 documents to shard shard0001:localhost:30001 at version 2|1||50ac6cbefaf2e30ed017368f :: caused by :: socket exception [SEND_ERROR] for 127.0.0.1:30001
 m30999| Wed Nov 21 00:55:17.677 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
 m30999| Wed Nov 21 00:55:17.677 BackgroundJob starting: ConnectBG
 m30001| Wed Nov 21 00:55:18.098 [conn7] shutdown: closing all files...
 m30001| Wed Nov 21 00:55:18.098 [conn7] closeAllFiles() finished
 m30001| Wed Nov 21 00:55:18.098 [conn7] journalCleanup...
 m30001| Wed Nov 21 00:55:18.098 [conn7] removeJournalFiles
 m30001| Wed Nov 21 00:55:18.098 [conn7] shutdown: removing fs lock...
 m30001| Wed Nov 21 00:55:18.098 dbexit: really exiting now
Wed Nov 21 00:55:18.691 shell: stopped mongo program on port 30001
 
 
----
Calling GLE! 
----
 
 
 m30999| Wed Nov 21 00:55:18.691 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30001
 m30999| Wed Nov 21 00:55:18.691 [conn1] Request::process begin ns: gle_error_message.$cmd msg id: 30 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:18.691 [conn1] single query: gle_error_message.$cmd  { getlasterror: 1.0 }  ntoreturn: -1 options : 0
 m30999| Wed Nov 21 00:55:18.691 [conn1] Request::process end ns: gle_error_message.$cmd msg id: 30 op: 2004 attempt: 0 0ms
assert failed
Error: Printing Stack Trace
    at printStackTrace (src/mongo/shell/utils.js:37:7)
    at doassert (src/mongo/shell/utils.js:58:1)
    at assert (src/mongo/shell/utils.js:66:1)
    at D:\slave\Windows_64bit\mongo\jstests\sharding\gle_error_message.js:67:5
 
 
----
GLE : assert failed
----
 
 
assert failed
Error: Printing Stack Trace
    at printStackTrace (src/mongo/shell/utils.js:37:7)
    at doassert (src/mongo/shell/utils.js:58:1)
    at assert (src/mongo/shell/utils.js:66:1)
    at D:\slave\Windows_64bit\mongo\jstests\sharding\gle_error_message.js:73:5
Wed Nov 21 00:55:18.691 exec error: src/mongo/shell/utils.js:59 assert failed
throw msg;
      ^
 
failed to load: D:\slave\Windows_64bit\mongo\jstests\sharding\gle_error_message.js
 m29000| Wed Nov 21 00:55:18.722 [initandlisten] connection accepted from 127.0.0.1:57617 #17 (11 connections now open)
 m29000| Wed Nov 21 00:55:18.722 [conn17] terminating, shutdown command received
 m29000| Wed Nov 21 00:55:18.722 dbexit: shutdown called
 m29000| Wed Nov 21 00:55:18.722 [conn17] shutdown: going to close listening sockets...
 m29000| Wed Nov 21 00:55:18.722 [conn17] closing listening socket: 500
 m29000| Wed Nov 21 00:55:18.722 [conn17] shutdown: going to flush diaglog...
 m29000| Wed Nov 21 00:55:18.722 [conn17] shutdown: going to close sockets...
 m29000| Wed Nov 21 00:55:18.722 [conn17] shutdown: waiting for fs preallocator...
 m29000| Wed Nov 21 00:55:18.722 [conn17] shutdown: lock for final commit...
 m29000| Wed Nov 21 00:55:18.722 [conn17] shutdown: final commit...
 m29000| Wed Nov 21 00:55:18.722 [conn4] end connection 127.0.0.1:57572 (10 connections now open)
 m29000| Wed Nov 21 00:55:18.722 [conn5] end connection 127.0.0.1:57573 (10 connections now open)
 m29000| Wed Nov 21 00:55:18.722 [conn6] end connection 127.0.0.1:57575 (10 connections now open)
 m29000| Wed Nov 21 00:55:18.722 [conn2] end connection 127.0.0.1:57568 (10 connections now open)
 m29000| Wed Nov 21 00:55:18.722 [conn16] end connection 127.0.0.1:57613 (10 connections now open)
 m29000| Wed Nov 21 00:55:18.722 [conn10] end connection 127.0.0.1:57580 (10 connections now open)
 m29000| Wed Nov 21 00:55:18.722 [conn1] end connection 127.0.0.1:57567 (10 connections now open)
 m29000| Wed Nov 21 00:55:18.722 [conn9] end connection 127.0.0.1:57579 (10 connections now open)
Wed Nov 21 00:55:18.722 DBClientCursor::init call() failed
 m29000| Wed Nov 21 00:55:18.722 [conn11] end connection 127.0.0.1:57588 (2 connections now open)
 m29000| Wed Nov 21 00:55:18.722 [conn7] end connection 127.0.0.1:57577 (2 connections now open)
 m29000| Wed Nov 21 00:55:18.800 [conn17] shutdown: closing all files...
 m29000| Wed Nov 21 00:55:18.800 [conn17] closeAllFiles() finished
 m29000| Wed Nov 21 00:55:18.800 [conn17] journalCleanup...
 m29000| Wed Nov 21 00:55:18.800 [conn17] removeJournalFiles
 m29000| Wed Nov 21 00:55:18.800 [conn17] shutdown: removing fs lock...
 m29000| Wed Nov 21 00:55:18.800 dbexit: really exiting now
 m30999| Wed Nov 21 00:55:18.956 [WriteBackListener-localhost:30000] creating new connection to:localhost:30000
 m30999| Wed Nov 21 00:55:18.956 BackgroundJob starting: ConnectBG
 m30999| Wed Nov 21 00:55:18.956 [WriteBackListener-localhost:30000] connected connection!
 m30000| Wed Nov 21 00:55:18.956 [initandlisten] connection accepted from 127.0.0.1:57618 #3 (3 connections now open)
 m30000| Wed Nov 21 00:55:19.736 [initandlisten] connection accepted from 127.0.0.1:57619 #4 (4 connections now open)
 m30000| Wed Nov 21 00:55:19.736 [conn4] terminating, shutdown command received
 m30000| Wed Nov 21 00:55:19.736 dbexit: shutdown called
 m30000| Wed Nov 21 00:55:19.736 [conn4] shutdown: going to close listening sockets...
 m30000| Wed Nov 21 00:55:19.736 [conn4] closing listening socket: 552
 m30000| Wed Nov 21 00:55:19.736 [conn4] closing listening socket: 556
 m30000| Wed Nov 21 00:55:19.736 [conn4] shutdown: going to flush diaglog...
 m30000| Wed Nov 21 00:55:19.736 [conn4] shutdown: going to close sockets...
 m30000| Wed Nov 21 00:55:19.736 [conn4] shutdown: waiting for fs preallocator...
 m30000| Wed Nov 21 00:55:19.736 [conn4] shutdown: lock for final commit...
 m30000| Wed Nov 21 00:55:19.736 [conn4] shutdown: final commit...
 m30999| Wed Nov 21 00:55:19.736 [WriteBackListener-localhost:30000] Socket recv() conn closed? 127.0.0.1:30000
Wed Nov 21 00:55:19.736 DBClientCursor::init call() failed
 m30999| Wed Nov 21 00:55:19.736 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [0] server [127.0.0.1:30000] 
 m30999| Wed Nov 21 00:55:19.736 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed
 m30999| Wed Nov 21 00:55:19.736 [WriteBackListener-localhost:30000] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('50ac6cbdfaf2e30ed017368d') }
 m30000| Wed Nov 21 00:55:19.736 [conn1] end connection 127.0.0.1:57610 (3 connections now open)
 m30000| Wed Nov 21 00:55:19.736 [conn2] end connection 127.0.0.1:57612 (3 connections now open)
 m30999| Wed Nov 21 00:55:19.736 [WriteBackListener-localhost:30000] Detecting bad connection created at 1353477310066840 microSec, clearing pool for localhost:30000
 m30999| Wed Nov 21 00:55:19.736 [WriteBackListener-localhost:30000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('50ac6cbdfaf2e30ed017368d') }
 m30999| Wed Nov 21 00:55:19.736 [WriteBackListener-localhost:30000] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:29000
 m30999| Wed Nov 21 00:55:19.736 [WriteBackListener-localhost:30000] Detecting bad connection created at 0 microSec, clearing pool for localhost:29000
 m30000| Wed Nov 21 00:55:19.736 [conn4] shutdown: closing all files...
 m30000| Wed Nov 21 00:55:19.736 [conn4] closeAllFiles() finished
 m30000| Wed Nov 21 00:55:19.736 [conn4] journalCleanup...
 m30000| Wed Nov 21 00:55:19.736 [conn4] removeJournalFiles
 m30000| Wed Nov 21 00:55:19.736 [conn4] shutdown: removing fs lock...
 m30000| Wed Nov 21 00:55:19.736 dbexit: really exiting now
 m30999| Wed Nov 21 00:55:20.703 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001
 m30999| Wed Nov 21 00:55:20.703 BackgroundJob starting: ConnectBG
 m30002| Wed Nov 21 00:55:20.750 [initandlisten] connection accepted from 127.0.0.1:57621 #5 (5 connections now open)
 m30002| Wed Nov 21 00:55:20.750 [conn5] terminating, shutdown command received
 m30002| Wed Nov 21 00:55:20.750 dbexit: shutdown called
 m30002| Wed Nov 21 00:55:20.750 [conn5] shutdown: going to close listening sockets...
 m30002| Wed Nov 21 00:55:20.750 [conn5] closing listening socket: 488
 m30002| Wed Nov 21 00:55:20.750 [conn5] closing listening socket: 500
 m30002| Wed Nov 21 00:55:20.750 [conn5] shutdown: going to flush diaglog...
 m30002| Wed Nov 21 00:55:20.750 [conn5] shutdown: going to close sockets...
 m30002| Wed Nov 21 00:55:20.750 [conn5] shutdown: waiting for fs preallocator...
 m30002| Wed Nov 21 00:55:20.750 [conn5] shutdown: lock for final commit...
 m30002| Wed Nov 21 00:55:20.750 [conn5] shutdown: final commit...
 m30002| Wed Nov 21 00:55:20.750 [conn5] shutdown: closing all files...
 m30002| Wed Nov 21 00:55:20.750 [conn5] closeAllFiles() finished
 m30002| Wed Nov 21 00:55:20.750 [conn5] journalCleanup...
Wed Nov 21 00:55:20.750 DBClientCursor::init call() failed
 m30999| Wed Nov 21 00:55:20.750 [WriteBackListener-localhost:30002] Socket recv() conn closed? 127.0.0.1:30002
 m30002| Wed Nov 21 00:55:20.750 [conn1] end connection 127.0.0.1:57566 (4 connections now open)
 m30002| Wed Nov 21 00:55:20.750 [conn3] end connection 127.0.0.1:57587 (4 connections now open)
 m30002| Wed Nov 21 00:55:20.750 [conn4] end connection 127.0.0.1:57592 (2 connections now open)
 m30999| Wed Nov 21 00:55:20.750 [WriteBackListener-localhost:30002] SocketException: remote: 127.0.0.1:30002 error: 9001 socket exception [0] server [127.0.0.1:30002] 
 m30999| Wed Nov 21 00:55:20.750 [WriteBackListener-localhost:30002] DBClientCursor::init call() failed
 m30999| Wed Nov 21 00:55:20.750 [WriteBackListener-localhost:30002] User Assertion: 10276:DBClientBase::findN: transport error: localhost:30002 ns: admin.$cmd query: { writebacklisten: ObjectId('50ac6cbdfaf2e30ed017368d') }
 m30999| Wed Nov 21 00:55:20.750 [WriteBackListener-localhost:30002] Detecting bad connection created at 0 microSec, clearing pool for localhost:30002
 m30999| Wed Nov 21 00:55:20.750 [WriteBackListener-localhost:30002] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30002 ns: admin.$cmd query: { writebacklisten: ObjectId('50ac6cbdfaf2e30ed017368d') }
 m30999| Wed Nov 21 00:55:20.750 [WriteBackListener-localhost:30002] creating new connection to:localhost:29000
 m30999| Wed Nov 21 00:55:20.750 BackgroundJob starting: ConnectBG
 m30002| Wed Nov 21 00:55:20.750 [conn5] removeJournalFiles
 m30002| Wed Nov 21 00:55:20.750 [conn5] shutdown: removing fs lock...
 m30002| Wed Nov 21 00:55:20.750 dbexit: really exiting now
 m30999| Wed Nov 21 00:55:21.187 [Balancer] creating new connection to:localhost:29000
 m30999| Wed Nov 21 00:55:21.187 BackgroundJob starting: ConnectBG
 m30998| Wed Nov 21 00:55:21.577 [Balancer] Socket say send() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:29000
 m30998| Wed Nov 21 00:55:21.577 [Balancer] Detecting bad connection created at 0 microSec, clearing pool for localhost:29000
 m30998| Wed Nov 21 00:55:21.577 [Balancer] caught exception while doing balance: socket exception [SEND_ERROR] for 127.0.0.1:29000
 m30998| Wed Nov 21 00:55:21.577 [Balancer] *** End of balancing round
 m30999| Wed Nov 21 00:55:21.717 [WriteBackListener-localhost:30001] WriteBackListener exception : socket exception [CONNECT_ERROR] for localhost:30001
 m30999| Wed Nov 21 00:55:21.717 [WriteBackListener-localhost:30001] creating new connection to:localhost:29000
 m30999| Wed Nov 21 00:55:21.717 BackgroundJob starting: ConnectBG
 m30998| Wed Nov 21 00:55:21.764 [mongosMain] connection accepted from 127.0.0.1:57627 #2 (2 connections now open)
 m30998| Wed Nov 21 00:55:21.764 [conn2] scoped connection to localhost:29000 not being returned to the pool
 m30998| Wed Nov 21 00:55:21.764 [conn2] Request::process begin ns: admin.$cmd msg id: 34 op: 2004 attempt: 0
 m30998| Wed Nov 21 00:55:21.764 [conn2] single query: admin.$cmd  { shutdown: 1, force: 1 }  ntoreturn: 1 options : 0
 m30998| Wed Nov 21 00:55:21.764 [conn2] terminating, shutdown command received
 m30998| Wed Nov 21 00:55:21.764 [conn2] dbexit: shutdown called rc:0 shutdown called
Wed Nov 21 00:55:21.764 Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:30998
Wed Nov 21 00:55:21.764 SocketException: remote: 127.0.0.1:30998 error: 9001 socket exception [1] server [127.0.0.1:30998] 
Wed Nov 21 00:55:21.764 DBClientCursor::init call() failed
 m30999| Wed Nov 21 00:55:21.764 [mongosMain] connection accepted from 127.0.0.1:57628 #2 (2 connections now open)
 m30999| Wed Nov 21 00:55:21.764 [conn2] scoped connection to localhost:29000 not being returned to the pool
 m30999| Wed Nov 21 00:55:21.764 [conn2] Request::process begin ns: admin.$cmd msg id: 35 op: 2004 attempt: 0
 m30999| Wed Nov 21 00:55:21.764 [conn2] single query: admin.$cmd  { shutdown: 1, force: 1 }  ntoreturn: 1 options : 0
 m30999| Wed Nov 21 00:55:21.764 [conn2] terminating, shutdown command received
 m30999| Wed Nov 21 00:55:21.764 [conn2] dbexit: shutdown called rc:0 shutdown called
Wed Nov 21 00:55:21.764 Socket recv() errno:10054 An existing connection was forcibly closed by the remote host. 127.0.0.1:30999
Wed Nov 21 00:55:21.764 SocketException: remote: 127.0.0.1:30999 error: 9001 socket exception [1] server [127.0.0.1:30999] 
Wed Nov 21 00:55:21.764 DBClientCursor::init call() failed



 Comments   
Comment by auto [ 23/Nov/12 ]

Author: Tad Marshall <tad@10gen.com> (2012-11-22T23:59:56Z)

Message: SERVER-7738 Use assert.soon with find instead of sleep in test

Use a method of waiting for documents to be written that is not
subject to race conditions.
Branch: master
https://github.com/mongodb/mongo/commit/32151400c2b4cf93b025723903794f1e2ec26d3c
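
The pattern named in this commit message, sketched as shell-test JavaScript for illustration (the collection handle, document, and port are hypothetical, not taken from the commit):

    // Poll until the write is actually visible instead of sleeping a
    // fixed interval and hoping it has landed by then.
    coll.insert({ _id: -1 });  // hypothetical write routed through mongos
    assert.soon(function() {
        return coll.find({ _id: -1 }).itcount() == 1;  // retry until found
    }, "insert was not visible before timeout");
    MongoRunner.stopMongod(30000);  // only now is it safe to stop the shard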

Comment by auto [ 22/Nov/12 ]

Author: Tad Marshall <tad@10gen.com> (2012-11-22T18:03:51Z)

Message: SERVER-7738 fix test to avoid race

Add a 10 millisecond sleep before each call to MongoRunner.stopMongod()
to reduce the probability that the server will shut down before mongos
has a chance to send the write to it.
Branch: master
https://github.com/mongodb/mongo/commit/0240ea99eb415ea5b380084bb3d45e3a53c475e0
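
A rough sketch of this interim fix (the 10 millisecond figure is from the commit message; the insert and port are assumptions about the test's structure):

    coll.insert({ _id: 1 });        // tiny insert routed through mongos
    sleep(10);                      // give mongos a moment to send the write
                                    // before the shard is told to shut down
    MongoRunner.stopMongod(30001);  // shut down the shard under test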

Comment by Tad Marshall [ 22/Nov/12 ]

The failure is caused by a race condition; there are two mysteries here.

One mystery is why the Windows builds hit it consistently and the Linux builds manage to avoid it. The other mystery is why this didn't happen before.

What happens is that the test fires off two tiny inserts to a mongos after configuring things so that they will go to two different shards, and then shuts down one of the shards. This means mongos has to get the writes off to both shards before one of them shuts down. On Linux it comes very close (the same millisecond, according to one log), but the write wins and we get the GLE behavior the test wants. On Windows the shard receives and processes the shutdown before mongos sends the write to it, so when the GLE is sent to mongos it reports a real write error, not the error in processing the GLE that the test wants to see.
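
A minimal sketch of the racing sequence as described, in shell-test JavaScript (the ports match the log above; the collection handle and documents are illustrative):

    // Two tiny inserts that mongos routes to two different shards.
    coll.insert({ _id: -1 });  // MinKey..0 chunk -> shard0000 (port 30000)
    coll.insert({ _id: 1 });   // 0..MaxKey chunk -> shard0001 (port 30001)

    // Race: if this shutdown is processed before mongos has sent the
    // second write, the write itself fails with a socket error...
    MongoRunner.stopMongod(30001);

    // ...and GLE then reflects a real write error rather than the
    // shard-down condition the test is asserting on.
    printjson(db.getLastErrorObj());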

Adding a short delay before each instance of shutting down a shard seems to make the test pass consistently on Windows. I'm testing with 10 milliseconds, and that seems to be enough. We could raise the number if this is still flaky.

My speculation for why Windows gets the shutdown before the write is that the different methods of shutting down the server take different amounts of time. Perhaps signal handling in Linux adds a slight delay compared with sending a shutdown command.

My speculation for why this is happening now is that some new processing in mongos has made it a tiny bit slower than it was before. The margins were always very slim, and it wouldn't take much additional delay to make this race go the wrong way.

Comment by Tad Marshall [ 21/Nov/12 ]

I tested a SpiderMonkey build for comparison and the failure is identical, so it is not V8-related.

Comment by Tad Marshall [ 21/Nov/12 ]

Reproduces on local machine with 'buildscripts\smoke.py jstests\sharding\gle_error_message.js'.
