2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten] MongoDB starting : pid=9807 port=27999 dbpath=/data/db/sconsTests/ 64-bit host=ip-10-33-141-202
2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten]
2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten]
2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten]
2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
2014-11-26T14:35:54.491-0500 I CONTROL [initandlisten] allocator: tcmalloc
2014-11-26T14:35:54.492-0500 I CONTROL [initandlisten] options: { net: { http: { enabled: true }, port: 27999 }, nopreallocj: true, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/sconsTests/", engine: "wiredTiger" } }
2014-11-26T14:35:54.492-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
2014-11-26T14:35:54.527-0500 I NETWORK [websvr] admin web console waiting for connections on port 28999
2014-11-26T14:35:54.539-0500 I NETWORK [initandlisten] waiting for connections on port 27999
2014-11-26T14:35:55.480-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35504 #1 (1 connection now open)
2014-11-26T14:35:55.480-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35505 #2 (2 connections now open)
2014-11-26T14:35:55.480-0500 I NETWORK [conn1] end connection 127.0.0.1:35504 (0 connections now open)
clean_dbroot: /data/db/sconsTests/
num procs:84
running /data/mongo/mongod --port 27999 --dbpath /data/db/sconsTests/ --setParameter enableTestCommands=1 --httpinterface --storageEngine wiredTiger --nopreallocj
*******************************************
Test : mongos_rs_auth_shard_failure_tolerance.js ...
2014-11-26T14:35:55.480-0500 I NETWORK [conn2] end connection 127.0.0.1:35505 (0 connections now open)
Command : /data/mongo/mongo --port 27999 --authenticationMechanism SCRAM-SHA-1 --writeMode commands --nodb /data/mongo/jstests/sharding/mongos_rs_auth_shard_failure_tolerance.js --eval TestData = new Object();TestData.storageEngine = "wiredTiger";TestData.wiredTigerEngineConfig = "";TestData.wiredTigerCollectionConfig = "";TestData.wiredTigerIndexConfig = "";TestData.testPath = "/data/mongo/jstests/sharding/mongos_rs_auth_shard_failure_tolerance.js";TestData.testFile = "mongos_rs_auth_shard_failure_tolerance.js";TestData.testName = "mongos_rs_auth_shard_failure_tolerance";TestData.setParameters = "";TestData.setParametersMongos = "";TestData.noJournal = false;TestData.noJournalPrealloc = true;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;TestData.authMechanism = "SCRAM-SHA-1";TestData.useSSL = false;TestData.useX509 = false;MongoRunner.dataDir = "/data/db";MongoRunner.dataPath = MongoRunner.dataDir + "/";
Date : Wed Nov 26 14:35:55 2014
MongoDB shell version: 2.8.0-rc2-pre-
/data/db/
Replica set test!
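For readability, the `--eval` preamble in the Command line above can be unpacked one statement per line. The values are exactly as logged; the `MongoRunner` stub and the added `var` keywords are not part of the preamble, they only stand in for state the mongo shell predefines so the snippet runs standalone:

```javascript
// Stub for the object the mongo shell predefines (assumption: only
// dataDir/dataPath matter here); not part of the logged preamble.
var MongoRunner = {};

// The logged --eval preamble, one statement per line, values verbatim.
var TestData = new Object();
TestData.storageEngine = "wiredTiger";
TestData.wiredTigerEngineConfig = "";
TestData.wiredTigerCollectionConfig = "";
TestData.wiredTigerIndexConfig = "";
TestData.testPath = "/data/mongo/jstests/sharding/mongos_rs_auth_shard_failure_tolerance.js";
TestData.testFile = "mongos_rs_auth_shard_failure_tolerance.js";
TestData.testName = "mongos_rs_auth_shard_failure_tolerance";
TestData.setParameters = "";
TestData.setParametersMongos = "";
TestData.noJournal = false;
TestData.noJournalPrealloc = true;
TestData.auth = false;
TestData.keyFile = null;
TestData.keyFileData = null;
TestData.authMechanism = "SCRAM-SHA-1";
TestData.useSSL = false;
TestData.useX509 = false;
MongoRunner.dataDir = "/data/db";
MongoRunner.dataPath = MongoRunner.dataDir + "/";
```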
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
{
    "useHostName" : true,
    "oplogSize" : 40,
    "keyFile" : "jstests/libs/key1",
    "port" : 31100,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "test-rs0",
    "dbpath" : "$set-$node",
    "useHostname" : true,
    "noJournalPrealloc" : undefined,
    "pathOpts" : {
        "testName" : "test",
        "shard" : 0,
        "node" : 0,
        "set" : "test-rs0"
    },
    "verbose" : 1,
    "restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-0'
2014-11-26T14:35:55.527-0500 I - shell: started program (sh9825): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31100 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-0 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger
2014-11-26T14:35:55.527-0500 W NETWORK Failed to connect to 127.0.0.1:31100, reason: errno:111 Connection refused
m31100| 2014-11-26T14:35:55.537-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface,
m31100| 2014-11-26T14:35:55.537-0500 I CONTROL ** enabling http interface
m31100| note: noprealloc may hurt performance in many applications
m31100| 2014-11-26T14:35:55.555-0500 D SHARDING isInRangeTest passed
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten] MongoDB starting : pid=9825 port=31100 dbpath=/data/db/test-rs0-0 64-bit host=ip-10-33-141-202
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten]
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten]
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten]
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten] allocator: tcmalloc
m31100| 2014-11-26T14:35:55.555-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31100 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs0" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs0-0", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } }
m31100| 2014-11-26T14:35:55.555-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
m31100| 2014-11-26T14:35:55.555-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
m31100| 2014-11-26T14:35:55.555-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
m31100| 2014-11-26T14:35:55.576-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31100| 2014-11-26T14:35:55.589-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
m31100| 2014-11-26T14:35:55.590-0500 D STORAGE [initandlisten] done repairDatabases
m31100| 2014-11-26T14:35:55.590-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
m31100| 2014-11-26T14:35:55.590-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot
m31100| 2014-11-26T14:35:55.590-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200
m31100| 2014-11-26T14:35:55.590-0500 D INDEX [initandlisten] checking complete
m31100| 2014-11-26T14:35:55.590-0500 I NETWORK [websvr] admin web console waiting for connections on port 32100
m31100| 2014-11-26T14:35:55.590-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1
m31100| 2014-11-26T14:35:55.590-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0--1911027222389114415 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31100| 2014-11-26T14:35:55.597-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.597-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.597-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.597-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.598-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.598-0500 D STORAGE [initandlisten] create uri: table:index-1--1911027222389114415 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }
m31100| 2014-11-26T14:35:55.604-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.604-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.604-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.604-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.604-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.604-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.604-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
m31100| 2014-11-26T14:35:55.604-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:35:55.604-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
m31100| 2014-11-26T14:35:55.605-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
m31100| 2014-11-26T14:35:55.605-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
m31100| 2014-11-26T14:35:55.605-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
m31100| 2014-11-26T14:35:55.605-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
m31100| 2014-11-26T14:35:55.605-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.605-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2--1911027222389114415 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31100| 2014-11-26T14:35:55.611-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.612-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.612-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.612-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.612-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.612-0500 D STORAGE [initandlisten] create uri: table:index-3--1911027222389114415 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }
m31100| 2014-11-26T14:35:55.618-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.618-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.618-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.618-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.618-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.618-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.618-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
m31100| 2014-11-26T14:35:55.618-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:35:55.618-0500 I NETWORK [initandlisten] waiting for connections on port 31100
m31100| 2014-11-26T14:35:55.728-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47487 #1 (1 connection now open)
[ connection to ip-10-33-141-202:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number
{
    "useHostName" : true,
    "oplogSize" : 40,
    "keyFile" : "jstests/libs/key1",
    "port" : 31101,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "test-rs0",
    "dbpath" : "$set-$node",
    "useHostname" : true,
    "noJournalPrealloc" : undefined,
    "pathOpts" : {
        "testName" : "test",
        "shard" : 0,
        "node" : 1,
        "set" : "test-rs0"
    },
    "verbose" : 1,
    "restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-1'
2014-11-26T14:35:55.731-0500 I - shell: started program (sh9852): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31101 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-1 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger
2014-11-26T14:35:55.732-0500 W NETWORK Failed to connect to 127.0.0.1:31101, reason: errno:111 Connection refused
m31101| 2014-11-26T14:35:55.741-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface,
m31101| 2014-11-26T14:35:55.741-0500 I CONTROL ** enabling http interface
m31101| note: noprealloc may hurt performance in many applications
m31101| 2014-11-26T14:35:55.759-0500 D SHARDING isInRangeTest passed
m31101| 2014-11-26T14:35:55.759-0500 I CONTROL [initandlisten] MongoDB starting : pid=9852 port=31101 dbpath=/data/db/test-rs0-1 64-bit host=ip-10-33-141-202
m31101| 2014-11-26T14:35:55.759-0500 I CONTROL [initandlisten]
m31101| 2014-11-26T14:35:55.759-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
m31101| 2014-11-26T14:35:55.760-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
m31101| 2014-11-26T14:35:55.760-0500 I CONTROL [initandlisten]
m31101| 2014-11-26T14:35:55.760-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
m31101| 2014-11-26T14:35:55.760-0500 I CONTROL [initandlisten]
m31101| 2014-11-26T14:35:55.760-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
m31101| 2014-11-26T14:35:55.760-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
m31101| 2014-11-26T14:35:55.760-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m31101| 2014-11-26T14:35:55.760-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m31101| 2014-11-26T14:35:55.760-0500 I CONTROL [initandlisten] allocator: tcmalloc
m31101| 2014-11-26T14:35:55.760-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31101 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs0" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs0-1", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } }
m31101| 2014-11-26T14:35:55.760-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
m31101| 2014-11-26T14:35:55.760-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
m31101| 2014-11-26T14:35:55.760-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
m31101| 2014-11-26T14:35:55.782-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31101| 2014-11-26T14:35:55.791-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
m31101| 2014-11-26T14:35:55.791-0500 D STORAGE [initandlisten] done repairDatabases
m31101| 2014-11-26T14:35:55.791-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
m31101| 2014-11-26T14:35:55.791-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot
m31101| 2014-11-26T14:35:55.792-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200
m31101| 2014-11-26T14:35:55.792-0500 D INDEX [initandlisten] checking complete
m31101| 2014-11-26T14:35:55.792-0500 I NETWORK [websvr] admin web console waiting for connections on port 32101
m31101| 2014-11-26T14:35:55.792-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1
m31101| 2014-11-26T14:35:55.792-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0-1404722688054298599 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31101| 2014-11-26T14:35:55.799-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.799-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.799-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.799-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.799-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.799-0500 D STORAGE [initandlisten] create uri: table:index-1-1404722688054298599 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }
m31101| 2014-11-26T14:35:55.802-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.802-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.802-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.802-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.802-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.802-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.802-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
m31101| 2014-11-26T14:35:55.802-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:35:55.803-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
m31101| 2014-11-26T14:35:55.803-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
m31101| 2014-11-26T14:35:55.803-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
m31101| 2014-11-26T14:35:55.804-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
m31101| 2014-11-26T14:35:55.804-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.804-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2-1404722688054298599 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31101| 2014-11-26T14:35:55.804-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
m31101| 2014-11-26T14:35:55.811-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.811-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.811-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.811-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.811-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.811-0500 D STORAGE [initandlisten] create uri: table:index-3-1404722688054298599 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }
m31101| 2014-11-26T14:35:55.814-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.814-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.814-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.814-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.814-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.814-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.814-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
m31101| 2014-11-26T14:35:55.814-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:35:55.814-0500 I NETWORK [initandlisten] waiting for connections on port 31101
m31101| 2014-11-26T14:35:55.933-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:36454 #1 (1 connection now open)
[ connection to ip-10-33-141-202:31100, connection to ip-10-33-141-202:31101 ]
{
    "replSetInitiate" : {
        "_id" : "test-rs0",
        "members" : [
            {
                "_id" : 0,
                "host" : "ip-10-33-141-202:31100"
            },
            {
                "_id" : 1,
                "host" : "ip-10-33-141-202:31101"
            }
        ]
    }
}
m31100| 2014-11-26T14:35:55.935-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
m31100| 2014-11-26T14:35:55.935-0500 I REPL [conn1] replSetInitiate admin command received from client
m31100| 2014-11-26T14:35:55.937-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31101| 2014-11-26T14:35:55.937-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:54099 #2 (2 connections now open)
m31100| 2014-11-26T14:35:55.937-0500 D NETWORK [conn1] connected to server ip-10-33-141-202:31101 (10.33.141.202)
m31101| 2014-11-26T14:35:55.939-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D656866447967634B497672704D72374342342B55417045496841785A4D485578) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31101| 2014-11-26T14:35:55.952-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D656866447967634B497672704D72374342342B55417045496841785A4D4855785178312B6F396231796C595068583863466E6862562F5331742B2F4D55...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31101| 2014-11-26T14:35:55.952-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local
m31101| 2014-11-26T14:35:55.952-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31101| 2014-11-26T14:35:55.952-0500 I QUERY [conn2] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31100| 2014-11-26T14:35:55.952-0500 I REPL [conn1] replSet replSetInitiate config object with 2 members parses ok
m31101| 2014-11-26T14:35:55.953-0500 I NETWORK [conn2] end connection 10.33.141.202:54099 (1 connection now open)
m31100| 2014-11-26T14:35:55.953-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31101| 2014-11-26T14:35:55.953-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:54100 #3 (2 connections now open)
m31100| 2014-11-26T14:35:55.953-0500 D NETWORK [ReplExecNetThread-0] connected to server ip-10-33-141-202:31101 (10.33.141.202)
m31101| 2014-11-26T14:35:55.955-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D7561752B6343344C41366576576338583652635570414D6D7244705A50655462) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31101| 2014-11-26T14:35:55.967-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D7561752B6343344C41366576576338583652635570414D6D7244705A5065546239596270306D735458484C73416164706D514E583334526B6353306262...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31101| 2014-11-26T14:35:55.968-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
m31101| 2014-11-26T14:35:55.968-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31101| 2014-11-26T14:35:55.968-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: true } ntoreturn:1 keyUpdates:0 reslen:112 0ms
m31100| 2014-11-26T14:35:55.968-0500 D STORAGE [conn1] stored meta data for local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.968-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-4--1911027222389114415 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31101| 2014-11-26T14:35:55.969-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31100| 2014-11-26T14:35:55.969-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38165 #2 (2 connections now open)
m31101| 2014-11-26T14:35:55.970-0500 D NETWORK [ReplExecNetThread-0] connected to server ip-10-33-141-202:31100 (10.33.141.202)
m31100| 2014-11-26T14:35:55.971-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D47506B59356F626D634A4B436F775048495967636F413343346F777150396B35) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:35:55.972-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.973-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.973-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.973-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.973-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.973-0500 D STORAGE [conn1] create uri: table:index-5--1911027222389114415 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }
m31100| 2014-11-26T14:35:55.977-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.977-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.977-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.977-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.977-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.977-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.977-0500 D STORAGE [conn1] local.system.replset: clearing plan cache - collection info cache reset
m31100| 2014-11-26T14:35:55.977-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:35:55.978-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs0", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31100", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
m31100| 2014-11-26T14:35:55.978-0500 I REPL [ReplicationExecutor] transition to STARTUP2
m31101| 2014-11-26T14:35:55.978-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:102 0ms
m31100| 2014-11-26T14:35:55.978-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31101 is now in state STARTUP
m31100| 2014-11-26T14:35:55.978-0500 I REPL [conn1] ******
m31100| 2014-11-26T14:35:55.979-0500 I REPL [conn1] creating replication oplog of size: 40MB...
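The two-member document passed to `replSetInitiate` earlier names only `_id` and `host` per member, while the "new replica set config in use" line above shows each member expanded with defaults (`arbiterOnly`, `buildIndexes`, `priority`, `votes`, and so on). A minimal sketch of that defaulting step, using a hypothetical `applyMemberDefaults` helper (not MongoDB's actual code path; the defaults are the ones visible in the logged config):

```javascript
// Hypothetical helper: fill in the per-member defaults that the
// ReplicationExecutor reports for a minimal { _id, host } member entry.
function applyMemberDefaults(member) {
  return Object.assign({
    arbiterOnly: false,
    buildIndexes: true,
    hidden: false,
    priority: 1.0,
    tags: {},
    slaveDelay: 0,
    votes: 1,
  }, member); // explicit fields in `member` override the defaults
}

// The document sent via replSetInitiate in the log above.
const submitted = {
  _id: "test-rs0",
  members: [
    { _id: 0, host: "ip-10-33-141-202:31100" },
    { _id: 1, host: "ip-10-33-141-202:31101" },
  ],
};

const expanded = submitted.members.map(applyMemberDefaults);
```

The result matches the expanded config the ReplicationExecutor logs: priority 1.0, one vote per member, no slave delay.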
m31100| 2014-11-26T14:35:55.979-0500 D STORAGE [conn1] stored meta data for local.oplog.rs @ 0:4
m31100| 2014-11-26T14:35:55.979-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-6--1911027222389114415 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u
m31100| 2014-11-26T14:35:55.985-0500 D STORAGE [conn1] looking up metadata for: local.oplog.rs @ 0:4
m31100| 2014-11-26T14:35:55.986-0500 D STORAGE [conn1] WiredTigerKVEngine::flushAllFiles
m31100| 2014-11-26T14:35:55.987-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D47506B59356F626D634A4B436F775048495967636F413343346F777150396B35495342584E4B6A6E5A6D73353239666667306F4F64355751337A6E6666...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:35:55.987-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:35:55.987-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:35:55.987-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: -2, from: "", checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:597 0ms
m31101| 2014-11-26T14:35:55.987-0500 D REPL [ReplicationExecutor] Received new config via heartbeat with version 1
m31101| 2014-11-26T14:35:55.988-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31100| 2014-11-26T14:35:55.988-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38166 #3 (3 connections now open)
m31101| 2014-11-26T14:35:55.988-0500 D NETWORK connected to server ip-10-33-141-202:31100 (10.33.141.202)
m31100| 2014-11-26T14:35:55.990-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D555562615473574B566777484D467055563963794B696958556E2B6F62304643) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:35:56.003-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D555562615473574B566777484D467055563963794B696958556E2B6F62304643464F50616C6941556D6A6C70586F6338746A7A3047483156795957305A...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:35:56.003-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:35:56.003-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:35:56.003-0500 I QUERY [conn3] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31100| 2014-11-26T14:35:56.004-0500 I NETWORK [conn3] end connection 10.33.141.202:38166 (2 connections now open)
m31101| 2014-11-26T14:35:56.004-0500 D STORAGE [WriteReplSetConfig] stored meta data for local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.004-0500 D STORAGE [WriteReplSetConfig] WiredTigerKVEngine::createRecordStore uri: table:collection-4-1404722688054298599 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31101| 2014-11-26T14:35:56.010-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.010-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.010-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.010-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.010-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.010-0500 D STORAGE [WriteReplSetConfig] create uri: table:index-5-1404722688054298599 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }
m31101| 2014-11-26T14:35:56.018-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.018-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.018-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.018-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.018-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.018-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.018-0500 D STORAGE [WriteReplSetConfig] local.system.replset: clearing plan cache - collection info cache reset
m31101| 2014-11-26T14:35:56.018-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:35:56.018-0500 I REPL [WriteReplSetConfig] Starting replication applier threads
m31101| 2014-11-26T14:35:56.018-0500 I REPL [rsSync] replSet warning did not receive a valid config yet, sleeping 5 seconds
m31101| 2014-11-26T14:35:56.019-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs0", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31100", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0,
votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } } m31101| 2014-11-26T14:35:56.019-0500 I REPL [ReplicationExecutor] transition to STARTUP2 m31100| 2014-11-26T14:35:56.019-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31101| 2014-11-26T14:35:56.019-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31100 is now in state STARTUP2 m31100| 2014-11-26T14:35:56.088-0500 I REPL [conn1] ****** m31100| 2014-11-26T14:35:56.089-0500 I REPL [conn1] Starting replication applier threads m31100| 2014-11-26T14:35:56.089-0500 I REPL [ReplicationExecutor] transition to RECOVERING m31100| 2014-11-26T14:35:56.089-0500 I QUERY [conn1] command admin.$cmd command: replSetInitiate { replSetInitiate: { _id: "test-rs0", members: [ { _id: 0.0, host: "ip-10-33-141-202:31100" }, { _id: 1.0, host: "ip-10-33-141-202:31101" } ] } } keyUpdates:0 reslen:37 154ms m31100| 2014-11-26T14:35:56.090-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762b9c:1 0 m31100| 2014-11-26T14:35:56.090-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:56.091-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access m31100| 2014-11-26T14:35:56.091-0500 I REPL [ReplicationExecutor] transition to SECONDARY m31101| 2014-11-26T14:35:56.091-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:56.292-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:56.292-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 
2014-11-26T14:35:56.493-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:56.494-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:56.694-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:56.695-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:56.896-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:56.896-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:57.097-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:57.097-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:57.298-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:57.298-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:57.499-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:57.500-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:57.700-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:57.701-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:57.902-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 
1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:57.902-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:57.979-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31100| 2014-11-26T14:35:57.979-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31101 is now in state STARTUP2 m31100| 2014-11-26T14:35:57.979-0500 I REPL [ReplicationExecutor] Standing for election m31101| 2014-11-26T14:35:57.979-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs0", opTime: new Date(6086099895452696577), who: "ip-10-33-141-202:31100", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:257 0ms m31100| 2014-11-26T14:35:57.979-0500 I REPL [ReplicationExecutor] not electing self, ip-10-33-141-202:31101 would veto with 'errmsg: "I don't think ip-10-33-141-202:31100 is electable because the member is not currently a secondary; member is more than 10 seconds behind the most up-t..."' m31100| 2014-11-26T14:35:57.979-0500 I REPL [ReplicationExecutor] not electing self, we are not freshest m31100| 2014-11-26T14:35:58.020-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31101| 2014-11-26T14:35:58.020-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31100 is now in state SECONDARY m31100| 2014-11-26T14:35:58.103-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:58.103-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:58.304-0500 I QUERY [conn1] command admin.$cmd command: 
isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:58.304-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:58.505-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:58.505-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:58.706-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:58.708-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:58.908-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:58.909-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:59.110-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:59.110-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:59.311-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:59.311-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:59.512-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:59.512-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:59.714-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:59.714-0500 I 
QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:35:59.915-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:59.916-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:35:59.979-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31100| 2014-11-26T14:35:59.980-0500 I REPL [ReplicationExecutor] Standing for election m31101| 2014-11-26T14:35:59.980-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs0", opTime: new Date(6086099895452696577), who: "ip-10-33-141-202:31100", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:70 0ms m31100| 2014-11-26T14:35:59.980-0500 I REPL [ReplicationExecutor] replSet info electSelf m31101| 2014-11-26T14:35:59.980-0500 I REPL [ReplicationExecutor] replSetElect voting yea for ip-10-33-141-202:31100 (0) m31101| 2014-11-26T14:35:59.980-0500 I QUERY [conn3] command admin.$cmd command: replSetElect { replSetElect: 1, set: "test-rs0", who: "ip-10-33-141-202:31100", whoid: 0, cfgver: 1, round: ObjectId('54762b9feff30b03c3b1ccc9') } ntoreturn:1 keyUpdates:0 reslen:66 0ms m31100| 2014-11-26T14:35:59.980-0500 D REPL [ReplicationExecutor] replSet elect res: { vote: 1, round: ObjectId('54762b9feff30b03c3b1ccc9'), ok: 1.0 } m31100| 2014-11-26T14:35:59.980-0500 I REPL [ReplicationExecutor] replSet election succeeded, assuming primary role m31100| 2014-11-26T14:35:59.980-0500 I REPL [ReplicationExecutor] transition to PRIMARY m31100| 2014-11-26T14:36:00.020-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } 
ntoreturn:1 keyUpdates:0 reslen:142 0ms m31101| 2014-11-26T14:36:00.020-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31100 is now in state PRIMARY m31100| 2014-11-26T14:36:00.092-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted m31100| 2014-11-26T14:36:00.116-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:36:00.117-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31100| 2014-11-26T14:36:00.117-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:36:00.118-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:36:00.118-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31100| 2014-11-26T14:36:00.319-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:36:00.319-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:36:00.320-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31100| 2014-11-26T14:36:00.520-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:36:00.521-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:36:00.521-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31100| 2014-11-26T14:36:00.722-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:36:00.722-0500 I QUERY [conn1] command admin.$cmd command: 
isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:36:00.723-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31100| 2014-11-26T14:36:00.924-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:36:00.924-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:36:00.924-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:36:01.019-0500 I REPL [rsSync] ****** m31101| 2014-11-26T14:36:01.019-0500 I REPL [rsSync] creating replication oplog of size: 40MB... m31101| 2014-11-26T14:36:01.019-0500 D STORAGE [rsSync] stored meta data for local.oplog.rs @ 0:4 m31101| 2014-11-26T14:36:01.019-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-6-1404722688054298599 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u m31101| 2014-11-26T14:36:01.023-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:36:01.023-0500 D STORAGE [rsSync] WiredTigerKVEngine::flushAllFiles m31101| 2014-11-26T14:36:01.125-0500 I REPL [rsSync] ****** m31101| 2014-11-26T14:36:01.125-0500 I REPL [rsSync] initial sync pending m31101| 2014-11-26T14:36:01.125-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31100| 2014-11-26T14:36:01.125-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:36:01.125-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:36:01.125-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:36:01.126-0500 D STORAGE [rsSync] looking up metadata for: 
local.oplog.rs @ 0:4 m31101| 2014-11-26T14:36:01.126-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:36:01.126-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:36:01.126-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:36:01.126-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:36:01.126-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:36:01.126-0500 D STORAGE [rsSync] local.oplog.rs: clearing plan cache - collection info cache reset m31101| 2014-11-26T14:36:01.126-0500 I REPL [ReplicationExecutor] syncing from: ip-10-33-141-202:31100 m31101| 2014-11-26T14:36:01.126-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:36:01.127-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31100| 2014-11-26T14:36:01.127-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38167 #4 (3 connections now open) m31101| 2014-11-26T14:36:01.127-0500 D NETWORK [rsSync] connected to server ip-10-33-141-202:31100 (10.33.141.202) m31100| 2014-11-26T14:36:01.128-0500 I QUERY [conn4] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D776973786758374D75576470584D3358566961457038544E4346427454493350) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31100| 2014-11-26T14:36:01.141-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D776973786758374D75576470584D3358566961457038544E4346427454493350736A6D6C652B7535526761367869477A6C6E5A554262633674634C5133...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:36:01.141-0500 I ACCESS [conn4] Successfully authenticated as principal 
__system on local m31100| 2014-11-26T14:36:01.142-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31100| 2014-11-26T14:36:01.142-0500 I QUERY [conn4] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31100| 2014-11-26T14:36:01.143-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31101| 2014-11-26T14:36:01.143-0500 D STORAGE [rsSync] stored meta data for local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.143-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-7-1404722688054298599 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31101| 2014-11-26T14:36:01.146-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.146-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.146-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.146-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.146-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.146-0500 D STORAGE [rsSync] create uri: table:index-8-1404722688054298599 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.minvalid" } m31101| 2014-11-26T14:36:01.152-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.152-0500 D STORAGE [rsSync] looking up 
metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.152-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.152-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.152-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.152-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.152-0500 D STORAGE [rsSync] local.replset.minvalid: clearing plan cache - collection info cache reset m31101| 2014-11-26T14:36:01.152-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:36:01.153-0500 I REPL [rsSync] initial sync drop all databases m31101| 2014-11-26T14:36:01.153-0500 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1 m31101| 2014-11-26T14:36:01.153-0500 I REPL [rsSync] initial sync clone all databases m31100| 2014-11-26T14:36:01.153-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1 m31100| 2014-11-26T14:36:01.153-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1 m31100| 2014-11-26T14:36:01.153-0500 D STORAGE [conn4] looking up metadata for: local.oplog.rs @ 0:4 m31100| 2014-11-26T14:36:01.153-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2 m31100| 2014-11-26T14:36:01.153-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2 m31100| 2014-11-26T14:36:01.154-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3 m31100| 2014-11-26T14:36:01.154-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3 m31100| 2014-11-26T14:36:01.154-0500 I QUERY [conn4] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 1ms m31101| 2014-11-26T14:36:01.154-0500 I REPL [rsSync] initial sync data copy, starting syncup m31101| 2014-11-26T14:36:01.154-0500 I REPL [rsSync] 
oplog sync 1 of 3 m31100| 2014-11-26T14:36:01.154-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31101| 2014-11-26T14:36:01.154-0500 I REPL [rsSync] oplog sync 2 of 3 m31100| 2014-11-26T14:36:01.154-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31101| 2014-11-26T14:36:01.154-0500 I REPL [rsSync] initial sync building indexes m31101| 2014-11-26T14:36:01.154-0500 I REPL [rsSync] oplog sync 3 of 3 m31100| 2014-11-26T14:36:01.156-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31101| 2014-11-26T14:36:01.156-0500 I QUERY [rsSync] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms m31101| 2014-11-26T14:36:01.156-0500 I REPL [rsSync] initial sync finishing up m31101| 2014-11-26T14:36:01.156-0500 I REPL [rsSync] replSet set minValid=54762b9c:1 m31101| 2014-11-26T14:36:01.156-0500 I REPL [rsSync] initial sync done m31101| 2014-11-26T14:36:01.159-0500 I REPL [ReplicationExecutor] transition to RECOVERING m31100| 2014-11-26T14:36:01.159-0500 I NETWORK [conn4] end connection 10.33.141.202:38167 (2 connections now open) m31101| 2014-11-26T14:36:01.161-0500 I REPL [ReplicationExecutor] transition to SECONDARY m31100| 2014-11-26T14:36:01.327-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:36:01.328-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:36:01.328-0500 I QUERY [conn1] command 
admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms Replica set test! ReplSetTest Starting Set ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31200, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 1, "node" : 0, "set" : "test-rs1" }, "verbose" : 1, "restart" : undefined } ReplSetTest Starting.... Resetting db path '/data/db/test-rs1-0' 2014-11-26T14:36:01.330-0500 I - shell: started program (sh10024): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31200 --noprealloc --smallfiles --rest --replSet test-rs1 --dbpath /data/db/test-rs1-0 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger 2014-11-26T14:36:01.331-0500 W NETWORK Failed to connect to 127.0.0.1:31200, reason: errno:111 Connection refused m31200| 2014-11-26T14:36:01.340-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface, m31200| 2014-11-26T14:36:01.340-0500 I CONTROL ** enabling http interface m31200| note: noprealloc may hurt performance in many applications m31200| 2014-11-26T14:36:01.359-0500 D SHARDING isInRangeTest passed m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] MongoDB starting : pid=10024 port=31200 dbpath=/data/db/test-rs1-0 64-bit host=ip-10-33-141-202 m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never' m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. 
rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files. m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre- m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2 m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013 m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49 m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] allocator: tcmalloc m31200| 2014-11-26T14:36:01.359-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31200 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs1" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs1-0", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } } m31200| 2014-11-26T14:36:01.359-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger m31200| 2014-11-26T14:36:01.359-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200 m31200| 2014-11-26T14:36:01.359-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB), m31200| 2014-11-26T14:36:01.383-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: 
type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31200| 2014-11-26T14:36:01.395-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #) m31200| 2014-11-26T14:36:01.395-0500 D STORAGE [initandlisten] done repairDatabases m31200| 2014-11-26T14:36:01.396-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms m31200| 2014-11-26T14:36:01.396-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot m31200| 2014-11-26T14:36:01.396-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200 m31200| 2014-11-26T14:36:01.396-0500 D INDEX [initandlisten] checking complete m31200| 2014-11-26T14:36:01.396-0500 I NETWORK [websvr] admin web console waiting for connections on port 32200 m31200| 2014-11-26T14:36:01.396-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1 m31200| 2014-11-26T14:36:01.396-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0-5148480814435254834 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31200| 2014-11-26T14:36:01.403-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.404-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.404-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.404-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.404-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.404-0500 D STORAGE [initandlisten] create uri: table:index-1-5148480814435254834 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : 
"_id_", "ns" : "local.me" } m31200| 2014-11-26T14:36:01.408-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.408-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.408-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.408-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.408-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.408-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.409-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset m31200| 2014-11-26T14:36:01.409-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:01.409-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset m31200| 2014-11-26T14:36:01.409-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor m31200| 2014-11-26T14:36:01.409-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor m31200| 2014-11-26T14:36:01.409-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner m31200| 2014-11-26T14:36:01.410-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 } m31200| 2014-11-26T14:36:01.410-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2 m31200| 2014-11-26T14:36:01.410-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2-5148480814435254834 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31200| 2014-11-26T14:36:01.416-0500 D STORAGE [initandlisten] looking up metadata for: 
local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.416-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.416-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.416-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.416-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.416-0500 D STORAGE [initandlisten] create uri: table:index-3-5148480814435254834 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }
m31200| 2014-11-26T14:36:01.421-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.421-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.421-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.421-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.421-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.421-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.421-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
m31200| 2014-11-26T14:36:01.421-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:36:01.421-0500 I NETWORK [initandlisten] waiting for connections on port 31200
m31200| 2014-11-26T14:36:01.531-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50778 #1 (1 connection now open)
[ connection to ip-10-33-141-202:31200 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [
31200, 31201 ] 31201 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31201, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 1, "node" : 1, "set" : "test-rs1" }, "verbose" : 1, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/test-rs1-1'
2014-11-26T14:36:01.534-0500 I - shell: started program (sh10051): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31201 --noprealloc --smallfiles --rest --replSet test-rs1 --dbpath /data/db/test-rs1-1 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger
2014-11-26T14:36:01.535-0500 W NETWORK Failed to connect to 127.0.0.1:31201, reason: errno:111 Connection refused
m31201| 2014-11-26T14:36:01.544-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface,
m31201| 2014-11-26T14:36:01.544-0500 I CONTROL ** enabling http interface
m31201| note: noprealloc may hurt performance in many applications
m31201| 2014-11-26T14:36:01.562-0500 D SHARDING isInRangeTest passed
m31201| 2014-11-26T14:36:01.562-0500 I CONTROL [initandlisten] MongoDB starting : pid=10051 port=31201 dbpath=/data/db/test-rs1-1 64-bit host=ip-10-33-141-202
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten]
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten]
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten]
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten] allocator: tcmalloc
m31201| 2014-11-26T14:36:01.563-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31201 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs1" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs1-1", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } }
m31201| 2014-11-26T14:36:01.563-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
m31201| 2014-11-26T14:36:01.563-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
m31201| 2014-11-26T14:36:01.563-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
m31201| 2014-11-26T14:36:01.587-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31201| 2014-11-26T14:36:01.601-0500 D STORAGE [initandlisten] enter repairDatabases (to check
pdfile version #)
m31201| 2014-11-26T14:36:01.601-0500 D STORAGE [initandlisten] done repairDatabases
m31201| 2014-11-26T14:36:01.601-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
m31201| 2014-11-26T14:36:01.601-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot
m31201| 2014-11-26T14:36:01.601-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200
m31201| 2014-11-26T14:36:01.601-0500 D INDEX [initandlisten] checking complete
m31201| 2014-11-26T14:36:01.601-0500 I NETWORK [websvr] admin web console waiting for connections on port 32201
m31201| 2014-11-26T14:36:01.601-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1
m31201| 2014-11-26T14:36:01.601-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0--7855397372784430281 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31201| 2014-11-26T14:36:01.608-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.608-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.608-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.608-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.608-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.608-0500 D STORAGE [initandlisten] create uri: table:index-1--7855397372784430281 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }
m31201| 2014-11-26T14:36:01.615-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.615-0500 D STORAGE [initandlisten]
looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.615-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.615-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.615-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.615-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.615-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.615-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
m31201| 2014-11-26T14:36:01.615-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31201| 2014-11-26T14:36:01.615-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
m31201| 2014-11-26T14:36:01.615-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
m31201| 2014-11-26T14:36:01.616-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
m31201| 2014-11-26T14:36:01.616-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
m31201| 2014-11-26T14:36:01.616-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.616-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
m31201| 2014-11-26T14:36:01.616-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2--7855397372784430281 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31201| 2014-11-26T14:36:01.620-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.620-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.620-0500 D STORAGE [initandlisten]
looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.620-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.621-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.621-0500 D STORAGE [initandlisten] create uri: table:index-3--7855397372784430281 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }
m31201| 2014-11-26T14:36:01.626-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.626-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.626-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.626-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.626-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.626-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.626-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
m31201| 2014-11-26T14:36:01.626-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31201| 2014-11-26T14:36:01.626-0500 I NETWORK [initandlisten] waiting for connections on port 31201
m31201| 2014-11-26T14:36:01.735-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:44115 #1 (1 connection now open)
[ connection to ip-10-33-141-202:31200, connection to ip-10-33-141-202:31201 ]
{ "replSetInitiate" : { "_id" : "test-rs1", "members" : [ { "_id" : 0, "host" : "ip-10-33-141-202:31200" }, { "_id" : 1, "host" : "ip-10-33-141-202:31201" } ] } }
m31200| 2014-11-26T14:36:01.736-0500 I ACCESS
[conn1] note: no users configured in admin.system.users, allowing localhost access
m31200| 2014-11-26T14:36:01.736-0500 I REPL [conn1] replSetInitiate admin command received from client
m31200| 2014-11-26T14:36:01.738-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31201| 2014-11-26T14:36:01.738-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:53845 #2 (2 connections now open)
m31200| 2014-11-26T14:36:01.738-0500 D NETWORK [conn1] connected to server ip-10-33-141-202:31201 (10.33.141.202)
m31201| 2014-11-26T14:36:01.739-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D686B4F6D77776E69724E615445574E546974736234565475696A746A41344D57) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31201| 2014-11-26T14:36:01.752-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D686B4F6D77776E69724E615445574E546974736234565475696A746A41344D576A494E344C447A7A547637717A57534D446445432B523243476C2F622F...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31201| 2014-11-26T14:36:01.752-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local
m31201| 2014-11-26T14:36:01.752-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31201| 2014-11-26T14:36:01.753-0500 I QUERY [conn2] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31200| 2014-11-26T14:36:01.753-0500 I REPL [conn1] replSet replSetInitiate config object with 2 members parses ok
m31201| 2014-11-26T14:36:01.753-0500 I NETWORK [conn2] end connection 10.33.141.202:53845 (1 connection now open)
m31200| 2014-11-26T14:36:01.753-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31201| 2014-11-26T14:36:01.754-0500 I NETWORK [initandlisten]
connection accepted from 10.33.141.202:53846 #3 (2 connections now open)
m31200| 2014-11-26T14:36:01.754-0500 D NETWORK [ReplExecNetThread-0] connected to server ip-10-33-141-202:31201 (10.33.141.202)
m31201| 2014-11-26T14:36:01.755-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6F5254587456534249696E666964697032386E6C6D536F6559665A71716A504B) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31201| 2014-11-26T14:36:01.768-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D6F5254587456534249696E666964697032386E6C6D536F6559665A71716A504B3954696C47686F77774A674562556C4E6E422F5376496C4C4956773436...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31201| 2014-11-26T14:36:01.768-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
m31201| 2014-11-26T14:36:01.768-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31201| 2014-11-26T14:36:01.769-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: true } ntoreturn:1 keyUpdates:0 reslen:112 0ms
m31200| 2014-11-26T14:36:01.769-0500 D STORAGE [conn1] stored meta data for local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.769-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-4-5148480814435254834 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31201| 2014-11-26T14:36:01.769-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31200| 2014-11-26T14:36:01.770-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40631 #2 (2 connections now open)
m31201| 2014-11-26T14:36:01.770-0500
D NETWORK [ReplExecNetThread-0] connected to server ip-10-33-141-202:31200 (10.33.141.202)
m31200| 2014-11-26T14:36:01.771-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4D3443686C642F374B4A76704D56366E33494461586C6F37574B78494B696659) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31200| 2014-11-26T14:36:01.784-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.784-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.785-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.785-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.785-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.785-0500 D STORAGE [conn1] create uri: table:index-5-5148480814435254834 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }
m31200| 2014-11-26T14:36:01.785-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4D3443686C642F374B4A76704D56366E33494461586C6F37574B78494B69665947624F772F73792F563143417972766A4E5339485464666B76356B6B2F...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31200| 2014-11-26T14:36:01.785-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:36:01.785-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31200| 2014-11-26T14:36:01.786-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: -2, from:
"", checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:102 0ms
m31200| 2014-11-26T14:36:01.791-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.791-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.791-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.791-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.791-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.791-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.791-0500 D STORAGE [conn1] local.system.replset: clearing plan cache - collection info cache reset
m31200| 2014-11-26T14:36:01.791-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:36:01.792-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs1", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31200", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31201", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
m31200| 2014-11-26T14:36:01.792-0500 I REPL [ReplicationExecutor] transition to STARTUP2
m31201| 2014-11-26T14:36:01.792-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:102 0ms
m31200| 2014-11-26T14:36:01.792-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31201 is now in state STARTUP
m31200|
2014-11-26T14:36:01.792-0500 I REPL [conn1] ******
m31200| 2014-11-26T14:36:01.792-0500 I REPL [conn1] creating replication oplog of size: 40MB...
m31200| 2014-11-26T14:36:01.792-0500 D STORAGE [conn1] stored meta data for local.oplog.rs @ 0:4
m31200| 2014-11-26T14:36:01.792-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-6-5148480814435254834 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u
m31200| 2014-11-26T14:36:01.799-0500 D STORAGE [conn1] looking up metadata for: local.oplog.rs @ 0:4
m31200| 2014-11-26T14:36:01.799-0500 D STORAGE [conn1] WiredTigerKVEngine::flushAllFiles
m31200| 2014-11-26T14:36:01.905-0500 I REPL [conn1] ******
m31200| 2014-11-26T14:36:01.905-0500 I REPL [conn1] Starting replication applier threads
m31200| 2014-11-26T14:36:01.906-0500 I REPL [ReplicationExecutor] transition to RECOVERING
m31200| 2014-11-26T14:36:01.906-0500 I QUERY [conn1] command admin.$cmd command: replSetInitiate { replSetInitiate: { _id: "test-rs1", members: [ { _id: 0.0, host: "ip-10-33-141-202:31200" }, { _id: 1.0, host: "ip-10-33-141-202:31201" } ] } } keyUpdates:0 reslen:37 169ms
m31200| 2014-11-26T14:36:01.906-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762ba1:1 0
m31200| 2014-11-26T14:36:01.907-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:01.907-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
m31201| 2014-11-26T14:36:01.907-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
m31200| 2014-11-26T14:36:01.908-0500 I REPL [ReplicationExecutor] transition to SECONDARY
m31101| 2014-11-26T14:36:01.980-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from:
"ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:158 0ms
m31100| 2014-11-26T14:36:01.981-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31101 is now in state SECONDARY
m31101| 2014-11-26T14:36:02.019-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762b9c:1 0
m31101| 2014-11-26T14:36:02.020-0500 I REPL [ReplicationExecutor] could not find member to sync from
m31100| 2014-11-26T14:36:02.020-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31200| 2014-11-26T14:36:02.108-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:02.109-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
m31200| 2014-11-26T14:36:02.310-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:02.310-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
m31200| 2014-11-26T14:36:02.511-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:02.512-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
m31200| 2014-11-26T14:36:02.712-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:02.713-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
m31200| 2014-11-26T14:36:02.913-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:02.914-0500 I QUERY [conn1] command admin.$cmd command: isMaster {
ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
m31200| 2014-11-26T14:36:03.115-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:03.115-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
m31200| 2014-11-26T14:36:03.316-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:03.316-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
m31200| 2014-11-26T14:36:03.517-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:03.518-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
m31200| 2014-11-26T14:36:03.718-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:03.719-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
m31200| 2014-11-26T14:36:03.786-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: -2, from: "", checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:597 0ms
m31201| 2014-11-26T14:36:03.786-0500 D REPL [ReplicationExecutor] Received new config via heartbeat with version 1
m31201| 2014-11-26T14:36:03.787-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31200| 2014-11-26T14:36:03.787-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40632 #3 (3 connections now open)
m31201| 2014-11-26T14:36:03.787-0500 D NETWORK connected to server ip-10-33-141-202:31200 (10.33.141.202)
m31200| 2014-11-26T14:36:03.789-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0,
6E2C2C6E3D5F5F73797374656D2C723D35506E4C7236354A6F64644548346B6B64784D65357465436D59704442333048) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31201| 2014-11-26T14:36:03.792-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:102 0ms
m31200| 2014-11-26T14:36:03.792-0500 I REPL [ReplicationExecutor] Standing for election
m31201| 2014-11-26T14:36:03.793-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs1", opTime: new Date(6086099916927533057), who: "ip-10-33-141-202:31200", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:154 0ms
m31200| 2014-11-26T14:36:03.793-0500 I REPL [ReplicationExecutor] not electing self, we could not contact enough voting members
m31200| 2014-11-26T14:36:03.802-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D35506E4C7236354A6F64644548346B6B64784D65357465436D597044423330486C547A505A7A7A41506E3035565445384A4D797446576E71665651782F...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31200| 2014-11-26T14:36:03.802-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:36:03.803-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31200| 2014-11-26T14:36:03.803-0500 I QUERY [conn3] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31200| 2014-11-26T14:36:03.803-0500 I NETWORK [conn3] end connection 10.33.141.202:40632 (2 connections now open)
m31201| 2014-11-26T14:36:03.803-0500 D STORAGE [WriteReplSetConfig] stored meta data for local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.803-0500 D STORAGE [WriteReplSetConfig] WiredTigerKVEngine::createRecordStore uri:
table:collection-4--7855397372784430281 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31201| 2014-11-26T14:36:03.807-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.807-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.807-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.807-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.807-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.807-0500 D STORAGE [WriteReplSetConfig] create uri: table:index-5--7855397372784430281 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }
m31201| 2014-11-26T14:36:03.814-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.814-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.814-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.814-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.814-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.814-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.814-0500 D STORAGE [WriteReplSetConfig] local.system.replset: clearing plan cache - collection info cache reset
m31201| 2014-11-26T14:36:03.814-0500 D STORAGE [WriteReplSetConfig] looking up metadata for:
local.system.replset @ 0:3
m31201| 2014-11-26T14:36:03.814-0500 I REPL [WriteReplSetConfig] Starting replication applier threads
m31201| 2014-11-26T14:36:03.814-0500 I REPL [rsSync] replSet warning did not receive a valid config yet, sleeping 5 seconds
m31201| 2014-11-26T14:36:03.815-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs1", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31200", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31201", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
m31201| 2014-11-26T14:36:03.815-0500 I REPL [ReplicationExecutor] transition to STARTUP2
m31200| 2014-11-26T14:36:03.815-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31201| 2014-11-26T14:36:03.815-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31200 is now in state SECONDARY
m31200| 2014-11-26T14:36:03.920-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:03.920-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31101| 2014-11-26T14:36:03.982-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31100| 2014-11-26T14:36:04.020-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty:
false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31200| 2014-11-26T14:36:04.121-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:04.123-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31200| 2014-11-26T14:36:04.323-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:04.324-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31200| 2014-11-26T14:36:04.525-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:04.525-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31200| 2014-11-26T14:36:04.726-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:04.726-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31200| 2014-11-26T14:36:04.927-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:04.928-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31200| 2014-11-26T14:36:05.128-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:05.129-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31200| 2014-11-26T14:36:05.330-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:05.330-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31200| 2014-11-26T14:36:05.531-0500 I QUERY
[conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31201| 2014-11-26T14:36:05.531-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31200| 2014-11-26T14:36:05.732-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31201| 2014-11-26T14:36:05.732-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31201| 2014-11-26T14:36:05.792-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31200| 2014-11-26T14:36:05.792-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31201 is now in state STARTUP2 m31200| 2014-11-26T14:36:05.792-0500 I REPL [ReplicationExecutor] Standing for election m31201| 2014-11-26T14:36:05.793-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs1", opTime: new Date(6086099916927533057), who: "ip-10-33-141-202:31200", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:70 0ms m31200| 2014-11-26T14:36:05.793-0500 I REPL [ReplicationExecutor] replSet info electSelf m31201| 2014-11-26T14:36:05.793-0500 I REPL [ReplicationExecutor] replSetElect voting yea for ip-10-33-141-202:31200 (0) m31200| 2014-11-26T14:36:05.793-0500 D REPL [ReplicationExecutor] replSet elect res: { vote: 1, round: ObjectId('54762ba567f6f077e3000831'), ok: 1.0 } m31200| 2014-11-26T14:36:05.793-0500 I REPL [ReplicationExecutor] replSet election succeeded, assuming primary role m31200| 2014-11-26T14:36:05.793-0500 I REPL [ReplicationExecutor] transition to PRIMARY m31201| 2014-11-26T14:36:05.793-0500 I QUERY [conn3] command admin.$cmd command: replSetElect { replSetElect: 1, set: "test-rs1", who: "ip-10-33-141-202:31200", whoid: 0, cfgver: 1, round: 
ObjectId('54762ba567f6f077e3000831') } ntoreturn:1 keyUpdates:0 reslen:66 0ms m31200| 2014-11-26T14:36:05.815-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31201| 2014-11-26T14:36:05.815-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31200 is now in state PRIMARY m31200| 2014-11-26T14:36:05.908-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted m31200| 2014-11-26T14:36:05.933-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:05.934-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:05.934-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:05.934-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:05.935-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:36:05.982-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31100| 2014-11-26T14:36:06.020-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31200| 2014-11-26T14:36:06.135-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:06.136-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 
reslen:377 0ms m31201| 2014-11-26T14:36:06.136-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:06.337-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:06.337-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:06.338-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:06.539-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:06.539-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:06.539-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:06.740-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:06.741-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:06.741-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:06.942-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:06.942-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:06.943-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:07.143-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:07.144-0500 I QUERY [conn1] command admin.$cmd command: 
isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:07.145-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:07.346-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:07.346-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:07.346-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:07.547-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:07.551-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:07.553-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:07.754-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:07.754-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:07.755-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:07.792-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31200| 2014-11-26T14:36:07.816-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31200| 2014-11-26T14:36:07.956-0500 I QUERY [conn1] command admin.$cmd command: isMaster { 
ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:07.956-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:07.956-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:36:07.982-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31100| 2014-11-26T14:36:08.021-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31200| 2014-11-26T14:36:08.157-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:08.158-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:08.158-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:08.359-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:08.359-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:08.360-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:08.560-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:08.561-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:08.561-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 
1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:08.762-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:08.762-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:08.763-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:08.815-0500 I REPL [rsSync] ****** m31201| 2014-11-26T14:36:08.815-0500 I REPL [rsSync] creating replication oplog of size: 40MB... m31201| 2014-11-26T14:36:08.815-0500 D STORAGE [rsSync] stored meta data for local.oplog.rs @ 0:4 m31201| 2014-11-26T14:36:08.815-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-6--7855397372784430281 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u m31201| 2014-11-26T14:36:08.821-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:36:08.821-0500 D STORAGE [rsSync] WiredTigerKVEngine::flushAllFiles m31201| 2014-11-26T14:36:08.925-0500 I REPL [rsSync] ****** m31201| 2014-11-26T14:36:08.925-0500 I REPL [rsSync] initial sync pending m31201| 2014-11-26T14:36:08.925-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:36:08.925-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:36:08.925-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:36:08.925-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:36:08.926-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:36:08.926-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:36:08.926-0500 D STORAGE [rsSync] looking up metadata for: 
local.oplog.rs @ 0:4 m31201| 2014-11-26T14:36:08.926-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:36:08.926-0500 D STORAGE [rsSync] local.oplog.rs: clearing plan cache - collection info cache reset m31201| 2014-11-26T14:36:08.926-0500 I REPL [ReplicationExecutor] syncing from: ip-10-33-141-202:31200 m31201| 2014-11-26T14:36:08.927-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31200| 2014-11-26T14:36:08.927-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40633 #4 (3 connections now open) m31201| 2014-11-26T14:36:08.927-0500 D NETWORK [rsSync] connected to server ip-10-33-141-202:31200 (10.33.141.202) m31200| 2014-11-26T14:36:08.928-0500 I QUERY [conn4] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D677138697064514F78557839685736646D666C797A30394B6C47634432326453) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31200| 2014-11-26T14:36:08.941-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D677138697064514F78557839685736646D666C797A30394B6C476344323264535259374A68774F577048557446784D5434766556692B38795A68454C33...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31200| 2014-11-26T14:36:08.941-0500 I ACCESS [conn4] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:36:08.942-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31200| 2014-11-26T14:36:08.942-0500 I QUERY [conn4] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31200| 2014-11-26T14:36:08.943-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 
nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31201| 2014-11-26T14:36:08.943-0500 D STORAGE [rsSync] stored meta data for local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.943-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-7--7855397372784430281 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31201| 2014-11-26T14:36:08.947-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.947-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.947-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.947-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.947-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.947-0500 D STORAGE [rsSync] create uri: table:index-8--7855397372784430281 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.minvalid" } m31201| 2014-11-26T14:36:08.954-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.954-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.954-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.954-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.954-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.954-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.954-0500 D STORAGE [rsSync] local.replset.minvalid: clearing 
plan cache - collection info cache reset m31201| 2014-11-26T14:36:08.954-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:36:08.954-0500 I REPL [rsSync] initial sync drop all databases m31201| 2014-11-26T14:36:08.954-0500 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1 m31201| 2014-11-26T14:36:08.954-0500 I REPL [rsSync] initial sync clone all databases m31200| 2014-11-26T14:36:08.955-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:08.955-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:08.955-0500 D STORAGE [conn4] looking up metadata for: local.oplog.rs @ 0:4 m31200| 2014-11-26T14:36:08.955-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2 m31200| 2014-11-26T14:36:08.955-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2 m31200| 2014-11-26T14:36:08.955-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3 m31200| 2014-11-26T14:36:08.955-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3 m31200| 2014-11-26T14:36:08.955-0500 I QUERY [conn4] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 1ms m31201| 2014-11-26T14:36:08.955-0500 I REPL [rsSync] initial sync data copy, starting syncup m31201| 2014-11-26T14:36:08.956-0500 I REPL [rsSync] oplog sync 1 of 3 m31200| 2014-11-26T14:36:08.956-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31201| 2014-11-26T14:36:08.956-0500 I REPL [rsSync] oplog sync 2 of 3 m31200| 2014-11-26T14:36:08.956-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31201| 
2014-11-26T14:36:08.956-0500 I REPL [rsSync] initial sync building indexes m31201| 2014-11-26T14:36:08.956-0500 I REPL [rsSync] oplog sync 3 of 3 m31200| 2014-11-26T14:36:08.957-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31201| 2014-11-26T14:36:08.958-0500 I QUERY [rsSync] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms m31201| 2014-11-26T14:36:08.958-0500 I REPL [rsSync] initial sync finishing up m31201| 2014-11-26T14:36:08.958-0500 I REPL [rsSync] replSet set minValid=54762ba1:1 m31201| 2014-11-26T14:36:08.958-0500 I REPL [rsSync] initial sync done m31200| 2014-11-26T14:36:08.961-0500 I NETWORK [conn4] end connection 10.33.141.202:40633 (2 connections now open) m31201| 2014-11-26T14:36:08.961-0500 I REPL [ReplicationExecutor] transition to RECOVERING m31201| 2014-11-26T14:36:08.962-0500 I REPL [ReplicationExecutor] transition to SECONDARY m31200| 2014-11-26T14:36:08.963-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:08.964-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:08.964-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms Replica set test! 
ReplSetTest Starting Set ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31300, 31301 ] 31300 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31300, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs2", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 2, "node" : 0, "set" : "test-rs2" }, "verbose" : 1, "restart" : undefined } ReplSetTest Starting.... Resetting db path '/data/db/test-rs2-0' 2014-11-26T14:36:08.967-0500 I - shell: started program (sh10224): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31300 --noprealloc --smallfiles --rest --replSet test-rs2 --dbpath /data/db/test-rs2-0 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger 2014-11-26T14:36:08.967-0500 W NETWORK Failed to connect to 127.0.0.1:31300, reason: errno:111 Connection refused m31300| 2014-11-26T14:36:08.977-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface, m31300| 2014-11-26T14:36:08.977-0500 I CONTROL ** enabling http interface m31300| note: noprealloc may hurt performance in many applications m31300| 2014-11-26T14:36:08.995-0500 D SHARDING isInRangeTest passed m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] MongoDB starting : pid=10224 port=31300 dbpath=/data/db/test-rs2-0 64-bit host=ip-10-33-141-202 m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never' m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files. 
m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre- m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2 m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013 m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49 m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] allocator: tcmalloc m31300| 2014-11-26T14:36:08.995-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31300 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs2" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs2-0", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } } m31300| 2014-11-26T14:36:08.995-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger m31300| 2014-11-26T14:36:08.995-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200 m31300| 2014-11-26T14:36:08.996-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB), m31300| 2014-11-26T14:36:09.022-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31300| 2014-11-26T14:36:09.034-0500 D STORAGE [initandlisten] enter repairDatabases (to check 
pdfile version #) m31300| 2014-11-26T14:36:09.034-0500 D STORAGE [initandlisten] done repairDatabases m31300| 2014-11-26T14:36:09.034-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms m31300| 2014-11-26T14:36:09.034-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot m31300| 2014-11-26T14:36:09.034-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200 m31300| 2014-11-26T14:36:09.034-0500 D INDEX [initandlisten] checking complete m31300| 2014-11-26T14:36:09.034-0500 I NETWORK [websvr] admin web console waiting for connections on port 32300 m31300| 2014-11-26T14:36:09.035-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1 m31300| 2014-11-26T14:36:09.035-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0-4921718955984408552 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31300| 2014-11-26T14:36:09.042-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.042-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.042-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.042-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.042-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.042-0500 D STORAGE [initandlisten] create uri: table:index-1-4921718955984408552 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" } m31300| 2014-11-26T14:36:09.048-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.048-0500 D STORAGE [initandlisten] looking 
up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.048-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.048-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.048-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.048-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.048-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset m31300| 2014-11-26T14:36:09.048-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:09.049-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset m31300| 2014-11-26T14:36:09.049-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor m31300| 2014-11-26T14:36:09.049-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor m31300| 2014-11-26T14:36:09.049-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 } m31300| 2014-11-26T14:36:09.049-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner m31300| 2014-11-26T14:36:09.049-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.049-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2-4921718955984408552 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31300| 2014-11-26T14:36:09.055-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.055-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.056-0500 D STORAGE [initandlisten] looking up 
metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.056-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.056-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.056-0500 D STORAGE [initandlisten] create uri: table:index-3-4921718955984408552 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" } m31300| 2014-11-26T14:36:09.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.062-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset m31300| 2014-11-26T14:36:09.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:09.062-0500 I NETWORK [initandlisten] waiting for connections on port 31300 m31300| 2014-11-26T14:36:09.168-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51038 #1 (1 connection now open) [ connection to ip-10-33-141-202:31300 ] ReplSetTest n is : 1 ReplSetTest n: 1 ports: [ 31300, 31301 ] 31301 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31301, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs2", "dbpath" : 
"$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 2, "node" : 1, "set" : "test-rs2" }, "verbose" : 1, "restart" : undefined } ReplSetTest Starting.... Resetting db path '/data/db/test-rs2-1' 2014-11-26T14:36:09.171-0500 I - shell: started program (sh10251): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31301 --noprealloc --smallfiles --rest --replSet test-rs2 --dbpath /data/db/test-rs2-1 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger 2014-11-26T14:36:09.171-0500 W NETWORK Failed to connect to 127.0.0.1:31301, reason: errno:111 Connection refused m31301| 2014-11-26T14:36:09.181-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface, m31301| 2014-11-26T14:36:09.181-0500 I CONTROL ** enabling http interface m31301| note: noprealloc may hurt performance in many applications m31301| 2014-11-26T14:36:09.199-0500 D SHARDING isInRangeTest passed m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] MongoDB starting : pid=10251 port=31301 dbpath=/data/db/test-rs2-1 64-bit host=ip-10-33-141-202 m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never' m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files. 
m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten]
m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] allocator: tcmalloc
m31301| 2014-11-26T14:36:09.199-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31301 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs2" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs2-1", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } }
m31301| 2014-11-26T14:36:09.199-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
m31301| 2014-11-26T14:36:09.199-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
m31301| 2014-11-26T14:36:09.200-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
m31301| 2014-11-26T14:36:09.224-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31301| 2014-11-26T14:36:09.235-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
m31301| 2014-11-26T14:36:09.235-0500 D STORAGE [initandlisten] done repairDatabases
m31301| 2014-11-26T14:36:09.236-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
m31301| 2014-11-26T14:36:09.236-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot
m31301| 2014-11-26T14:36:09.236-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200
m31301| 2014-11-26T14:36:09.236-0500 D INDEX [initandlisten] checking complete
m31301| 2014-11-26T14:36:09.236-0500 I NETWORK [websvr] admin web console waiting for connections on port 32301
m31301| 2014-11-26T14:36:09.236-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1
m31301| 2014-11-26T14:36:09.236-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0--6168639182710429406 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31301| 2014-11-26T14:36:09.242-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.242-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.242-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.242-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.242-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.242-0500 D STORAGE [initandlisten] create uri: table:index-1--6168639182710429406 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }
m31301| 2014-11-26T14:36:09.248-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.248-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.248-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.248-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.248-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.248-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.248-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
m31301| 2014-11-26T14:36:09.248-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:36:09.249-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
m31301| 2014-11-26T14:36:09.249-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
m31301| 2014-11-26T14:36:09.249-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
m31301| 2014-11-26T14:36:09.249-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
m31301| 2014-11-26T14:36:09.249-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
m31301| 2014-11-26T14:36:09.249-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.249-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2--6168639182710429406 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31301| 2014-11-26T14:36:09.254-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.254-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.254-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.254-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.254-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.254-0500 D STORAGE [initandlisten] create uri: table:index-3--6168639182710429406 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }
m31301| 2014-11-26T14:36:09.260-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.260-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.260-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.260-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.260-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.260-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.260-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
m31301| 2014-11-26T14:36:09.260-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:36:09.261-0500 I NETWORK [initandlisten] waiting for connections on port 31301
m31301| 2014-11-26T14:36:09.372-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:49994 #1 (1 connection now open)
[ connection to ip-10-33-141-202:31300, connection to ip-10-33-141-202:31301 ]
{ "replSetInitiate" : { "_id" : "test-rs2", "members" : [ { "_id" : 0, "host" : "ip-10-33-141-202:31300" }, { "_id" : 1, "host" : "ip-10-33-141-202:31301" } ] } }
m31300| 2014-11-26T14:36:09.373-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
m31300| 2014-11-26T14:36:09.373-0500 I REPL [conn1] replSetInitiate admin command received from client
m31300| 2014-11-26T14:36:09.374-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31301| 2014-11-26T14:36:09.374-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41092 #2 (2 connections now open)
m31300| 2014-11-26T14:36:09.374-0500 D NETWORK [conn1] connected to server ip-10-33-141-202:31301 (10.33.141.202)
m31301| 2014-11-26T14:36:09.376-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D5250416E333437755153795068516D543635704531796A2B5A65433861534830) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31301| 2014-11-26T14:36:09.389-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D5250416E333437755153795068516D543635704531796A2B5A6543386153483042794F654735584E6E395035714D4F2F61566F4F7742326D5A72537470...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31301| 2014-11-26T14:36:09.389-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local
m31301| 2014-11-26T14:36:09.389-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31301| 2014-11-26T14:36:09.389-0500 I QUERY [conn2] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31300| 2014-11-26T14:36:09.389-0500 I REPL [conn1] replSet replSetInitiate config object with 2 members parses ok
m31301| 2014-11-26T14:36:09.390-0500 I NETWORK [conn2] end connection 10.33.141.202:41092 (1 connection now open)
m31300| 2014-11-26T14:36:09.390-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31301| 2014-11-26T14:36:09.390-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41093 #3 (2 connections now open)
m31300| 2014-11-26T14:36:09.390-0500 D NETWORK [ReplExecNetThread-7] connected to server ip-10-33-141-202:31301 (10.33.141.202)
m31301| 2014-11-26T14:36:09.392-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4646426D306A4C75797273596152364B71564244505975723337514B38553156) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31301| 2014-11-26T14:36:09.405-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4646426D306A4C75797273596152364B71564244505975723337514B385531566A617032745371724D72464D584B53455344522B567439644762317836...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31301| 2014-11-26T14:36:09.405-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
m31301| 2014-11-26T14:36:09.405-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31301| 2014-11-26T14:36:09.406-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: true } ntoreturn:1 keyUpdates:0 reslen:112 0ms
m31300| 2014-11-26T14:36:09.406-0500 D STORAGE [conn1] stored meta data for local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.406-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31300| 2014-11-26T14:36:09.406-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60721 #2 (2 connections now open)
m31300| 2014-11-26T14:36:09.406-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-4-4921718955984408552 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31301| 2014-11-26T14:36:09.406-0500 D NETWORK [ReplExecNetThread-0] connected to server ip-10-33-141-202:31300 (10.33.141.202)
m31300| 2014-11-26T14:36:09.408-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D67397656676C4267527877776B726241577A4F685858414765516152566B7979) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31300| 2014-11-26T14:36:09.409-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.409-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.409-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.409-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.409-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.409-0500 D STORAGE [conn1] create uri: table:index-5-4921718955984408552 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }
m31300| 2014-11-26T14:36:09.415-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.415-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.415-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.415-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.415-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.415-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.415-0500 D STORAGE [conn1] local.system.replset: clearing plan cache - collection info cache reset
m31300| 2014-11-26T14:36:09.415-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:09.416-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs2", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31300", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31301", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
m31300| 2014-11-26T14:36:09.416-0500 I REPL [ReplicationExecutor] transition to STARTUP2
m31300| 2014-11-26T14:36:09.416-0500 I REPL [conn1] ******
m31300| 2014-11-26T14:36:09.416-0500 I REPL [conn1] creating replication oplog of size: 40MB...
m31300| 2014-11-26T14:36:09.416-0500 D STORAGE [conn1] stored meta data for local.oplog.rs @ 0:4
m31301| 2014-11-26T14:36:09.416-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:102 0ms
m31300| 2014-11-26T14:36:09.416-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-6-4921718955984408552 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u
m31300| 2014-11-26T14:36:09.417-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31301 is now in state STARTUP
m31300| 2014-11-26T14:36:09.421-0500 D STORAGE [conn1] looking up metadata for: local.oplog.rs @ 0:4
m31300| 2014-11-26T14:36:09.421-0500 D STORAGE [conn1] WiredTigerKVEngine::flushAllFiles
m31300| 2014-11-26T14:36:09.421-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D67397656676C4267527877776B726241577A4F685858414765516152566B79797769376C39432F666A537A54654F2F6F692B6433443732613734545469...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31300| 2014-11-26T14:36:09.422-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local
m31300| 2014-11-26T14:36:09.422-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31300| 2014-11-26T14:36:09.422-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: -2, from: "", checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:597 0ms
m31301| 2014-11-26T14:36:09.422-0500 D REPL [ReplicationExecutor] Received new config via heartbeat with version 1
m31301| 2014-11-26T14:36:09.423-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31300| 2014-11-26T14:36:09.423-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60722 #3 (3 connections now open)
m31301| 2014-11-26T14:36:09.423-0500 D NETWORK connected to server ip-10-33-141-202:31300 (10.33.141.202)
m31300| 2014-11-26T14:36:09.425-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4461436E7561454D674342306766364B41464A306C6A616550382F446F4C7648) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31300| 2014-11-26T14:36:09.438-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4461436E7561454D674342306766364B41464A306C6A616550382F446F4C76486855636E72497671496A61744D4E5748473266637471466A7351625943...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31300| 2014-11-26T14:36:09.438-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
m31300| 2014-11-26T14:36:09.438-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31300| 2014-11-26T14:36:09.438-0500 I QUERY [conn3] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31300| 2014-11-26T14:36:09.439-0500 I NETWORK [conn3] end connection 10.33.141.202:60722 (2 connections now open)
m31301| 2014-11-26T14:36:09.439-0500 D STORAGE [WriteReplSetConfig] stored meta data for local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.439-0500 D STORAGE [WriteReplSetConfig] WiredTigerKVEngine::createRecordStore uri: table:collection-4--6168639182710429406 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31301| 2014-11-26T14:36:09.445-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.445-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.445-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.445-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.445-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.445-0500 D STORAGE [WriteReplSetConfig] create uri: table:index-5--6168639182710429406 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }
m31301| 2014-11-26T14:36:09.453-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.453-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.453-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.453-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.453-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.453-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.453-0500 D STORAGE [WriteReplSetConfig] local.system.replset: clearing plan cache - collection info cache reset
m31301| 2014-11-26T14:36:09.453-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31301| 2014-11-26T14:36:09.454-0500 I REPL [WriteReplSetConfig] Starting replication applier threads
m31301| 2014-11-26T14:36:09.454-0500 I REPL [rsSync] replSet warning did not receive a valid config yet, sleeping 5 seconds
m31301| 2014-11-26T14:36:09.454-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs2", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31300", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31301", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
m31301| 2014-11-26T14:36:09.454-0500 I REPL [ReplicationExecutor] transition to STARTUP2
m31300| 2014-11-26T14:36:09.454-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31301| 2014-11-26T14:36:09.454-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31300 is now in state STARTUP2
m31300| 2014-11-26T14:36:09.524-0500 I REPL [conn1] ******
m31300| 2014-11-26T14:36:09.524-0500 I REPL [conn1] Starting replication applier threads
m31300| 2014-11-26T14:36:09.524-0500 I REPL [ReplicationExecutor] transition to RECOVERING
m31300| 2014-11-26T14:36:09.524-0500 I QUERY [conn1] command admin.$cmd command: replSetInitiate { replSetInitiate: { _id: "test-rs2", members: [ { _id: 0.0, host: "ip-10-33-141-202:31300" }, { _id: 1.0, host: "ip-10-33-141-202:31301" } ] } } keyUpdates:0 reslen:37 151ms
m31300| 2014-11-26T14:36:09.525-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762ba9:1 0
m31300| 2014-11-26T14:36:09.525-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:09.525-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
m31301| 2014-11-26T14:36:09.526-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:09.526-0500 I REPL [ReplicationExecutor] transition to SECONDARY
m31300| 2014-11-26T14:36:09.726-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:09.727-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:09.793-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:158 0ms
m31200| 2014-11-26T14:36:09.793-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31201 is now in state SECONDARY
m31201| 2014-11-26T14:36:09.815-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762ba1:1 0
m31201| 2014-11-26T14:36:09.815-0500 I REPL [ReplicationExecutor] could not find member to sync from
m31200| 2014-11-26T14:36:09.816-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31300| 2014-11-26T14:36:09.928-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:09.928-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31101| 2014-11-26T14:36:09.983-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31100| 2014-11-26T14:36:10.021-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31300| 2014-11-26T14:36:10.129-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:10.129-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:10.330-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:10.330-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:10.531-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:10.532-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:10.732-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:10.733-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:10.933-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:10.934-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:11.135-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:11.135-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:11.336-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:11.336-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:11.417-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31300| 2014-11-26T14:36:11.417-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31301 is now in state STARTUP2
m31300| 2014-11-26T14:36:11.417-0500 I REPL [ReplicationExecutor] Standing for election
m31301| 2014-11-26T14:36:11.418-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs2", opTime: new Date(6086099951287271425), who: "ip-10-33-141-202:31300", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:257 0ms
m31300| 2014-11-26T14:36:11.418-0500 I REPL [ReplicationExecutor] not electing self, ip-10-33-141-202:31301 would veto with 'errmsg: "I don't think ip-10-33-141-202:31300 is electable because the member is not currently a secondary; member is more than 10 seconds behind the most up-t..."'
m31300| 2014-11-26T14:36:11.418-0500 I REPL [ReplicationExecutor] not electing self, we are not freshest
m31300| 2014-11-26T14:36:11.454-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31301| 2014-11-26T14:36:11.454-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31300 is now in state SECONDARY
m31300| 2014-11-26T14:36:11.537-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:11.537-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:11.738-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:11.739-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31201| 2014-11-26T14:36:11.794-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31200| 2014-11-26T14:36:11.816-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31300| 2014-11-26T14:36:11.939-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:11.940-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31101| 2014-11-26T14:36:11.983-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31100| 2014-11-26T14:36:12.021-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31300| 2014-11-26T14:36:12.141-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:12.141-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:12.342-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:12.343-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:12.543-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:12.544-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:12.744-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:12.745-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:12.946-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:12.946-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:13.147-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:13.147-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31300| 2014-11-26T14:36:13.348-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:13.348-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
m31301| 2014-11-26T14:36:13.417-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31300| 2014-11-26T14:36:13.417-0500 I REPL [ReplicationExecutor] Standing for election
m31301| 2014-11-26T14:36:13.418-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs2", opTime: new Date(6086099951287271425), who: "ip-10-33-141-202:31300", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:70 0ms
m31300| 2014-11-26T14:36:13.418-0500 I REPL [ReplicationExecutor] replSet info electSelf
m31301| 2014-11-26T14:36:13.418-0500 I REPL [ReplicationExecutor] replSetElect voting yea for ip-10-33-141-202:31300 (0)
m31301| 2014-11-26T14:36:13.418-0500 I QUERY [conn3] command admin.$cmd command: replSetElect { replSetElect: 1, set: "test-rs2", who: "ip-10-33-141-202:31300", whoid: 0, cfgver: 1, round: ObjectId('54762bad999274ffdc4c2850') } ntoreturn:1 keyUpdates:0 reslen:66 0ms
m31300| 2014-11-26T14:36:13.418-0500 D REPL [ReplicationExecutor] replSet elect res: { vote: 1, round: ObjectId('54762bad999274ffdc4c2850'), ok: 1.0 }
m31300| 2014-11-26T14:36:13.418-0500 I REPL [ReplicationExecutor] replSet election succeeded, assuming primary role
m31300| 2014-11-26T14:36:13.418-0500 I REPL [ReplicationExecutor] transition to PRIMARY
m31300| 2014-11-26T14:36:13.454-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31301| 2014-11-26T14:36:13.454-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31300 is now in state PRIMARY
m31300| 2014-11-26T14:36:13.527-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted
m31300| 2014-11-26T14:36:13.549-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:13.550-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:36:13.550-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:13.550-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:36:13.551-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:36:13.751-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:13.752-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:36:13.752-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31201| 2014-11-26T14:36:13.794-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31200| 2014-11-26T14:36:13.816-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31300| 2014-11-26T14:36:13.953-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:13.953-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:36:13.954-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31101| 2014-11-26T14:36:13.983-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31100| 2014-11-26T14:36:14.022-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31300| 2014-11-26T14:36:14.155-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:14.155-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:36:14.155-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:36:14.356-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:14.357-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:36:14.357-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:36:14.454-0500 I REPL [rsSync] ******
m31301| 2014-11-26T14:36:14.454-0500 I REPL [rsSync] creating replication oplog of size: 40MB...
m31301| 2014-11-26T14:36:14.454-0500 D STORAGE [rsSync] stored meta data for local.oplog.rs @ 0:4
m31301| 2014-11-26T14:36:14.454-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-6--6168639182710429406 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u
m31301| 2014-11-26T14:36:14.458-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:36:14.458-0500 D STORAGE [rsSync] WiredTigerKVEngine::flushAllFiles
m31300| 2014-11-26T14:36:14.558-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:14.560-0500 I REPL [rsSync] ******
m31301| 2014-11-26T14:36:14.561-0500 I REPL [rsSync] initial sync pending
m31301| 2014-11-26T14:36:14.561-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:36:14.561-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:36:14.561-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:36:14.561-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:36:14.561-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:36:14.561-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:36:14.561-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:36:14.561-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:36:14.561-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:36:14.561-0500 D STORAGE [rsSync] local.oplog.rs: clearing plan cache - collection info cache reset
m31301| 2014-11-26T14:36:14.561-0500 I REPL [ReplicationExecutor] syncing from: ip-10-33-141-202:31300
m31301| 2014-11-26T14:36:14.561-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:36:14.562-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31300| 2014-11-26T14:36:14.562-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60723 #4 (3 connections now open)
m31301| 2014-11-26T14:36:14.562-0500 D NETWORK [rsSync] connected to server ip-10-33-141-202:31300 (10.33.141.202)
m31300| 2014-11-26T14:36:14.563-0500 I QUERY [conn4] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D765446513754377363447855424554566C5630796879745938472F32364A3148) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31300| 2014-11-26T14:36:14.576-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D765446513754377363447855424554566C5630796879745938472F32364A31486C6A314E5A314B79426173717A46657638743153792F70656C74507477...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31300| 2014-11-26T14:36:14.576-0500 I ACCESS [conn4] Successfully authenticated as principal __system on local
m31300| 2014-11-26T14:36:14.577-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31300| 2014-11-26T14:36:14.577-0500 I QUERY [conn4] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31300| 2014-11-26T14:36:14.578-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31301| 2014-11-26T14:36:14.578-0500 D STORAGE [rsSync] stored meta data for local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.579-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-7--6168639182710429406 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31301| 2014-11-26T14:36:14.582-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.582-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.582-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.582-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.583-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.583-0500 D STORAGE [rsSync] create uri: table:index-8--6168639182710429406 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.minvalid" }
m31301| 2014-11-26T14:36:14.589-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.589-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.589-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.589-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.589-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.589-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.589-0500 D STORAGE [rsSync] local.replset.minvalid: clearing plan cache - collection info cache reset
m31301| 2014-11-26T14:36:14.589-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:36:14.589-0500 I REPL [rsSync] initial sync drop all databases
m31301| 2014-11-26T14:36:14.589-0500 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1
m31301| 2014-11-26T14:36:14.589-0500 I REPL [rsSync] initial sync clone all databases
m31300| 2014-11-26T14:36:14.590-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:36:14.590-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:36:14.590-0500 D STORAGE [conn4] looking up metadata for: local.oplog.rs @ 0:4
m31300| 2014-11-26T14:36:14.590-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:36:14.590-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:36:14.590-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:14.590-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:36:14.590-0500 I QUERY [conn4] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 0ms
m31301| 2014-11-26T14:36:14.591-0500 I REPL [rsSync] initial sync data copy, starting syncup
m31301| 2014-11-26T14:36:14.591-0500 I REPL [rsSync] oplog sync 1 of 3
m31300| 2014-11-26T14:36:14.591-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31301| 2014-11-26T14:36:14.591-0500 I REPL [rsSync] oplog sync 2 of 3
m31300| 2014-11-26T14:36:14.591-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31301| 2014-11-26T14:36:14.591-0500 I REPL [rsSync] initial sync building indexes
m31301| 2014-11-26T14:36:14.591-0500 I REPL [rsSync] oplog sync 3 of 3
m31300| 2014-11-26T14:36:14.593-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31301| 2014-11-26T14:36:14.593-0500 I QUERY [rsSync] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
m31301| 2014-11-26T14:36:14.593-0500 I REPL [rsSync] initial sync finishing up
m31301| 2014-11-26T14:36:14.593-0500 I REPL [rsSync] replSet set minValid=54762ba9:1
m31301| 2014-11-26T14:36:14.593-0500 I REPL [rsSync] initial sync done
m31301| 2014-11-26T14:36:14.596-0500 I REPL [ReplicationExecutor] transition to RECOVERING
m31300| 2014-11-26T14:36:14.596-0500 I NETWORK [conn4] end connection 10.33.141.202:60723 (2 connections now open)
m31301| 2014-11-26T14:36:14.597-0500 I REPL [ReplicationExecutor] transition to SECONDARY
m31300| 2014-11-26T14:36:14.762-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:14.762-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:36:14.763-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31100| 2014-11-26T14:36:14.763-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31101| 2014-11-26T14:36:14.763-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31100| 2014-11-26T14:36:14.765-0500 I QUERY [conn1] command admin.$cmd command: isMaster { isMaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31100| 2014-11-26T14:36:14.767-0500 I ACCESS [conn1] Unauthorized not authorized on admin to execute command { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762baec9726aeedd20c958') } ], ordered: true }
m31100| 2014-11-26T14:36:14.767-0500 I QUERY [conn1] command admin.$cmd command: isMaster { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762baec9726aeedd20c958') } ], ordered: true } keyUpdates:0 reslen:205 0ms
m31100| 2014-11-26T14:36:14.770-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6162304A7A4C56464D4859784C537A68646639663671447172346531794B6F59) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:36:14.784-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D6162304A7A4C56464D4859784C537A68646639663671447172346531794B6F594A793970716A4A6D356A2F635A4530696A2B4559557670532F5158764A...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:36:14.784-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:36:14.784-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31101| 2014-11-26T14:36:14.786-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D615053417233426A413851654A38766C46447139353750682F546E3048504B43) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31101| 2014-11-26T14:36:14.799-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D615053417233426A413851654A38766C46447139353750682F546E3048504B435867425A4D4F342F5851563048554131475456617A7074574F6363344A...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31101| 2014-11-26T14:36:14.799-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31101| 2014-11-26T14:36:14.799-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:36:14.800-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31101| 2014-11-26T14:36:14.800-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31100| 2014-11-26T14:36:14.801-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: starting: timestamp for primary, ip-10-33-141-202:31100, is Timestamp(1417030556, 1)
m31100| 2014-11-26T14:36:14.802-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
ReplSetTest awaitReplication: checking secondaries against timestamp Timestamp(1417030556, 1)
m31101| 2014-11-26T14:36:14.802-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
m31101| 2014-11-26T14:36:14.803-0500 I QUERY [conn1] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1.0 } keyUpdates:0 reslen:563 0ms
ReplSetTest awaitReplication: checking secondary #1: ip-10-33-141-202:31101
m31101| 2014-11-26T14:36:14.803-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31101| 2014-11-26T14:36:14.803-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: secondary #1, ip-10-33-141-202:31101, is synced
ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp Timestamp(1417030556, 1)
m31100| 2014-11-26T14:36:14.804-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31101| 2014-11-26T14:36:14.804-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31100| 2014-11-26T14:36:14.804-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31101| 2014-11-26T14:36:14.805-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31100| 2014-11-26T14:36:14.805-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31101| 2014-11-26T14:36:14.805-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31101| 2014-11-26T14:36:14.806-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
2014-11-26T14:36:14.806-0500 I NETWORK starting new replica set monitor for replica set test-rs0 with seeds ip-10-33-141-202:31100,ip-10-33-141-202:31101
2014-11-26T14:36:14.806-0500 I NETWORK [ReplicaSetMonitorWatcher] starting
m31100| 2014-11-26T14:36:14.807-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38186 #5 (3 connections now open)
m31100| 2014-11-26T14:36:14.807-0500 I QUERY [conn5] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31200| 2014-11-26T14:36:14.808-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31201| 2014-11-26T14:36:14.808-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31200| 2014-11-26T14:36:14.809-0500 I QUERY [conn1] command admin.$cmd command: isMaster { isMaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31200| 2014-11-26T14:36:14.809-0500 I ACCESS [conn1] Unauthorized not authorized on admin to execute command { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762baec9726aeedd20c959') } ], ordered: true }
m31200| 2014-11-26T14:36:14.809-0500 I QUERY [conn1] command admin.$cmd command: isMaster { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762baec9726aeedd20c959') } ], ordered: true } keyUpdates:0 reslen:205 0ms
m31200| 2014-11-26T14:36:14.811-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6F4831626C34554D51394437794E41614A772F5162746C724645784F6A507149) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31200| 2014-11-26T14:36:14.824-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D6F4831626C34554D51394437794E41614A772F5162746C724645784F6A50714973596C42326A734D7961722F356B334B43626B704749434B7857755959...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31200| 2014-11-26T14:36:14.824-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:36:14.824-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31201| 2014-11-26T14:36:14.826-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D3762717346584D3263676C71497075337974412B33384775513351726A737844) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31201| 2014-11-26T14:36:14.839-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D3762717346584D3263676C71497075337974412B33384775513351726A7378447374664C564658654B6F7A492F43416D2F4745436A78705671664B3478...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31201| 2014-11-26T14:36:14.839-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31201| 2014-11-26T14:36:14.839-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31200| 2014-11-26T14:36:14.840-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31201| 2014-11-26T14:36:14.840-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31200| 2014-11-26T14:36:14.841-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: starting: timestamp for primary, ip-10-33-141-202:31200, is Timestamp(1417030561, 1)
m31200| 2014-11-26T14:36:14.841-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
ReplSetTest awaitReplication: checking secondaries against timestamp Timestamp(1417030561, 1)
m31201| 2014-11-26T14:36:14.842-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
m31201| 2014-11-26T14:36:14.842-0500 I QUERY [conn1] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1.0 } keyUpdates:0 reslen:615 0ms
ReplSetTest awaitReplication: checking secondary #1: ip-10-33-141-202:31201
m31201| 2014-11-26T14:36:14.842-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31201| 2014-11-26T14:36:14.843-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: secondary #1, ip-10-33-141-202:31201, is synced
ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp Timestamp(1417030561, 1)
m31200| 2014-11-26T14:36:14.843-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31201| 2014-11-26T14:36:14.843-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31200| 2014-11-26T14:36:14.843-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31201| 2014-11-26T14:36:14.844-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31200| 2014-11-26T14:36:14.844-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31201| 2014-11-26T14:36:14.844-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31201| 2014-11-26T14:36:14.845-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
2014-11-26T14:36:14.845-0500 I NETWORK starting new replica set monitor for replica set test-rs1 with seeds ip-10-33-141-202:31200,ip-10-33-141-202:31201
m31200| 2014-11-26T14:36:14.845-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40644 #5 (3 connections now open)
m31200| 2014-11-26T14:36:14.846-0500 I QUERY [conn5] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31300| 2014-11-26T14:36:14.846-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:14.846-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:36:14.847-0500 I QUERY [conn1] command admin.$cmd command: isMaster { isMaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31300| 2014-11-26T14:36:14.847-0500 I ACCESS [conn1] Unauthorized not authorized on admin to execute command { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762baec9726aeedd20c95a') } ], ordered: true }
m31300| 2014-11-26T14:36:14.847-0500 I QUERY [conn1] command admin.$cmd command: isMaster { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762baec9726aeedd20c95a') } ], ordered: true } keyUpdates:0 reslen:205 0ms
m31300| 2014-11-26T14:36:14.850-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D675A503833322F737539477637484E6D7237683231642B6C487A36344A30306C) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31300| 2014-11-26T14:36:14.863-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D675A503833322F737539477637484E6D7237683231642B6C487A36344A30306C6B765342394277417339752B4D3364756D4A77464E5853666874796C46...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31300| 2014-11-26T14:36:14.863-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31300| 2014-11-26T14:36:14.863-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31301| 2014-11-26T14:36:14.865-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D46515A6761793638774279502B6C646641537951326F634769715579596D4C43) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31301| 2014-11-26T14:36:14.878-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D46515A6761793638774279502B6C646641537951326F634769715579596D4C434B564E79766E7270744553596364332B4B425A694E504364783558646C...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31301| 2014-11-26T14:36:14.878-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31301| 2014-11-26T14:36:14.878-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31300| 2014-11-26T14:36:14.879-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:14.879-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:36:14.880-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: starting: timestamp for primary, ip-10-33-141-202:31300, is Timestamp(1417030569, 1)
m31300| 2014-11-26T14:36:14.880-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
ReplSetTest awaitReplication: checking secondaries against timestamp Timestamp(1417030569, 1)
m31301| 2014-11-26T14:36:14.881-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
m31301| 2014-11-26T14:36:14.881-0500 I QUERY [conn1] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1.0 } keyUpdates:0 reslen:639 0ms
ReplSetTest awaitReplication: checking secondary #1: ip-10-33-141-202:31301
m31301| 2014-11-26T14:36:14.881-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31301| 2014-11-26T14:36:14.882-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: secondary #1, ip-10-33-141-202:31301, is synced
ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp Timestamp(1417030569, 1)
m31300| 2014-11-26T14:36:14.882-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31301| 2014-11-26T14:36:14.882-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31300| 2014-11-26T14:36:14.882-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:14.883-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:36:14.883-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:14.883-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:36:14.884-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
2014-11-26T14:36:14.884-0500 I NETWORK starting new replica set monitor for replica set test-rs2 with seeds ip-10-33-141-202:31300,ip-10-33-141-202:31301
m31300| 2014-11-26T14:36:14.884-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60726 #5 (3 connections now open)
m31300| 2014-11-26T14:36:14.885-0500 I QUERY [conn5] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
Resetting db path '/data/db/test-config0'
2014-11-26T14:36:14.888-0500 I - shell: started program (sh10430): /data/mongo/mongod --port 29000 --dbpath /data/db/test-config0 --keyFile jstests/libs/key1 --configsvr --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger
2014-11-26T14:36:14.888-0500 W NETWORK Failed to connect to 127.0.0.1:29000, reason: errno:111 Connection refused
m29000| 2014-11-26T14:36:14.915-0500 I CONTROL [initandlisten] MongoDB starting : pid=10430 port=29000 dbpath=/data/db/test-config0 master=1 64-bit host=ip-10-33-141-202
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten]
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten]
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten]
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten] allocator: tcmalloc
m29000| 2014-11-26T14:36:14.916-0500 I CONTROL [initandlisten] options: { net: { port: 29000 }, nopreallocj: true, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db/test-config0", engine: "wiredTiger" } }
m29000| 2014-11-26T14:36:14.916-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
m29000| 2014-11-26T14:36:14.961-0500 I REPL [initandlisten] ******
m29000| 2014-11-26T14:36:14.961-0500 I REPL [initandlisten] creating replication oplog of size: 5MB...
m29000| 2014-11-26T14:36:15.036-0500 I REPL [initandlisten] ******
m29000| 2014-11-26T14:36:15.045-0500 I NETWORK [initandlisten] waiting for connections on port 29000
m29000| 2014-11-26T14:36:15.089-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59986 #1 (1 connection now open)
"ip-10-33-141-202:29000"
m29000| 2014-11-26T14:36:15.090-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41567 #2 (2 connections now open)
ShardingTest test : { "config" : "ip-10-33-141-202:29000", "shards" : [ connection to test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101, connection to test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201, connection to test-rs2/ip-10-33-141-202:31300,ip-10-33-141-202:31301 ] }
2014-11-26T14:36:15.091-0500 I - shell: started program (sh10448): /data/mongo/mongos --port 30999 --configdb ip-10-33-141-202:29000 --keyFile jstests/libs/key1 --chunkSize 50 --setParameter enableTestCommands=1
2014-11-26T14:36:15.092-0500 W NETWORK Failed to connect to 127.0.0.1:30999, reason: errno:111 Connection refused
m30999| 2014-11-26T14:36:15.100-0500 W SHARDING running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| 2014-11-26T14:36:15.117-0500 I SHARDING [mongosMain] MongoS version 2.8.0-rc2-pre- starting: pid=10448 port=30999 64-bit host=ip-10-33-141-202 (--help for usage)
m30999| 2014-11-26T14:36:15.117-0500 I CONTROL [mongosMain] db version v2.8.0-rc2-pre-
m30999| 2014-11-26T14:36:15.117-0500 I CONTROL [mongosMain] git version: 45790039049d7375beafe122622363d35ce990c2
m30999| 2014-11-26T14:36:15.117-0500 I CONTROL [mongosMain] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m30999| 2014-11-26T14:36:15.117-0500 I CONTROL [mongosMain] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m30999| 2014-11-26T14:36:15.117-0500 I CONTROL [mongosMain] allocator: tcmalloc
m30999| 2014-11-26T14:36:15.117-0500 I CONTROL [mongosMain] options: { net: { port: 30999 }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, sharding: { chunkSize: 50, configDB: "ip-10-33-141-202:29000" } }
m29000| 2014-11-26T14:36:15.118-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41569 #3 (3 connections now open)
m29000| 2014-11-26T14:36:15.133-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
m29000| 2014-11-26T14:36:15.133-0500 I STORAGE [conn3] CMD fsync: sync:1 lock:0
m29000| 2014-11-26T14:36:15.133-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41570 #4 (4 connections now open)
m29000| 2014-11-26T14:36:15.148-0500 I ACCESS [conn4] Successfully authenticated as principal __system on local
m29000| 2014-11-26T14:36:15.194-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41571 #5 (5 connections now open)
m29000| 2014-11-26T14:36:15.208-0500 I ACCESS [conn5] Successfully authenticated as principal __system on local
m30999| 2014-11-26T14:36:15.210-0500 I SHARDING [LockPinger] creating distributed lock ping thread for ip-10-33-141-202:29000 and process ip-10-33-141-202:30999:1417030575:1804289383 (sleeping for 30000ms)
m30999| 2014-11-26T14:36:15.226-0500 I SHARDING [LockPinger] cluster ip-10-33-141-202:29000 pinged successfully at Wed Nov 26 14:36:15 2014 by distributed lock pinger 'ip-10-33-141-202:29000/ip-10-33-141-202:30999:1417030575:1804289383', sleeping for 30000ms
m30999| 2014-11-26T14:36:15.227-0500 I SHARDING [mongosMain] distributed lock 'configUpgrade/ip-10-33-141-202:30999:1417030575:1804289383' acquired, ts : 54762baf9255d3d73a3c7ad5
m30999| 2014-11-26T14:36:15.227-0500 I SHARDING [mongosMain] starting upgrade of config server from v0 to v6
m30999| 2014-11-26T14:36:15.227-0500 I SHARDING [mongosMain] starting next upgrade step from v0 to v6
m30999| 2014-11-26T14:36:15.227-0500 I SHARDING [mongosMain] about to log new metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:15-54762baf9255d3d73a3c7ad6", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030575227), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 6 } }
m29000| 2014-11-26T14:36:15.237-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41572 #6 (6 connections now open)
m29000| 2014-11-26T14:36:15.251-0500 I ACCESS [conn6] Successfully authenticated as principal __system on local
m29000| 2014-11-26T14:36:15.252-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
2014-11-26T14:36:15.292-0500 W NETWORK Failed to connect to 127.0.0.1:30999, reason: errno:111 Connection refused
m29000| 2014-11-26T14:36:15.363-0500 I QUERY [conn6] command admin.$cmd command: fsync { fsync: true } ntoreturn:1 keyUpdates:0 reslen:51 111ms
m30999| 2014-11-26T14:36:15.363-0500 I SHARDING [mongosMain] writing initial config version at v6
m29000| 2014-11-26T14:36:15.363-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m31301| 2014-11-26T14:36:15.417-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:158 0ms
m31300| 2014-11-26T14:36:15.417-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31301 is now in state SECONDARY
m30999| 2014-11-26T14:36:15.447-0500 I SHARDING [mongosMain] about to log new metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:15-54762baf9255d3d73a3c7ad8", server: "ip-10-33-141-202", clientAddr: "N/A",
time: new Date(1417030575447), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 6 } } m29000| 2014-11-26T14:36:15.448-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m31300| 2014-11-26T14:36:15.454-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31301| 2014-11-26T14:36:15.455-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762ba9:1 0 m31301| 2014-11-26T14:36:15.455-0500 I REPL [ReplicationExecutor] could not find member to sync from 2014-11-26T14:36:15.493-0500 W NETWORK Failed to connect to 127.0.0.1:30999, reason: errno:111 Connection refused m30999| 2014-11-26T14:36:15.496-0500 I SHARDING [mongosMain] upgrade of config server to v6 successful m30999| 2014-11-26T14:36:15.497-0500 I SHARDING [mongosMain] distributed lock 'configUpgrade/ip-10-33-141-202:30999:1417030575:1804289383' unlocked. m29000| 2014-11-26T14:36:15.497-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:36:15.567-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:36:15.635-0500 I INDEX [conn6] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } m29000| 2014-11-26T14:36:15.635-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:36:15.641-0500 I INDEX [conn6] build index done. scanned 0 total records. 
0 secs m29000| 2014-11-26T14:36:15.642-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 2014-11-26T14:36:15.693-0500 W NETWORK Failed to connect to 127.0.0.1:30999, reason: errno:111 Connection refused m29000| 2014-11-26T14:36:15.696-0500 I INDEX [conn6] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } m29000| 2014-11-26T14:36:15.696-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:36:15.701-0500 I INDEX [conn6] build index done. scanned 0 total records. 0 secs m29000| 2014-11-26T14:36:15.701-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:36:15.748-0500 I INDEX [conn6] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } m29000| 2014-11-26T14:36:15.749-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:36:15.753-0500 I INDEX [conn6] build index done. scanned 0 total records. 0 secs m29000| 2014-11-26T14:36:15.753-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m31201| 2014-11-26T14:36:15.794-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m29000| 2014-11-26T14:36:15.804-0500 I INDEX [conn6] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } m29000| 2014-11-26T14:36:15.804-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:36:15.810-0500 I INDEX [conn6] build index done. scanned 0 total records. 
0 secs m29000| 2014-11-26T14:36:15.811-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m31200| 2014-11-26T14:36:15.817-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m29000| 2014-11-26T14:36:15.856-0500 I INDEX [conn6] build index on: config.locks properties: { v: 1, unique: true, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } m29000| 2014-11-26T14:36:15.856-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:36:15.862-0500 I INDEX [conn6] build index done. scanned 1 total records. 0 secs m29000| 2014-11-26T14:36:15.863-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 2014-11-26T14:36:15.894-0500 W NETWORK Failed to connect to 127.0.0.1:30999, reason: errno:111 Connection refused m29000| 2014-11-26T14:36:15.906-0500 I INDEX [conn6] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } m29000| 2014-11-26T14:36:15.906-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:36:15.911-0500 I INDEX [conn6] build index done. scanned 1 total records. 0 secs m29000| 2014-11-26T14:36:15.911-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:36:15.955-0500 I INDEX [conn6] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" } m29000| 2014-11-26T14:36:15.955-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:36:15.961-0500 I INDEX [conn6] build index done. scanned 1 total records. 
0 secs m30999| 2014-11-26T14:36:15.962-0500 I SHARDING [Balancer] about to contact config servers and shards m30999| 2014-11-26T14:36:15.962-0500 I NETWORK [mongosMain] waiting for connections on port 30999 m30999| 2014-11-26T14:36:15.962-0500 I SHARDING [Balancer] config servers and shards contacted successfully m30999| 2014-11-26T14:36:15.962-0500 I SHARDING [Balancer] balancer id: ip-10-33-141-202:30999 started at Nov 26 14:36:15 m30999| 2014-11-26T14:36:15.971-0500 I SHARDING [Balancer] distributed lock 'balancer/ip-10-33-141-202:30999:1417030575:1804289383' acquired, ts : 54762baf9255d3d73a3c7ada m29000| 2014-11-26T14:36:15.981-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m31101| 2014-11-26T14:36:15.985-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31100| 2014-11-26T14:36:16.022-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m30999| 2014-11-26T14:36:16.073-0500 I SHARDING [Balancer] distributed lock 'balancer/ip-10-33-141-202:30999:1417030575:1804289383' unlocked. 
m30999| 2014-11-26T14:36:16.094-0500 I NETWORK [mongosMain] connection accepted from 127.0.0.1:39818 #1 (1 connection now open) m30999| 2014-11-26T14:36:16.095-0500 I SHARDING [conn1] couldn't find database [admin] in config db m29000| 2014-11-26T14:36:16.096-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m30999| 2014-11-26T14:36:16.187-0500 I SHARDING [conn1] put [admin] on: config:ip-10-33-141-202:29000 m30999| 2014-11-26T14:36:16.187-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access m30999| 2014-11-26T14:36:16.188-0500 I ACCESS [conn1] authenticate db: admin { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" } m29000| 2014-11-26T14:36:16.190-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 Waiting for active hosts... Waiting for the balancer lock... Waiting again for active hosts after balancer is off... ShardingTest undefined going to add shard : test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101 m30999| 2014-11-26T14:36:16.255-0500 I NETWORK [conn1] starting new replica set monitor for replica set test-rs0 with seeds ip-10-33-141-202:31100,ip-10-33-141-202:31101 m30999| 2014-11-26T14:36:16.256-0500 I NETWORK [ReplicaSetMonitorWatcher] starting m31100| 2014-11-26T14:36:16.256-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38202 #6 (4 connections now open) m31100| 2014-11-26T14:36:16.258-0500 I QUERY [conn6] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D314C4A6858733477782F417768576753573756705943363771514F4842726F73) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31100| 2014-11-26T14:36:16.271-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D314C4A6858733477782F417768576753573756705943363771514F4842726F7368714E6C4D7568477176446F5A573764464351596F4B7066474439776B...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms 
m31100| 2014-11-26T14:36:16.271-0500 I ACCESS [conn6] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:36:16.271-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31100| 2014-11-26T14:36:16.271-0500 I QUERY [conn6] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31100| 2014-11-26T14:36:16.271-0500 I QUERY [conn6] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31100| 2014-11-26T14:36:16.272-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38203 #7 (5 connections now open) m31100| 2014-11-26T14:36:16.273-0500 I QUERY [conn7] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4E484A2F6F5950757645734C755A637072725A61585A4C393568353151455265) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31100| 2014-11-26T14:36:16.286-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4E484A2F6F5950757645734C755A637072725A61585A4C39356835315145526541767A443665495A6B50594A2F643933775941544C556B6970566E2B57...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:36:16.287-0500 I ACCESS [conn7] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:36:16.287-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31100| 2014-11-26T14:36:16.287-0500 I QUERY [conn7] command admin.$cmd command: getLastError { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:110 0ms m31100| 2014-11-26T14:36:16.287-0500 I QUERY [conn7] command admin.$cmd command: getLastError { isdbgrid: 1 } ntoreturn:1 keyUpdates:0 reslen:113 0ms m31100| 
2014-11-26T14:36:16.287-0500 I QUERY [conn7] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31100| 2014-11-26T14:36:16.287-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1 m31100| 2014-11-26T14:36:16.287-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1 m31100| 2014-11-26T14:36:16.288-0500 D STORAGE [conn7] looking up metadata for: local.oplog.rs @ 0:4 m31100| 2014-11-26T14:36:16.288-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2 m31100| 2014-11-26T14:36:16.288-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2 m31100| 2014-11-26T14:36:16.288-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3 m31100| 2014-11-26T14:36:16.288-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3 m31100| 2014-11-26T14:36:16.288-0500 I QUERY [conn7] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 0ms m30999| 2014-11-26T14:36:16.289-0500 I SHARDING [conn1] going to add shard: { _id: "test-rs0", host: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101" } m29000| 2014-11-26T14:36:16.289-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m30999| 2014-11-26T14:36:16.347-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:16-54762bb09255d3d73a3c7adc", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030576347), what: "addShard", ns: "", details: { name: "test-rs0", host: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101" } } m29000| 2014-11-26T14:36:16.347-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 { "shardAdded" : "test-rs0", "ok" : 1 } ShardingTest undefined going to add shard : test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201 m30999| 2014-11-26T14:36:16.412-0500 I NETWORK [conn1] starting new replica set monitor for replica set test-rs1 with seeds ip-10-33-141-202:31200,ip-10-33-141-202:31201 
m31200| 2014-11-26T14:36:16.413-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40661 #6 (4 connections now open) m31200| 2014-11-26T14:36:16.414-0500 I QUERY [conn6] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D78707271753959453167626A69585A432F3369526D756C52566E76546B526A30) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31200| 2014-11-26T14:36:16.427-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D78707271753959453167626A69585A432F3369526D756C52566E76546B526A303349706277453171597943586375325945495143514857516277763979...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31200| 2014-11-26T14:36:16.427-0500 I ACCESS [conn6] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:36:16.428-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31200| 2014-11-26T14:36:16.428-0500 I QUERY [conn6] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31200| 2014-11-26T14:36:16.428-0500 I QUERY [conn6] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31200| 2014-11-26T14:36:16.428-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40662 #7 (5 connections now open) m31200| 2014-11-26T14:36:16.430-0500 I QUERY [conn7] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D7068454575354F7876644F513953466B746D454E71463447784A337272694141) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31200| 2014-11-26T14:36:16.443-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 
633D626977732C723D7068454575354F7876644F513953466B746D454E71463447784A337272694141496E49655942415267446A67786234625941695173665176557139792B...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31200| 2014-11-26T14:36:16.443-0500 I ACCESS [conn7] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:36:16.443-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31200| 2014-11-26T14:36:16.443-0500 I QUERY [conn7] command admin.$cmd command: getLastError { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:110 0ms m31200| 2014-11-26T14:36:16.443-0500 I QUERY [conn7] command admin.$cmd command: getLastError { isdbgrid: 1 } ntoreturn:1 keyUpdates:0 reslen:113 0ms m31200| 2014-11-26T14:36:16.444-0500 I QUERY [conn7] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31200| 2014-11-26T14:36:16.444-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:16.444-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:36:16.444-0500 D STORAGE [conn7] looking up metadata for: local.oplog.rs @ 0:4 m31200| 2014-11-26T14:36:16.444-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2 m31200| 2014-11-26T14:36:16.444-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2 m31200| 2014-11-26T14:36:16.444-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3 m31200| 2014-11-26T14:36:16.444-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3 m31200| 2014-11-26T14:36:16.445-0500 I QUERY [conn7] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 0ms m30999| 2014-11-26T14:36:16.445-0500 I SHARDING [conn1] going to add shard: { _id: "test-rs1", host: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201" } m29000| 
2014-11-26T14:36:16.445-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m30999| 2014-11-26T14:36:16.498-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:16-54762bb09255d3d73a3c7add", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030576498), what: "addShard", ns: "", details: { name: "test-rs1", host: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201" } } m29000| 2014-11-26T14:36:16.498-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 { "shardAdded" : "test-rs1", "ok" : 1 } ShardingTest undefined going to add shard : test-rs2/ip-10-33-141-202:31300,ip-10-33-141-202:31301 m30999| 2014-11-26T14:36:16.560-0500 I NETWORK [conn1] starting new replica set monitor for replica set test-rs2 with seeds ip-10-33-141-202:31300,ip-10-33-141-202:31301 m31300| 2014-11-26T14:36:16.560-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60744 #6 (4 connections now open) m31300| 2014-11-26T14:36:16.562-0500 I QUERY [conn6] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D48514B716C6F5277694538376F35445A6C616B38584D38534E34694E63315254) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31300| 2014-11-26T14:36:16.575-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D48514B716C6F5277694538376F35445A6C616B38584D38534E34694E6331525470594234416C323468465169337657384237454F446C746D3331533243...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31300| 2014-11-26T14:36:16.575-0500 I ACCESS [conn6] Successfully authenticated as principal __system on local m31300| 2014-11-26T14:36:16.575-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31300| 2014-11-26T14:36:16.575-0500 I QUERY [conn6] command admin.$cmd command: isMaster { isMaster: 1 } 
ntoreturn:1 keyUpdates:0 reslen:401 0ms m31300| 2014-11-26T14:36:16.576-0500 I QUERY [conn6] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31300| 2014-11-26T14:36:16.576-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60745 #7 (5 connections now open) m31300| 2014-11-26T14:36:16.578-0500 I QUERY [conn7] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D7969314B4A687A694E69655170773536382F796D716C4977366F315266536C59) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31300| 2014-11-26T14:36:16.590-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D7969314B4A687A694E69655170773536382F796D716C4977366F315266536C59556D76524970722B73683843524F6649353946754A6F455A4E2F4A6C50...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31300| 2014-11-26T14:36:16.591-0500 I ACCESS [conn7] Successfully authenticated as principal __system on local m31300| 2014-11-26T14:36:16.591-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31300| 2014-11-26T14:36:16.591-0500 I QUERY [conn7] command admin.$cmd command: getLastError { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:110 0ms m31300| 2014-11-26T14:36:16.591-0500 I QUERY [conn7] command admin.$cmd command: getLastError { isdbgrid: 1 } ntoreturn:1 keyUpdates:0 reslen:113 0ms m31300| 2014-11-26T14:36:16.591-0500 I QUERY [conn7] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31300| 2014-11-26T14:36:16.591-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:16.591-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1 m31300| 2014-11-26T14:36:16.592-0500 D STORAGE [conn7] looking up metadata for: local.oplog.rs @ 
0:4 m31300| 2014-11-26T14:36:16.592-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:16.592-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2 m31300| 2014-11-26T14:36:16.592-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:36:16.592-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:36:16.592-0500 I QUERY [conn7] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 0ms m30999| 2014-11-26T14:36:16.592-0500 I SHARDING [conn1] going to add shard: { _id: "test-rs2", host: "test-rs2/ip-10-33-141-202:31300,ip-10-33-141-202:31301" } m29000| 2014-11-26T14:36:16.592-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m30999| 2014-11-26T14:36:16.638-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:16-54762bb09255d3d73a3c7ade", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030576638), what: "addShard", ns: "", details: { name: "test-rs2", host: "test-rs2/ip-10-33-141-202:31300,ip-10-33-141-202:31301" } } m29000| 2014-11-26T14:36:16.638-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 { "shardAdded" : "test-rs2", "ok" : 1 } ---- Setting up initial admin user... 
---- m30999| 2014-11-26T14:36:16.714-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030575:1804289383' acquired, ts : 54762bb09255d3d73a3c7adf m29000| 2014-11-26T14:36:16.714-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:36:16.791-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:36:16.866-0500 I INDEX [conn6] build index on: admin.system.users properties: { v: 1, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" } m29000| 2014-11-26T14:36:16.866-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:36:16.872-0500 I INDEX [conn6] build index done. scanned 0 total records. 0 secs m30999| 2014-11-26T14:36:16.873-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030575:1804289383' unlocked. Successfully added user: { "user" : "adminUser", "roles" : [ "root" ] } m30999| 2014-11-26T14:36:16.889-0500 I ACCESS [conn1] Successfully authenticated as principal adminUser on admin m29000| 2014-11-26T14:36:16.890-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 Waiting for active hosts... Waiting for the balancer lock... Waiting again for active hosts after balancer is off... 
m30999| 2014-11-26T14:36:16.973-0500 I SHARDING [conn1] couldn't find database [fooUnsharded] in config db m31100| 2014-11-26T14:36:16.974-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms m31100| 2014-11-26T14:36:16.975-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms m31200| 2014-11-26T14:36:16.975-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms m31300| 2014-11-26T14:36:16.976-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms m29000| 2014-11-26T14:36:16.976-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m30999| 2014-11-26T14:36:17.007-0500 I SHARDING [conn1] put [fooUnsharded] on: test-rs0:test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101 m31100| 2014-11-26T14:36:17.008-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38208 #8 (6 connections now open) m31100| 2014-11-26T14:36:17.010-0500 I QUERY [conn8] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D48552B385750422F673637463343376F616A6D30464656774A46354A77396136) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31100| 2014-11-26T14:36:17.023-0500 I QUERY [conn8] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D48552B385750422F673637463343376F616A6D30464656774A46354A773961365164697932467065457A776F5A75516C306A6369426663677950555841...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:36:17.023-0500 I ACCESS [conn8] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:36:17.023-0500 I QUERY [conn8] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } 
ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:36:17.023-0500 I QUERY [conn8] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31100| 2014-11-26T14:36:17.023-0500 D STORAGE [conn8] stored meta data for fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.023-0500 D STORAGE [conn8] WiredTigerKVEngine::createRecordStore uri: table:collection-7--1911027222389114415 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31100| 2014-11-26T14:36:17.025-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.026-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.026-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.026-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.026-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.026-0500 D STORAGE [conn8] create uri: table:index-8--1911027222389114415 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "fooUnsharded.barUnsharded" }
m31100| 2014-11-26T14:36:17.030-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.030-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.030-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.031-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.031-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.031-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.031-0500 D STORAGE [conn8] fooUnsharded.barUnsharded: clearing plan cache - collection info cache reset
m31100| 2014-11-26T14:36:17.031-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:36:17.031-0500 I WRITE [conn8] insert fooUnsharded.barUnsharded query: { _id: ObjectId('54762bb0c9726aeedd20c95d'), some: "doc" } ninserted:1 keyUpdates:0 7ms
m31100| 2014-11-26T14:36:17.031-0500 I QUERY [conn8] command fooUnsharded.$cmd command: insert { insert: "barUnsharded", documents: [ { _id: ObjectId('54762bb0c9726aeedd20c95d'), some: "doc" } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 7ms
m31100| 2014-11-26T14:36:17.032-0500 I WRITE [conn8] remove fooUnsharded.barUnsharded ndeleted:1 keyUpdates:0 0ms
m31100| 2014-11-26T14:36:17.032-0500 I QUERY [conn8] command fooUnsharded.$cmd command: delete { delete: "barUnsharded", deletes: [ { q: {}, limit: 0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
{ "ok" : 0, "errmsg" : "it is already the primary" }
m30999| 2014-11-26T14:36:17.034-0500 I SHARDING [conn1] couldn't find database [fooSharded] in config db
m31100| 2014-11-26T14:36:17.034-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m31100| 2014-11-26T14:36:17.035-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m31200| 2014-11-26T14:36:17.035-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m31300| 2014-11-26T14:36:17.036-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m29000| 2014-11-26T14:36:17.036-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:36:17.120-0500 I SHARDING [conn1] put [fooSharded] on: test-rs0:test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101
m30999| 2014-11-26T14:36:17.120-0500 I COMMAND [conn1] enabling sharding on: fooSharded
m29000| 2014-11-26T14:36:17.120-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:36:17.193-0500 I COMMAND [conn1] Moving fooSharded primary from: test-rs0:test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101 to: test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201
m30999| 2014-11-26T14:36:17.194-0500 I SHARDING [conn1] distributed lock 'fooSharded-movePrimary/ip-10-33-141-202:30999:1417030575:1804289383' acquired, ts : 54762bb19255d3d73a3c7ae0
m30999| 2014-11-26T14:36:17.194-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:17-54762bb19255d3d73a3c7ae1", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030577194), what: "movePrimary.start", ns: "fooSharded", details: { database: "fooSharded", from: "test-rs0:test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", to: "test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", shardedCollections: [] } }
m29000| 2014-11-26T14:36:17.194-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m31200| 2014-11-26T14:36:17.260-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31100| 2014-11-26T14:36:17.260-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38209 #9 (7 connections now open)
m31200| 2014-11-26T14:36:17.261-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31100 (10.33.141.202)
m31100| 2014-11-26T14:36:17.262-0500 I QUERY [conn9] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D634A4B45396565684D593874546F7232504C706D6C2F4F35584266547A676E4E) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:36:17.275-0500 I QUERY [conn9] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D634A4B45396565684D593874546F7232504C706D6C2F4F35584266547A676E4E5A796B6F497A6955774D503076385845386E514F73346E413162495048...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:36:17.275-0500 I ACCESS [conn9] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:36:17.275-0500 I QUERY [conn9] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:36:17.275-0500 I QUERY [conn9] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31100| 2014-11-26T14:36:17.276-0500 I NETWORK [conn9] end connection 10.33.141.202:38209 (6 connections now open)
m31200| 2014-11-26T14:36:17.276-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31101| 2014-11-26T14:36:17.276-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:54146 #4 (3 connections now open)
m31200| 2014-11-26T14:36:17.276-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31101 (10.33.141.202)
m31101| 2014-11-26T14:36:17.278-0500 I QUERY [conn4] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D3537367941475769466C4337672B50615356504C423444666756315956767032) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31101| 2014-11-26T14:36:17.291-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D3537367941475769466C4337672B50615356504C4234446667563159567670327964682B384B712F6E615A6C385173376B612F4C2B66464233306B3551...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31101| 2014-11-26T14:36:17.291-0500 I ACCESS [conn4] Successfully authenticated as principal __system on local
m31101| 2014-11-26T14:36:17.291-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31101| 2014-11-26T14:36:17.291-0500 I QUERY [conn4] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31200| 2014-11-26T14:36:17.291-0500 I NETWORK [conn7] starting new replica set monitor for replica set test-rs0 with seeds ip-10-33-141-202:31100,ip-10-33-141-202:31101
m31200| 2014-11-26T14:36:17.291-0500 D COMMAND [ReplicaSetMonitorWatcher] BackgroundJob starting: ReplicaSetMonitorWatcher
m31200| 2014-11-26T14:36:17.291-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:31101
m31200| 2014-11-26T14:36:17.291-0500 I NETWORK [ReplicaSetMonitorWatcher] starting
m31101| 2014-11-26T14:36:17.291-0500 I NETWORK [conn4] end connection 10.33.141.202:54146 (2 connections now open)
m31200| 2014-11-26T14:36:17.291-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31101| 2014-11-26T14:36:17.292-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:54147 #5 (3 connections now open)
m31200| 2014-11-26T14:36:17.292-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31101 (10.33.141.202)
m31200| 2014-11-26T14:36:17.292-0500 D NETWORK [conn7] connected connection!
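The `payload: BinData(0, 6E2C2C...)` fields in the `saslStart` lines above are hex-encoded SCRAM-SHA-1 messages. As a sketch (plain JavaScript with no MongoDB API involved; `decodeHexPayload` is a name of my own), the first payload sent to ip-10-33-141-202:31100 decodes to a standard SCRAM client-first message for the `__system` user:

```javascript
// Decode a hex BinData payload (as printed in the saslStart log lines) to text.
function decodeHexPayload(hex) {
  let out = "";
  for (let i = 0; i < hex.length; i += 2) {
    out += String.fromCharCode(parseInt(hex.slice(i, i + 2), 16));
  }
  return out;
}

// First saslStart payload from the log above, copied verbatim:
const clientFirst = decodeHexPayload(
  "6E2C2C6E3D5F5F73797374656D2C723D634A4B45396565684D593874546F7232504C706D6C2F4F35584266547A676E4E"
);
// Decodes to "n,,n=__system,r=" followed by the client nonce.
console.log(clientFirst);
```

This is why every internal connection in the log authenticates as "principal __system on local": the keyFile-backed cluster credential is the `__system` user, exchanged via the saslStart/saslContinue round trips shown.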
m31101| 2014-11-26T14:36:17.292-0500 I QUERY [conn5] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:377 0ms
m31200| 2014-11-26T14:36:17.292-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:31100
m31200| 2014-11-26T14:36:17.292-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31100| 2014-11-26T14:36:17.292-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38212 #10 (7 connections now open)
m31200| 2014-11-26T14:36:17.292-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31100 (10.33.141.202)
m31200| 2014-11-26T14:36:17.293-0500 D NETWORK [conn7] connected connection!
m31100| 2014-11-26T14:36:17.293-0500 I QUERY [conn10] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31200| 2014-11-26T14:36:17.293-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:31100
m31200| 2014-11-26T14:36:17.293-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31100| 2014-11-26T14:36:17.293-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38213 #11 (8 connections now open)
m31200| 2014-11-26T14:36:17.293-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31100 (10.33.141.202)
m31200| 2014-11-26T14:36:17.293-0500 D NETWORK [conn7] connected connection!
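Every line in this output follows one shape: an `mNNNNN|` prefix naming the port of the mongod/mongos that emitted it, an ISO timestamp, a severity letter (`I` info and `D` debug appear here), a component (NETWORK, QUERY, STORAGE, SHARDING, ...), a `[context]` such as the connection name, and the message. A minimal splitter for these records, as a sketch (plain JavaScript; the field names are my own labels for what each line visibly contains):

```javascript
// Split one multiplexed test-runner log line into its visible fields.
// Lines without the "mNNNNN|" prefix (e.g. shell output like `{ "ok" : 1 }`)
// yield null.
const LINE_RE = /^m(\d+)\|\s+(\S+)\s+([IDWEF])\s+(\w+)\s+\[([^\]]+)\]\s+(.*)$/;

function parseLogLine(line) {
  const m = LINE_RE.exec(line);
  if (!m) return null;
  const [, port, ts, severity, component, context, msg] = m;
  return { port: Number(port), ts, severity, component, context, msg };
}

const rec = parseLogLine(
  "m31100| 2014-11-26T14:36:17.023-0500 D STORAGE [conn8] stored meta data for fooUnsharded.barUnsharded @ 0:5"
);
console.log(rec.component); // "STORAGE"
```

Filtering on `port` is the quickest way to follow a single process (e.g. the mongos on 30999) through interleaved output like this.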
m31100| 2014-11-26T14:36:17.295-0500 I QUERY [conn11] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D626C456D5A5634724C64457750796A6539646D357948703464336C447577322B) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:36:17.308-0500 I QUERY [conn11] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D626C456D5A5634724C64457750796A6539646D357948703464336C447577322B3077316C696B436F2F45626D7759434E3348744759592B61553541716B...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:36:17.308-0500 I ACCESS [conn11] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:36:17.308-0500 I QUERY [conn11] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:36:17.308-0500 I QUERY [conn11] command fooSharded.$cmd command: listCollections { listCollections: 1, filter: {} } ntoreturn:1 keyUpdates:0 reslen:55 0ms
m31200| 2014-11-26T14:36:17.308-0500 I QUERY [conn7] command fooSharded.$cmd command: clone { clone: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", collsToIgnore: [] } ntoreturn:1 keyUpdates:0 reslen:55 48ms
m29000| 2014-11-26T14:36:17.308-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m31100| 2014-11-26T14:36:17.309-0500 I NETWORK [conn11] end connection 10.33.141.202:38213 (7 connections now open)
m30999| 2014-11-26T14:36:17.354-0500 I COMMAND [conn1] movePrimary dropping database on test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101, no sharded collections in fooSharded
m31100| 2014-11-26T14:36:17.354-0500 I QUERY [conn7] command fooSharded.$cmd command: dropDatabase { dropDatabase: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m30999| 2014-11-26T14:36:17.354-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:17-54762bb19255d3d73a3c7ae2", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030577354), what: "movePrimary", ns: "fooSharded", details: { database: "fooSharded", from: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", to: "test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", shardedCollections: [] } }
m29000| 2014-11-26T14:36:17.355-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:36:17.399-0500 I SHARDING [conn1] distributed lock 'fooSharded-movePrimary/ip-10-33-141-202:30999:1417030575:1804289383' unlocked.
{ "primary " : "test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", "ok" : 1 }
m31200| 2014-11-26T14:36:17.400-0500 I QUERY [conn7] command fooSharded.$cmd command: listCollections { listCollections: 1, filter: { name: "barSharded" } } ntoreturn:1 keyUpdates:0 reslen:55 0ms
m31200| 2014-11-26T14:36:17.400-0500 I QUERY [conn7] command fooSharded.$cmd command: listIndexes { listIndexes: "barSharded" } ntoreturn:1 keyUpdates:0 reslen:71 0ms
m31200| 2014-11-26T14:36:17.401-0500 I QUERY [conn7] command fooSharded.$cmd command: count { count: "barSharded", query: {} } planSummary: EOF ntoreturn:1 keyUpdates:0 reslen:44 0ms
m31200| 2014-11-26T14:36:17.401-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40671 #8 (6 connections now open)
m31200| 2014-11-26T14:36:17.403-0500 I QUERY [conn8] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D416A4B336C684D6755434E3732555243694A59526A3231622F673836436A4D30) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31200| 2014-11-26T14:36:17.416-0500 I QUERY [conn8] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D416A4B336C684D6755434E3732555243694A59526A3231622F673836436A4D305A42486153765461494F4B6B73534D58774E3936725050457657303452...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31200| 2014-11-26T14:36:17.416-0500 I ACCESS [conn8] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:36:17.416-0500 I QUERY [conn8] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31200| 2014-11-26T14:36:17.416-0500 I QUERY [conn8] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31200| 2014-11-26T14:36:17.416-0500 D STORAGE [conn8] stored meta data for fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.417-0500 D STORAGE [conn8] WiredTigerKVEngine::createRecordStore uri: table:collection-7-5148480814435254834 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31301| 2014-11-26T14:36:17.417-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31200| 2014-11-26T14:36:17.420-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.420-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.420-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.420-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.421-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.421-0500 D STORAGE [conn8] create uri: table:index-8-5148480814435254834 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "fooSharded.barSharded" }
m31200| 2014-11-26T14:36:17.427-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.427-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.427-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.427-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.427-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.427-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.427-0500 D STORAGE [conn8] fooSharded.barSharded: clearing plan cache - collection info cache reset
m31200| 2014-11-26T14:36:17.427-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.427-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5
m31200| 2014-11-26T14:36:17.427-0500 I WRITE [conn8] insert fooSharded.system.indexes query: { ns: "fooSharded.barSharded", key: { _id: 1.0 }, name: "_id_1" } ninserted:0 keyUpdates:0 10ms
m31200| 2014-11-26T14:36:17.427-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "system.indexes", documents: [ { ns: "fooSharded.barSharded", key: { _id: 1.0 }, name: "_id_1" } ], ordered: true, metadata: { shardName: "test-rs1", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 11ms
m31200| 2014-11-26T14:36:17.428-0500 I QUERY [conn7] command fooSharded.$cmd command: count { count: "barSharded", query: {} } planSummary: COUNT ntoreturn:1 keyUpdates:0 reslen:44 0ms
m30999| 2014-11-26T14:36:17.428-0500 I COMMAND [conn1] CMD: shardcollection: { shardCollection: "fooSharded.barSharded", key: { _id: 1.0 } }
m30999| 2014-11-26T14:36:17.428-0500 I SHARDING [conn1] enable sharding on: fooSharded.barSharded with shard key: { _id: 1.0 }
m30999| 2014-11-26T14:36:17.428-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:17-54762bb19255d3d73a3c7ae3", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030577428), what: "shardCollection.start", ns: "fooSharded.barSharded", details: { shardKey: { _id: 1.0 }, collection: "fooSharded.barSharded", primary: "test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", initShards: [], numChunks: 1 } }
m29000| 2014-11-26T14:36:17.428-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m31300| 2014-11-26T14:36:17.454-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31200| 2014-11-26T14:36:17.493-0500 I QUERY [conn7] command fooSharded.$cmd command: count { count: "barSharded", query: {} } planSummary: COUNT ntoreturn:1 keyUpdates:0 reslen:44 0ms
m30999| 2014-11-26T14:36:17.493-0500 I SHARDING [conn1] going to create 1 chunk(s) for: fooSharded.barSharded using new epoch 54762bb19255d3d73a3c7ae4
m29000| 2014-11-26T14:36:17.494-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:36:17.547-0500 I SHARDING [conn1] ChunkManager: time to load chunks for fooSharded.barSharded: 0ms sequenceNumber: 2 version: 1|0||54762bb19255d3d73a3c7ae4 based on: (empty)
m29000| 2014-11-26T14:36:17.547-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m29000| 2014-11-26T14:36:17.628-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m31200| 2014-11-26T14:36:17.670-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40672 #9 (7 connections now open)
m31200| 2014-11-26T14:36:17.672-0500 I QUERY [conn9] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D68514653546C53524C6358347253664F65595458427943713255714272716F4F) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31200| 2014-11-26T14:36:17.685-0500 I QUERY [conn9] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D68514653546C53524C6358347253664F65595458427943713255714272716F4F644C49337658553354684E38587737676D384944787235337155455457...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31200| 2014-11-26T14:36:17.685-0500 I ACCESS [conn9] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:36:17.685-0500 I QUERY [conn9] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31200| 2014-11-26T14:36:17.685-0500 D SHARDING [conn9] entering shard mode for connection
m31200| 2014-11-26T14:36:17.685-0500 I QUERY [conn9] command admin.$cmd command: setShardVersion { setShardVersion: "fooSharded.barSharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs1", shardHost: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", version: Timestamp 1000|0, versionEpoch: ObjectId('54762bb19255d3d73a3c7ae4') } ntoreturn:1 keyUpdates:0 reslen:92 0ms
m31200| 2014-11-26T14:36:17.686-0500 I SHARDING [conn9] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
m31200| 2014-11-26T14:36:17.686-0500 D SHARDING [conn9] config string : ip-10-33-141-202:29000
m31200| 2014-11-26T14:36:17.686-0500 I SHARDING [conn9] remote client 10.33.141.202:40672 initialized this host (test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201) as shard test-rs1
m31200| 2014-11-26T14:36:17.686-0500 D SHARDING [conn9] metadata change requested for fooSharded.barSharded, from shard version 0|0||000000000000000000000000 to 1|0||54762bb19255d3d73a3c7ae4, need to verify with config server
m31200| 2014-11-26T14:36:17.686-0500 I SHARDING [conn9] remotely refreshing metadata for fooSharded.barSharded with requested shard version 1|0||54762bb19255d3d73a3c7ae4, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
m31200| 2014-11-26T14:36:17.686-0500 D NETWORK [conn9] creating new connection to:ip-10-33-141-202:29000
m31200| 2014-11-26T14:36:17.687-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m29000| 2014-11-26T14:36:17.687-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41592 #7 (7 connections now open)
m31200| 2014-11-26T14:36:17.687-0500 D NETWORK [conn9] connected to server ip-10-33-141-202:29000 (10.33.141.202)
m31200| 2014-11-26T14:36:17.687-0500 D NETWORK [conn9] connected connection!
m29000| 2014-11-26T14:36:17.702-0500 I ACCESS [conn7] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:36:17.702-0500 I SHARDING [conn9] collection fooSharded.barSharded was previously unsharded, new metadata loaded with shard version 1|0||54762bb19255d3d73a3c7ae4
m31200| 2014-11-26T14:36:17.702-0500 I SHARDING [conn9] collection version was loaded at version 1|0||54762bb19255d3d73a3c7ae4, took 15ms
m31200| 2014-11-26T14:36:17.702-0500 I QUERY [conn9] command admin.$cmd command: setShardVersion { setShardVersion: "fooSharded.barSharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs1", shardHost: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", version: Timestamp 1000|0, versionEpoch: ObjectId('54762bb19255d3d73a3c7ae4'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:146 16ms
m30999| 2014-11-26T14:36:17.702-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:17-54762bb19255d3d73a3c7ae5", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030577702), what: "shardCollection", ns: "fooSharded.barSharded", details: { version: "1|0||54762bb19255d3d73a3c7ae4" } }
m29000| 2014-11-26T14:36:17.702-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:36:17.762-0500 I COMMAND [conn1] splitting chunk [{ _id: MinKey },{ _id: MaxKey }) in collection fooSharded.barSharded on shard test-rs1
m31200| 2014-11-26T14:36:17.762-0500 I SHARDING [conn7] received splitChunk request: { splitChunk: "fooSharded.barSharded", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "test-rs1", splitKeys: [ { _id: 0.0 } ], shardId: "fooSharded.barSharded-_id_MinKey", configdb: "ip-10-33-141-202:29000", epoch: ObjectId('54762bb19255d3d73a3c7ae4') }
m31200| 2014-11-26T14:36:17.762-0500 D SHARDING [conn7] created new distributed lock for fooSharded.barSharded on ip-10-33-141-202:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31200| 2014-11-26T14:36:17.762-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:29000
m31200| 2014-11-26T14:36:17.763-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m29000| 2014-11-26T14:36:17.763-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41593 #8 (8 connections now open)
m31200| 2014-11-26T14:36:17.763-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:29000 (10.33.141.202)
m31200| 2014-11-26T14:36:17.763-0500 D NETWORK [conn7] connected connection!
m29000| 2014-11-26T14:36:17.778-0500 I ACCESS [conn8] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:36:17.779-0500 D SHARDING [conn7] trying to acquire new distributed lock for fooSharded.barSharded on ip-10-33-141-202:29000 ( lock timeout : 900000, ping interval : 30000, process : ip-10-33-141-202:31200:1417030577:289435846 )
m31200| 2014-11-26T14:36:17.779-0500 I SHARDING [LockPinger] creating distributed lock ping thread for ip-10-33-141-202:29000 and process ip-10-33-141-202:31200:1417030577:289435846 (sleeping for 30000ms)
m31200| 2014-11-26T14:36:17.779-0500 D NETWORK [LockPinger] creating new connection to:ip-10-33-141-202:29000
m31200| 2014-11-26T14:36:17.779-0500 D SHARDING [conn7] inserting initial doc in config.locks for lock fooSharded.barSharded
m31200| 2014-11-26T14:36:17.779-0500 D SHARDING [conn7] about to acquire distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030577:289435846'
m31200| 2014-11-26T14:36:17.779-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m29000| 2014-11-26T14:36:17.779-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41594 #9 (9 connections now open)
m31200| 2014-11-26T14:36:17.780-0500 D NETWORK [LockPinger] connected to server ip-10-33-141-202:29000 (10.33.141.202)
m31200| 2014-11-26T14:36:17.780-0500 D NETWORK [LockPinger] connected connection!
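Versions such as `1|0||54762bb19255d3d73a3c7ae4` in the sharding lines above read as `major|minor||epoch`, where the epoch is the 24-hex-digit ObjectId minted when the collection was sharded (visible in the "using new epoch" line earlier). A small sketch that pulls those parts back out of the logged string (the helper and field names are my own, not a MongoDB API):

```javascript
// Parse a chunk version string as it is printed in these sharding log lines.
function parseChunkVersion(s) {
  const m = /^(\d+)\|(\d+)\|\|([0-9a-f]{24})$/.exec(s);
  if (!m) throw new Error("not a chunk version: " + s);
  return { major: Number(m[1]), minor: Number(m[2]), epoch: m[3] };
}

// Version logged by the ChunkManager after the split below:
const v = parseChunkVersion("1|2||54762bb19255d3d73a3c7ae4");
console.log(v.major, v.minor); // 1 2
```

Reading the log this way, the split that follows bumps only the minor component (1|0 to 1|1 and 1|2) while the epoch stays fixed, which is how the shards and mongos detect stale metadata against each other.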
m31200| 2014-11-26T14:36:17.780-0500 I SHARDING [conn7] distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030577:289435846' acquired, ts : 54762bb167f6f077e3000834
m31200| 2014-11-26T14:36:17.780-0500 I SHARDING [conn7] remotely refreshing metadata for fooSharded.barSharded based on current shard version 1|0||54762bb19255d3d73a3c7ae4, current metadata version is 1|0||54762bb19255d3d73a3c7ae4
m31200| 2014-11-26T14:36:17.781-0500 I SHARDING [conn7] metadata of collection fooSharded.barSharded already up to date (shard version : 1|0||54762bb19255d3d73a3c7ae4, took 0ms)
m31200| 2014-11-26T14:36:17.781-0500 I SHARDING [conn7] splitChunk accepted at version 1|0||54762bb19255d3d73a3c7ae4
m31200| 2014-11-26T14:36:17.781-0500 D SHARDING [conn7] before split on { min: { _id: MinKey }, max: { _id: MaxKey } }
m31200| 2014-11-26T14:36:17.781-0500 D SHARDING [conn7] splitChunk update: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "fooSharded.barSharded-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('54762bb19255d3d73a3c7ae4'), ns: "fooSharded.barSharded", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "test-rs1" }, o2: { _id: "fooSharded.barSharded-_id_MinKey" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "fooSharded.barSharded-_id_0.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('54762bb19255d3d73a3c7ae4'), ns: "fooSharded.barSharded", min: { _id: 0.0 }, max: { _id: MaxKey }, shard: "test-rs1" }, o2: { _id: "fooSharded.barSharded-_id_0.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "fooSharded.barSharded" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|0 } } ] }
m31200| 2014-11-26T14:36:17.781-0500 I SHARDING [conn7] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:17-54762bb167f6f077e3000835", server: "ip-10-33-141-202", clientAddr: "10.33.141.202:40662", time: new Date(1417030577781), what: "split", ns: "fooSharded.barSharded", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey } }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('54762bb19255d3d73a3c7ae4') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('54762bb19255d3d73a3c7ae4') } } }
m31200| 2014-11-26T14:36:17.782-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:29000
m31200| 2014-11-26T14:36:17.782-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m29000| 2014-11-26T14:36:17.782-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41595 #10 (10 connections now open)
m31200| 2014-11-26T14:36:17.782-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:29000 (10.33.141.202)
m31200| 2014-11-26T14:36:17.782-0500 D NETWORK [conn7] connected connection!
m31201| 2014-11-26T14:36:17.794-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m29000| 2014-11-26T14:36:17.801-0500 I ACCESS [conn9] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:36:17.802-0500 I SHARDING [LockPinger] cluster ip-10-33-141-202:29000 pinged successfully at Wed Nov 26 14:36:17 2014 by distributed lock pinger 'ip-10-33-141-202:29000/ip-10-33-141-202:31200:1417030577:289435846', sleeping for 30000ms
m29000| 2014-11-26T14:36:17.803-0500 I ACCESS [conn10] Successfully authenticated as principal __system on local
m29000| 2014-11-26T14:36:17.803-0500 I STORAGE [conn10] CMD fsync: sync:1 lock:0
m31200| 2014-11-26T14:36:17.817-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m29000| 2014-11-26T14:36:17.941-0500 I QUERY [conn10] command admin.$cmd command: fsync { fsync: true } ntoreturn:1 keyUpdates:0 reslen:51 137ms
m31200| 2014-11-26T14:36:17.941-0500 I SHARDING [conn7] distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030577:289435846' unlocked.
m31200| 2014-11-26T14:36:17.942-0500 I QUERY [conn7] command admin.$cmd command: splitChunk { splitChunk: "fooSharded.barSharded", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "test-rs1", splitKeys: [ { _id: 0.0 } ], shardId: "fooSharded.barSharded-_id_MinKey", configdb: "ip-10-33-141-202:29000", epoch: ObjectId('54762bb19255d3d73a3c7ae4') } ntoreturn:1 keyUpdates:0 reslen:37 179ms
m30999| 2014-11-26T14:36:17.942-0500 I SHARDING [conn1] ChunkManager: time to load chunks for fooSharded.barSharded: 0ms sequenceNumber: 3 version: 1|2||54762bb19255d3d73a3c7ae4 based on: 1|0||54762bb19255d3d73a3c7ae4
m30999| 2014-11-26T14:36:17.943-0500 I COMMAND [conn1] CMD: movechunk: { moveChunk: "fooSharded.barSharded", find: { _id: -1.0 }, to: "test-rs0" }
m30999| 2014-11-26T14:36:17.943-0500 I SHARDING [conn1] moving chunk ns: fooSharded.barSharded moving ( ns: fooSharded.barSharded, shard: test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201, lastmod: 1|1||000000000000000000000000, min: { _id: MinKey }, max: { _id: 0.0 }) test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201 -> test-rs0:test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101
m31200| 2014-11-26T14:36:17.943-0500 D SHARDING [conn7] found 3 shards listed on config server(s): ip-10-33-141-202:29000 (10.33.141.202)
m31200| 2014-11-26T14:36:17.943-0500 I SHARDING [conn7] received moveChunk request: { moveChunk: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", to: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", fromShard: "test-rs1", toShard: "test-rs0", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "fooSharded.barSharded-_id_MinKey", configdb: "ip-10-33-141-202:29000", secondaryThrottle: true, waitForDelete: false, maxTimeMS: 0, epoch: ObjectId('54762bb19255d3d73a3c7ae4') }
m31200| 2014-11-26T14:36:17.943-0500 D SHARDING [conn7] created new distributed lock for fooSharded.barSharded on ip-10-33-141-202:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m31200| 2014-11-26T14:36:17.944-0500 D SHARDING [conn7] trying to acquire new distributed lock for fooSharded.barSharded on ip-10-33-141-202:29000 ( lock timeout : 900000, ping interval : 30000, process : ip-10-33-141-202:31200:1417030577:289435846 )
m31200| 2014-11-26T14:36:17.944-0500 D SHARDING [conn7] about to acquire distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030577:289435846'
m31200| 2014-11-26T14:36:17.945-0500 I SHARDING [conn7] distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030577:289435846' acquired, ts : 54762bb167f6f077e3000836
m31200| 2014-11-26T14:36:17.945-0500 I SHARDING [conn7] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:17-54762bb167f6f077e3000837", server: "ip-10-33-141-202", clientAddr: "10.33.141.202:40662", time: new Date(1417030577945), what: "moveChunk.start", ns: "fooSharded.barSharded", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "test-rs1", to: "test-rs0" } }
m29000| 2014-11-26T14:36:17.945-0500 I STORAGE [conn10] CMD fsync: sync:1 lock:0
m31101| 2014-11-26T14:36:17.985-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31200| 2014-11-26T14:36:18.010-0500 I SHARDING [conn7] remotely refreshing metadata for fooSharded.barSharded based on current shard version 1|2||54762bb19255d3d73a3c7ae4, current metadata version is 1|2||54762bb19255d3d73a3c7ae4
m31200| 2014-11-26T14:36:18.010-0500 I SHARDING [conn7] metadata of collection fooSharded.barSharded already up to date (shard version : 1|2||54762bb19255d3d73a3c7ae4, took 0ms)
m31200| 2014-11-26T14:36:18.010-0500 I SHARDING [conn7] moveChunk request accepted at version 1|2||54762bb19255d3d73a3c7ae4
m31200| 2014-11-26T14:36:18.010-0500 I SHARDING [conn7] moveChunk number of documents: 0
m31200| 2014-11-26T14:36:18.010-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:31100
m31200| 2014-11-26T14:36:18.011-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31100| 2014-11-26T14:36:18.011-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38220 #12 (8 connections now open)
m31200| 2014-11-26T14:36:18.011-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31100 (10.33.141.202)
m31200| 2014-11-26T14:36:18.011-0500 D NETWORK [conn7] connected connection!
m31100| 2014-11-26T14:36:18.013-0500 I QUERY [conn12] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4A4956336A61376A5A2B2F6330704963754D425833686543774D42414A455975) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:36:18.022-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31101| 2014-11-26T14:36:18.022-0500 I REPL [ReplicationExecutor] syncing from: ip-10-33-141-202:31100
m31101| 2014-11-26T14:36:18.023-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31100| 2014-11-26T14:36:18.023-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38221 #13 (9 connections now open)
m31101| 2014-11-26T14:36:18.023-0500 D NETWORK [rsBackgroundSync] connected to server ip-10-33-141-202:31100 (10.33.141.202)
m31100| 2014-11-26T14:36:18.026-0500 I QUERY [conn13] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D654E6D6956702F547A516E4C46376C616C736E5A79703348442B763555474C57) } ntoreturn:1 keyUpdates:0 reslen:179 1ms
m31100| 2014-11-26T14:36:18.028-0500 I QUERY [conn12] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4A4956336A61376A5A2B2F6330704963754D425833686543774D42414A45597547765558704B497A73796E33676D73713772495469422F42656D6E3664...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:36:18.028-0500 I ACCESS [conn12] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:36:18.028-0500 I QUERY [conn12] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:36:18.028-0500 I SHARDING [conn12] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
m31100| 2014-11-26T14:36:18.029-0500 D SHARDING [conn12] config string : ip-10-33-141-202:29000
m31100| 2014-11-26T14:36:18.029-0500 I SHARDING [conn12] remote client 10.33.141.202:38220 initialized this host as shard test-rs0
m31100| 2014-11-26T14:36:18.029-0500 I SHARDING [conn12] remotely refreshing metadata for fooSharded.barSharded, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
m31100| 2014-11-26T14:36:18.029-0500 D NETWORK [conn12] creating new connection to:ip-10-33-141-202:29000
m31100| 2014-11-26T14:36:18.029-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m29000| 2014-11-26T14:36:18.029-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41598 #11 (11 connections now open)
m31100| 2014-11-26T14:36:18.030-0500 D NETWORK [conn12] connected to server ip-10-33-141-202:29000 (10.33.141.202)
m31100| 2014-11-26T14:36:18.030-0500 D NETWORK [conn12] connected connection!
m31100| 2014-11-26T14:36:18.048-0500 I QUERY [conn13] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D654E6D6956702F547A516E4C46376C616C736E5A79703348442B763555474C5751757A61694E5637517A6877516B7845644275546457564A615342384E...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:36:18.049-0500 I ACCESS [conn13] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:36:18.049-0500 I QUERY [conn13] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:36:18.049-0500 I QUERY [conn13] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31101| 2014-11-26T14:36:18.049-0500 D REPL [SyncSourceFeedback] resetting connection in sync source feedback
m31101| 2014-11-26T14:36:18.049-0500 I REPL [SyncSourceFeedback] replset setting syncSourceFeedback to ip-10-33-141-202:31100
m31100| 2014-11-26T14:36:18.049-0500 I QUERY [conn13] query local.oplog.rs query: { ts: { $gte: Timestamp 1417030556000|1 } } planSummary: COLLSCAN cursorid:17667035805 ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:4 keyUpdates:0 nreturned:4 reslen:436 0ms
m31101| 2014-11-26T14:36:18.049-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31100| 2014-11-26T14:36:18.049-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38223 #14 (10 connections now open)
m31101| 2014-11-26T14:36:18.049-0500 D STORAGE [repl writer worker 15] create collection fooUnsharded.barUnsharded {}
m31101| 2014-11-26T14:36:18.050-0500 D STORAGE [repl writer worker 15] stored meta data for fooUnsharded.barUnsharded @ 0:6
m31101| 2014-11-26T14:36:18.050-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createRecordStore uri: table:collection-9-1404722688054298599 config:
type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31101| 2014-11-26T14:36:18.050-0500 D NETWORK [SyncSourceFeedback] connected to server ip-10-33-141-202:31100 (10.33.141.202) m31100| 2014-11-26T14:36:18.052-0500 I QUERY [conn14] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4D496F39705774355A657564516430397431572F44537A714232636677587648) } ntoreturn:1 keyUpdates:0 reslen:179 1ms m29000| 2014-11-26T14:36:18.056-0500 I ACCESS [conn11] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:36:18.056-0500 I SHARDING [conn12] collection fooSharded.barSharded was previously unsharded, new metadata loaded with shard version 0|0||54762bb19255d3d73a3c7ae4 m31101| 2014-11-26T14:36:18.056-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31100| 2014-11-26T14:36:18.057-0500 I SHARDING [conn12] collection version was loaded at version 1|2||54762bb19255d3d73a3c7ae4, took 27ms m31100| 2014-11-26T14:36:18.057-0500 I QUERY [conn12] command admin.$cmd command: _recvChunkStart { _recvChunkStart: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", fromShardName: "test-rs1", toShardName: "test-rs0", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, configServer: "ip-10-33-141-202:29000", secondaryThrottle: true } ntoreturn:1 keyUpdates:0 reslen:47 28ms m31100| 2014-11-26T14:36:18.057-0500 I SHARDING [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 0.0 } for collection fooSharded.barSharded from test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201 at epoch 54762bb19255d3d73a3c7ae4 m31100| 2014-11-26T14:36:18.057-0500 I NETWORK [migrateThread] starting new replica set monitor for replica set test-rs1 with seeds ip-10-33-141-202:31200,ip-10-33-141-202:31201 m31100| 
2014-11-26T14:36:18.057-0500 D COMMAND [ReplicaSetMonitorWatcher] BackgroundJob starting: ReplicaSetMonitorWatcher m31100| 2014-11-26T14:36:18.057-0500 D NETWORK [migrateThread] creating new connection to:ip-10-33-141-202:31200 m31100| 2014-11-26T14:36:18.057-0500 I NETWORK [ReplicaSetMonitorWatcher] starting m31101| 2014-11-26T14:36:18.056-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:36:18.056-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:36:18.056-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:36:18.056-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:36:18.056-0500 D STORAGE [repl writer worker 15] create uri: table:index-10-1404722688054298599 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "fooUnsharded.barUnsharded" } m31100| 2014-11-26T14:36:18.058-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31200| 2014-11-26T14:36:18.058-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40681 #10 (8 connections now open) m31100| 2014-11-26T14:36:18.058-0500 I QUERY [conn12] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:314 0ms m31200| 2014-11-26T14:36:18.058-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| 2014-11-26T14:36:18.058-0500 D NETWORK [migrateThread] connected to server 
ip-10-33-141-202:31200 (10.33.141.202) m31100| 2014-11-26T14:36:18.058-0500 D NETWORK [migrateThread] connected connection! m31100| 2014-11-26T14:36:18.060-0500 I QUERY [conn12] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:314 0ms m31200| 2014-11-26T14:36:18.060-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| 2014-11-26T14:36:18.061-0500 I QUERY [conn10] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D51614E5035624B326E7A6F68352F377762597A564453716836715668526C3635) } ntoreturn:1 keyUpdates:0 reslen:179 1ms m31101| 2014-11-26T14:36:18.063-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:36:18.063-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:36:18.063-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:36:18.063-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:36:18.063-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:36:18.063-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:36:18.063-0500 D STORAGE [repl writer worker 15] fooUnsharded.barUnsharded: clearing plan cache - collection info cache reset m31101| 2014-11-26T14:36:18.063-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 
m31100| 2014-11-26T14:36:18.064-0500 I QUERY [conn12] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:314 0ms m31200| 2014-11-26T14:36:18.065-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| 2014-11-26T14:36:18.073-0500 I QUERY [conn12] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:314 0ms m31200| 2014-11-26T14:36:18.073-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| 2014-11-26T14:36:18.075-0500 I QUERY [conn14] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4D496F39705774355A657564516430397431572F44537A714232636677587648626D2B3070464F56362F6655656E4E394F5136517050654B4C69643836...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:36:18.075-0500 I ACCESS [conn14] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:36:18.075-0500 I QUERY [conn14] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31101| 2014-11-26T14:36:18.075-0500 D REPL [SyncSourceFeedback] handshaking upstream updater m31100| 2014-11-26T14:36:18.075-0500 I QUERY [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, handshake: { handshake: 
ObjectId('54762b9b14cb7e52ef83f3d5'), member: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31100| 2014-11-26T14:36:18.075-0500 I QUERY [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b9b14cb7e52ef83f3d5'), optime: Timestamp 1417030577000|3, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31100| 2014-11-26T14:36:18.075-0500 I QUERY [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b9b14cb7e52ef83f3d5'), optime: Timestamp 1417030577000|3, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31200| 2014-11-26T14:36:18.080-0500 I QUERY [conn10] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D51614E5035624B326E7A6F68352F377762597A564453716836715668526C36352F4B375335684B4B354C4D795A5272336C39576D49664C516E45506431...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31200| 2014-11-26T14:36:18.081-0500 I ACCESS [conn10] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:36:18.081-0500 I QUERY [conn10] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31200| 2014-11-26T14:36:18.081-0500 I QUERY [conn10] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31200| 2014-11-26T14:36:18.081-0500 I QUERY [conn10] 
command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31100| 2014-11-26T14:36:18.081-0500 D NETWORK [migrateThread] creating new connection to:ip-10-33-141-202:31200 m31100| 2014-11-26T14:36:18.081-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31200| 2014-11-26T14:36:18.082-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40682 #11 (9 connections now open) m31100| 2014-11-26T14:36:18.082-0500 D NETWORK [migrateThread] connected to server ip-10-33-141-202:31200 (10.33.141.202) m31100| 2014-11-26T14:36:18.082-0500 D NETWORK [migrateThread] connected connection! m31200| 2014-11-26T14:36:18.083-0500 I QUERY [conn11] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D2F425243305755485A45496741672B34534B43636950675A2F30354446553657) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31100| 2014-11-26T14:36:18.089-0500 I QUERY [conn12] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:314 0ms m31200| 2014-11-26T14:36:18.089-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| 2014-11-26T14:36:18.096-0500 I QUERY [conn11] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D2F425243305755485A45496741672B34534B43636950675A2F303544465536576A71334C55726D6349477650644357507A576D4F73304B787250705448...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31200| 2014-11-26T14:36:18.096-0500 I ACCESS [conn11] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:36:18.097-0500 I QUERY [conn11] command local.$cmd 
command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31200| 2014-11-26T14:36:18.097-0500 I QUERY [conn11] command admin.$cmd command: getLastError { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:110 0ms m31200| 2014-11-26T14:36:18.097-0500 I QUERY [conn11] query fooSharded.system.namespaces query: { name: "fooSharded.barSharded" } planSummary: EOF ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms m31100| 2014-11-26T14:36:18.097-0500 D STORAGE [migrateThread] create collection fooSharded.barSharded {} m31100| 2014-11-26T14:36:18.097-0500 D STORAGE [migrateThread] stored meta data for fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.097-0500 D STORAGE [migrateThread] WiredTigerKVEngine::createRecordStore uri: table:collection-9--1911027222389114415 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31100| 2014-11-26T14:36:18.104-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31200| 2014-11-26T14:36:18.104-0500 D STORAGE [conn11] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:36:18.104-0500 D STORAGE [conn11] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:36:18.104-0500 I QUERY [conn11] command fooSharded.$cmd command: listIndexes { listIndexes: "barSharded" } ntoreturn:1 keyUpdates:0 reslen:130 0ms m31100| 2014-11-26T14:36:18.104-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.104-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.104-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.104-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.104-0500 D 
STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.104-0500 D STORAGE [migrateThread] create uri: table:index-10--1911027222389114415 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "fooSharded.barSharded" } m31100| 2014-11-26T14:36:18.106-0500 I QUERY [conn13] getmore local.oplog.rs cursorid:17667035805 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:120 54ms m31101| 2014-11-26T14:36:18.106-0500 D STORAGE [repl writer worker 15] create collection fooSharded.barSharded {} m31101| 2014-11-26T14:36:18.106-0500 D STORAGE [repl writer worker 15] stored meta data for fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.106-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createRecordStore uri: table:collection-11-1404722688054298599 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31100| 2014-11-26T14:36:18.110-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.110-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.110-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.110-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.110-0500 I INDEX [migrateThread] build index on: fooSharded.barSharded properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "fooSharded.barSharded" } m31100| 2014-11-26T14:36:18.110-0500 I INDEX [migrateThread] building index using bulk method m31100| 2014-11-26T14:36:18.110-0500 D STORAGE [migrateThread] fooSharded.barSharded: clearing plan cache - collection info cache reset m31100| 2014-11-26T14:36:18.110-0500 D INDEX [migrateThread] bulk commit starting for index: _id_ m31101| 
2014-11-26T14:36:18.110-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.110-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.110-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.110-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.110-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.111-0500 D STORAGE [repl writer worker 15] create uri: table:index-12-1404722688054298599 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "fooSharded.barSharded" } m31100| 2014-11-26T14:36:18.111-0500 D INDEX [migrateThread] done building bottom layer, going to commit m31100| 2014-11-26T14:36:18.116-0500 I INDEX [migrateThread] build index done. scanned 0 total records. 
0 secs m31100| 2014-11-26T14:36:18.116-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.116-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:36:18.116-0500 D STORAGE [migrateThread] fooSharded.barSharded: clearing plan cache - collection info cache reset m31100| 2014-11-26T14:36:18.116-0500 D STORAGE [migrateThread] fooSharded.barSharded: clearing plan cache - collection info cache reset m31100| 2014-11-26T14:36:18.116-0500 I SHARDING [migrateThread] Deleter starting delete for: fooSharded.barSharded from { _id: MinKey } -> { _id: 0.0 }, with opId: 144 m31100| 2014-11-26T14:36:18.117-0500 D SHARDING [migrateThread] begin removal of { : MinKey } to { : 0.0 } in fooSharded.barSharded with write concern: { w: 2, wtimeout: 60000 } m31101| 2014-11-26T14:36:18.117-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.117-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.117-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.117-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.117-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.117-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:36:18.117-0500 D STORAGE [repl writer worker 15] fooSharded.barSharded: clearing plan cache - collection info cache reset m31101| 2014-11-26T14:36:18.117-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31100| 2014-11-26T14:36:18.117-0500 I SHARDING [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 
0ms m31100| 2014-11-26T14:36:18.117-0500 D SHARDING [migrateThread] end removal of { : MinKey } to { : 0.0 } in fooSharded.barSharded (took 0ms) m31100| 2014-11-26T14:36:18.117-0500 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for fooSharded.barSharded from { _id: MinKey } -> { _id: 0.0 } m31100| 2014-11-26T14:36:18.117-0500 I QUERY [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b9b14cb7e52ef83f3d5'), optime: Timestamp 1417030578000|1, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31100| 2014-11-26T14:36:18.119-0500 I QUERY [conn13] getmore local.oplog.rs cursorid:17667035805 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:192 10ms m31100| 2014-11-26T14:36:18.119-0500 I QUERY [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b9b14cb7e52ef83f3d5'), optime: Timestamp 1417030578000|2, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31100| 2014-11-26T14:36:18.119-0500 D SHARDING [migrateThread] rangeDeleter took 0 seconds waiting for deletes to be replicated to majority nodes m31200| 2014-11-26T14:36:18.119-0500 I QUERY [conn11] command admin.$cmd command: _migrateClone { _migrateClone: 1 } ntoreturn:1 keyUpdates:0 reslen:51 0ms m31200| 2014-11-26T14:36:18.119-0500 I QUERY [conn11] command admin.$cmd command: _transferMods { _transferMods: 1 } ntoreturn:1 keyUpdates:0 reslen:51 0ms m31100| 2014-11-26T14:36:18.119-0500 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section m31100| 2014-11-26T14:36:18.119-0500 I SHARDING 
[migrateThread] migrate commit succeeded flushing to secondaries for 'fooSharded.barSharded' { _id: MinKey } -> { _id: 0.0 } m31200| 2014-11-26T14:36:18.119-0500 I QUERY [conn11] command admin.$cmd command: _transferMods { _transferMods: 1 } ntoreturn:1 keyUpdates:0 reslen:51 0ms m31100| 2014-11-26T14:36:18.121-0500 I QUERY [conn12] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:315 0ms m31200| 2014-11-26T14:36:18.121-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| 2014-11-26T14:36:18.121-0500 I SHARDING [conn7] About to check if it is safe to enter critical section m31200| 2014-11-26T14:36:18.122-0500 I SHARDING [conn7] About to enter migrate critical section m31200| 2014-11-26T14:36:18.122-0500 I SHARDING [conn7] moveChunk setting version to: 2|0||54762bb19255d3d73a3c7ae4 m31200| 2014-11-26T14:36:18.122-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:31100 m31200| 2014-11-26T14:36:18.122-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31100| 2014-11-26T14:36:18.122-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38226 #15 (11 connections now open) m31200| 2014-11-26T14:36:18.122-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31100 (10.33.141.202) m31200| 2014-11-26T14:36:18.122-0500 D NETWORK [conn7] connected connection! 
m31100| 2014-11-26T14:36:18.124-0500 I QUERY [conn15] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6F5464454255656E35477151314F7370317973624B546E39737735494E673935) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31200| 2014-11-26T14:36:18.130-0500 I QUERY [conn11] command admin.$cmd command: _transferMods { _transferMods: 1 } ntoreturn:1 keyUpdates:0 reslen:51 0ms m31100| 2014-11-26T14:36:18.137-0500 I QUERY [conn15] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D6F5464454255656E35477151314F7370317973624B546E39737735494E67393561766C6A746C5539336847454D384C37303831586D566D4E4366667957...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:36:18.137-0500 I ACCESS [conn15] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:36:18.137-0500 I QUERY [conn15] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31200| 2014-11-26T14:36:18.140-0500 I QUERY [conn11] command admin.$cmd command: _transferMods { _transferMods: 1 } ntoreturn:1 keyUpdates:0 reslen:51 0ms m31100| 2014-11-26T14:36:18.140-0500 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'fooSharded.barSharded' { _id: MinKey } -> { _id: 0.0 } m31100| 2014-11-26T14:36:18.140-0500 I SHARDING [migrateThread] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:18-54762bb2eff30b03c3b1cccb", server: "ip-10-33-141-202", clientAddr: ":27017", time: new Date(1417030578140), what: "moveChunk.to", ns: "fooSharded.barSharded", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step 1 of 5: 59, step 2 of 5: 2, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 20, note: "success" } } m31100| 2014-11-26T14:36:18.140-0500 D NETWORK [migrateThread] creating new connection to:ip-10-33-141-202:29000 m31100| 
2014-11-26T14:36:18.141-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m29000| 2014-11-26T14:36:18.141-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41603 #12 (12 connections now open) m31100| 2014-11-26T14:36:18.141-0500 D NETWORK [migrateThread] connected to server ip-10-33-141-202:29000 (10.33.141.202) m31100| 2014-11-26T14:36:18.141-0500 D NETWORK [migrateThread] connected connection! m29000| 2014-11-26T14:36:18.155-0500 I ACCESS [conn12] Successfully authenticated as principal __system on local m29000| 2014-11-26T14:36:18.156-0500 I STORAGE [conn12] CMD fsync: sync:1 lock:0 m31100| 2014-11-26T14:36:18.208-0500 I QUERY [conn15] command admin.$cmd command: _recvChunkCommit { _recvChunkCommit: 1 } ntoreturn:1 keyUpdates:0 reslen:313 70ms m31200| 2014-11-26T14:36:18.208-0500 I SHARDING [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m31200| 2014-11-26T14:36:18.208-0500 I SHARDING [conn7] moveChunk updating self version to: 2|1||54762bb19255d3d73a3c7ae4 through { _id: 0.0 } -> { _id: MaxKey } for collection 'fooSharded.barSharded' m31200| 2014-11-26T14:36:18.208-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:29000 m31200| 2014-11-26T14:36:18.208-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m29000| 2014-11-26T14:36:18.209-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41604 #13 (13 connections now open) m31200| 2014-11-26T14:36:18.209-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:29000 (10.33.141.202) m31200| 2014-11-26T14:36:18.209-0500 D NETWORK [conn7] connected connection! 
m29000| 2014-11-26T14:36:18.223-0500 I ACCESS [conn13] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:36:18.224-0500 I SHARDING [conn7] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:18-54762bb267f6f077e3000838", server: "ip-10-33-141-202", clientAddr: "10.33.141.202:40662", time: new Date(1417030578224), what: "moveChunk.commit", ns: "fooSharded.barSharded", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "test-rs1", to: "test-rs0", cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 } }
m29000| 2014-11-26T14:36:18.224-0500 I STORAGE [conn10] CMD fsync: sync:1 lock:0
m31200| 2014-11-26T14:36:18.301-0500 I SHARDING [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31200| 2014-11-26T14:36:18.301-0500 I SHARDING [conn7] MigrateFromStatus::done coll lock for fooSharded.barSharded acquired
m31200| 2014-11-26T14:36:18.301-0500 I SHARDING [conn7] forking for cleanup of chunk data
m31200| 2014-11-26T14:36:18.301-0500 I SHARDING [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31200| 2014-11-26T14:36:18.301-0500 I SHARDING [conn7] MigrateFromStatus::done coll lock for fooSharded.barSharded acquired
m31200| 2014-11-26T14:36:18.301-0500 I SHARDING [RangeDeleter] Deleter starting delete for: fooSharded.barSharded from { _id: MinKey } -> { _id: 0.0 }, with opId: 5
m31200| 2014-11-26T14:36:18.301-0500 D SHARDING [RangeDeleter] begin removal of { : MinKey } to { : 0.0 } in fooSharded.barSharded with write concern: { w: 2, wtimeout: 60000 }
m31200| 2014-11-26T14:36:18.301-0500 I SHARDING [RangeDeleter] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31200| 2014-11-26T14:36:18.301-0500 D SHARDING [RangeDeleter] end removal of { : MinKey } to { : 0.0 } in fooSharded.barSharded (took 0ms)
m31200| 2014-11-26T14:36:18.301-0500 I SHARDING [RangeDeleter] rangeDeleter deleted 0 documents for fooSharded.barSharded from { _id: MinKey } -> { _id: 0.0 }
m31200| 2014-11-26T14:36:18.301-0500 D SHARDING [RangeDeleter] rangeDeleter took 0 seconds waiting for deletes to be replicated to majority nodes
m31200| 2014-11-26T14:36:18.302-0500 I SHARDING [conn7] distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030577:289435846' unlocked.
m31200| 2014-11-26T14:36:18.302-0500 I SHARDING [conn7] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:36:18-54762bb267f6f077e3000839", server: "ip-10-33-141-202", clientAddr: "10.33.141.202:40662", time: new Date(1417030578302), what: "moveChunk.from", ns: "fooSharded.barSharded", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step 1 of 6: 0, step 2 of 6: 66, step 3 of 6: 46, step 4 of 6: 64, step 5 of 6: 179, step 6 of 6: 0, to: "test-rs0", from: "test-rs1", note: "success" } }
m29000| 2014-11-26T14:36:18.302-0500 I STORAGE [conn10] CMD fsync: sync:1 lock:0
m31200| 2014-11-26T14:36:18.362-0500 I QUERY [conn7] command admin.$cmd command: moveChunk { moveChunk: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", to: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", fromShard: "test-rs1", toShard: "test-rs0", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "fooSharded.barSharded-_id_MinKey", configdb: "ip-10-33-141-202:29000", secondaryThrottle: true, waitForDelete: false, maxTimeMS: 0, epoch: ObjectId('54762bb19255d3d73a3c7ae4') } ntoreturn:1 keyUpdates:0 reslen:37 418ms
m30999| 2014-11-26T14:36:18.362-0500 I SHARDING [conn1] ChunkManager: time to load chunks for fooSharded.barSharded: 0ms sequenceNumber: 4 version: 2|1||54762bb19255d3d73a3c7ae4 based on: 1|2||54762bb19255d3d73a3c7ae4
--- Sharding Status ---
  sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("54762baf9255d3d73a3c7ad7") }
  shards:
	{ "_id" : "test-rs0", "host" : "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101" }
	{ "_id" : "test-rs1", "host" : "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201" }
	{ "_id" : "test-rs2", "host" : "test-rs2/ip-10-33-141-202:31300,ip-10-33-141-202:31301" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "fooUnsharded", "partitioned" : false, "primary" : "test-rs0" }
	{ "_id" : "fooSharded", "partitioned" : true, "primary" : "test-rs1" }
		fooSharded.barSharded
			shard key: { "_id" : 1 }
			chunks:
				test-rs0	1
				test-rs1	1
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : 0 } on : test-rs0 Timestamp(2, 0)
			{ "_id" : 0 } -->> { "_id" : { "$maxKey" : 1 } } on : test-rs1 Timestamp(2, 1)
----
Setting up database users...
----
m30999| 2014-11-26T14:36:18.383-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030575:1804289383' acquired, ts : 54762bb29255d3d73a3c7ae6
m29000| 2014-11-26T14:36:18.383-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m29000| 2014-11-26T14:36:18.453-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:36:18.486-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030575:1804289383' unlocked.
Successfully added user: { "user" : "shardedDBUser", "roles" : [ "readWrite" ] }
m30999| 2014-11-26T14:36:18.501-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030575:1804289383' acquired, ts : 54762bb29255d3d73a3c7ae7
m29000| 2014-11-26T14:36:18.501-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m29000| 2014-11-26T14:36:18.581-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:36:18.617-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030575:1804289383' unlocked.
Successfully added user: { "user" : "unshardedDBUser", "roles" : [ "readWrite" ] }
----
Inserting initial data...
----
m30999| 2014-11-26T14:36:18.618-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49126 #2 (2 connections now open)
m30999| 2014-11-26T14:36:18.633-0500 I ACCESS [conn2] Successfully authenticated as principal shardedDBUser on fooSharded
m30999| 2014-11-26T14:36:18.648-0500 I ACCESS [conn2] Successfully authenticated as principal unshardedDBUser on fooUnsharded
m30999| 2014-11-26T14:36:18.663-0500 I ACCESS [conn2] Successfully authenticated as principal shardedDBUser on fooSharded
m30999| 2014-11-26T14:36:18.677-0500 I ACCESS [conn2] Successfully authenticated as principal unshardedDBUser on fooUnsharded
m31100| 2014-11-26T14:36:18.678-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: -1.0 } ninserted:0 keyUpdates:0 exception: stale shard version detected before write, received 2|0||54762bb19255d3d73a3c7ae4 but local version is 0|0||54762bb19255d3d73a3c7ae4 code:63 0ms
m31100| 2014-11-26T14:36:18.678-0500 D SHARDING [conn8] metadata version update requested for fooSharded.barSharded, from shard version 0|0||54762bb19255d3d73a3c7ae4 to 2|0||54762bb19255d3d73a3c7ae4, need to verify with config server
m31100| 2014-11-26T14:36:18.678-0500 I SHARDING [conn8] remotely refreshing metadata for fooSharded.barSharded with requested shard version 2|0||54762bb19255d3d73a3c7ae4 based on current shard version 0|0||54762bb19255d3d73a3c7ae4, current metadata version is 1|2||54762bb19255d3d73a3c7ae4
m31100| 2014-11-26T14:36:18.679-0500 I SHARDING [conn8] updating metadata for fooSharded.barSharded from shard version 0|0||54762bb19255d3d73a3c7ae4 to shard version 2|0||54762bb19255d3d73a3c7ae4
m31100| 2014-11-26T14:36:18.679-0500 I SHARDING [conn8] collection version was loaded at version 2|1||54762bb19255d3d73a3c7ae4, took 0ms
m31100| 2014-11-26T14:36:18.679-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: -1.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 2000|0, ObjectId('54762bb19255d3d73a3c7ae4') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:329 0ms
m30999| 2014-11-26T14:36:18.681-0500 I SHARDING [conn2] ChunkManager: time to load chunks for fooSharded.barSharded: 0ms sequenceNumber: 5 version: 2|1||54762bb19255d3d73a3c7ae4 based on: (empty)
m30999| 2014-11-26T14:36:18.681-0500 I SHARDING [conn2] ChunkManager: time to load chunks for fooSharded.barSharded: 0ms sequenceNumber: 6 version: 2|1||54762bb19255d3d73a3c7ae4 based on: 2|1||54762bb19255d3d73a3c7ae4
m30999| 2014-11-26T14:36:18.681-0500 W SHARDING [conn2] chunk manager reload forced for collection 'fooSharded.barSharded', config version is 2|1||54762bb19255d3d73a3c7ae4
m31100| 2014-11-26T14:36:18.681-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: -1.0 } ninserted:1 keyUpdates:0 0ms
m31100| 2014-11-26T14:36:18.681-0500 I QUERY [conn13] getmore local.oplog.rs cursorid:17667035805 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:116 560ms
m30999| 2014-11-26T14:36:18.681-0500 I NETWORK [conn2] scoped connection to ip-10-33-141-202:29000 not being returned to the pool
m31100| 2014-11-26T14:36:18.681-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: -1.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 2000|0, ObjectId('54762bb19255d3d73a3c7ae4') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
m29000| 2014-11-26T14:36:18.682-0500 I NETWORK [conn3] end connection 10.33.141.202:41569 (12 connections now open)
m31100| 2014-11-26T14:36:18.682-0500 I QUERY [conn7] command admin.$cmd command: splitVector { splitVector: "fooSharded.barSharded", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 937355, maxSplitPoints: 0, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31100| 2014-11-26T14:36:18.682-0500 I QUERY [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b9b14cb7e52ef83f3d5'), optime: Timestamp 1417030578000|3, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31200| 2014-11-26T14:36:18.683-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: 1.0 } ninserted:1 keyUpdates:0 0ms
m31200| 2014-11-26T14:36:18.683-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: 1.0 } ], ordered: true, metadata: { shardName: "test-rs1", shardVersion: [ Timestamp 2000|1, ObjectId('54762bb19255d3d73a3c7ae4') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
m30999| 2014-11-26T14:36:18.683-0500 I NETWORK [conn2] scoped connection to ip-10-33-141-202:29000 not being returned to the pool
m29000| 2014-11-26T14:36:18.683-0500 I NETWORK [conn4] end connection 10.33.141.202:41570 (11 connections now open)
m31200| 2014-11-26T14:36:18.683-0500 I QUERY [conn7] command admin.$cmd command: splitVector { splitVector: "fooSharded.barSharded", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 937351, maxSplitPoints: 0, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31100| 2014-11-26T14:36:18.685-0500 I WRITE [conn8] insert fooUnsharded.barUnsharded query: { _id: 1.0 } ninserted:1 keyUpdates:0 0ms
m31100| 2014-11-26T14:36:18.685-0500 I QUERY [conn8] command fooUnsharded.$cmd command: insert { insert: "barUnsharded", documents: [ { _id: 1.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
m31100| 2014-11-26T14:36:18.686-0500 I QUERY [conn13] getmore local.oplog.rs cursorid:17667035805 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:120 2ms
----
Stopping primary of third shard...
----
m31100| 2014-11-26T14:36:18.686-0500 I QUERY [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b9b14cb7e52ef83f3d5'), optime: Timestamp 1417030578000|4, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m30999| 2014-11-26T14:36:18.686-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49127 #3 (3 connections now open)
m30999| 2014-11-26T14:36:18.701-0500 I ACCESS [conn3] Successfully authenticated as principal shardedDBUser on fooSharded
m30999| 2014-11-26T14:36:18.715-0500 I ACCESS [conn3] Successfully authenticated as principal unshardedDBUser on fooUnsharded
m31300| 2014-11-26T14:36:18.716-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:36:18.716-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
ReplSetTest n: 0 ports: [ 31300, 31301 ] 31300 number
ReplSetTest stop *** Shutting down mongod in port 31300 ***
m31300| 2014-11-26T14:36:18.717-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
m31300| 2014-11-26T14:36:18.717-0500 I REPL [signalProcessingThread] Stopping replication applier threads
m31201| 2014-11-26T14:36:18.817-0500 I REPL [ReplicationExecutor] syncing from: ip-10-33-141-202:31200
m31201| 2014-11-26T14:36:18.818-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31200| 2014-11-26T14:36:18.818-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40688 #12 (10 connections now open)
m31201| 2014-11-26T14:36:18.818-0500 D NETWORK [rsBackgroundSync] connected to server ip-10-33-141-202:31200 (10.33.141.202)
m31200| 2014-11-26T14:36:18.820-0500 I QUERY [conn12] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D533350303776464E6558412F563665356D31733234366D4D74467852356C7338) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31200| 2014-11-26T14:36:18.833-0500 I QUERY [conn12] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D533350303776464E6558412F563665356D31733234366D4D74467852356C73386F4B6D5A587663514A416653335A70706A39365279525778527078534E...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31200| 2014-11-26T14:36:18.833-0500 I ACCESS [conn12] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:36:18.833-0500 I QUERY [conn12] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31200| 2014-11-26T14:36:18.833-0500 I QUERY [conn12] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31201| 2014-11-26T14:36:18.833-0500 D REPL [SyncSourceFeedback] resetting connection in sync source feedback
m31201| 2014-11-26T14:36:18.833-0500 I REPL [SyncSourceFeedback] replset setting syncSourceFeedback to ip-10-33-141-202:31200
m31200| 2014-11-26T14:36:18.833-0500 I QUERY [conn12] query local.oplog.rs query: { ts: { $gte: Timestamp 1417030561000|1 } } planSummary: COLLSCAN cursorid:17673727317 ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:3 keyUpdates:0 nreturned:3 reslen:302 0ms
m31201| 2014-11-26T14:36:18.834-0500 D STORAGE [repl writer worker 15] create collection fooSharded.barSharded {}
m31201| 2014-11-26T14:36:18.834-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31200| 2014-11-26T14:36:18.834-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40689 #13 (11 connections now open)
m31201| 2014-11-26T14:36:18.834-0500 D STORAGE [repl writer worker 15] stored meta data for fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.834-0500 D NETWORK [SyncSourceFeedback] connected to server ip-10-33-141-202:31200 (10.33.141.202)
m31201| 2014-11-26T14:36:18.834-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createRecordStore uri: table:collection-9--7855397372784430281 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31200| 2014-11-26T14:36:18.835-0500 I QUERY [conn13] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D77316E35436E526B5768707066416A687538446E666D7851587A6C4D45443575) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31201| 2014-11-26T14:36:18.838-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.838-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.838-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.838-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.838-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.838-0500 D STORAGE [repl writer worker 15] create uri: table:index-10--7855397372784430281 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "fooSharded.barSharded" }
m31201| 2014-11-26T14:36:18.844-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.844-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.844-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.844-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.844-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.844-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31201| 2014-11-26T14:36:18.844-0500 D STORAGE [repl writer worker 15] fooSharded.barSharded: clearing plan cache - collection info cache reset
m31201| 2014-11-26T14:36:18.844-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:6
m31200| 2014-11-26T14:36:18.849-0500 I QUERY [conn13] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D77316E35436E526B5768707066416A687538446E666D7851587A6C4D454435755A796D6E576E6F59414945714C48506533746D35715464582B5631422B...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31200| 2014-11-26T14:36:18.849-0500 I ACCESS [conn13] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:36:18.849-0500 I QUERY [conn13] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31201| 2014-11-26T14:36:18.849-0500 D REPL [SyncSourceFeedback] handshaking upstream updater
m31200| 2014-11-26T14:36:18.849-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, handshake: { handshake: ObjectId('54762ba1064c511c969d2b23'), member: 1, config: { _id: 1, host: "ip-10-33-141-202:31201", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31200| 2014-11-26T14:36:18.849-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762ba1064c511c969d2b23'), optime: Timestamp 1417030578000|1, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31201", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31200| 2014-11-26T14:36:18.849-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762ba1064c511c969d2b23'), optime: Timestamp 1417030578000|1, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31201", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31300| 2014-11-26T14:36:19.456-0500 I STORAGE [conn2] got request after shutdown()
m31301| 2014-11-26T14:36:19.456-0500 D NETWORK [ReplExecNetThread-6] SocketException: remote: 10.33.141.202:31300 error: 9001 socket exception [CLOSED] server [10.33.141.202:31300]
m31301| 2014-11-26T14:36:19.456-0500 I NETWORK [ReplExecNetThread-6] DBClientCursor::init call() failed
m31301| 2014-11-26T14:36:19.456-0500 D - [ReplExecNetThread-6] User Assertion: 10276:DBClientBase::findN: transport error: ip-10-33-141-202:31300 ns: admin.$cmd query: { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false }
m31301| 2014-11-26T14:36:19.456-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location10276 DBClientBase::findN: transport error: ip-10-33-141-202:31300 ns: admin.$cmd query: { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false }
m31301| 2014-11-26T14:36:19.457-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 1; 0ms have already elapsed
m31301| 2014-11-26T14:36:19.457-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31301| 2014-11-26T14:36:19.457-0500 D NETWORK [ReplExecNetThread-4] connected to server ip-10-33-141-202:31300 (10.33.141.202)
m31300| 2014-11-26T14:36:19.529-0500 I COMMAND [signalProcessingThread] now exiting
m31300| 2014-11-26T14:36:19.529-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
m31300| 2014-11-26T14:36:19.529-0500 I NETWORK [signalProcessingThread] closing listening socket: 19
m31300| 2014-11-26T14:36:19.530-0500 I NETWORK [signalProcessingThread] closing listening socket: 20
m31300| 2014-11-26T14:36:19.530-0500 I NETWORK [signalProcessingThread] closing listening socket: 26
m31300| 2014-11-26T14:36:19.530-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31300.sock
m31300| 2014-11-26T14:36:19.530-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
m31300| 2014-11-26T14:36:19.530-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
m31300| 2014-11-26T14:36:19.530-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me
m31300| 2014-11-26T14:36:19.530-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs
m31300| 2014-11-26T14:36:19.530-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log
m31300| 2014-11-26T14:36:19.530-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset
m31300| 2014-11-26T14:36:19.530-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog
m31300| 2014-11-26T14:36:19.530-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
m31301| 2014-11-26T14:36:19.530-0500 I NETWORK [ReplExecNetThread-4] Socket recv() errno:104 Connection reset by peer 10.33.141.202:31300
m31300| 2014-11-26T14:36:19.530-0500 I NETWORK [conn7] end connection 10.33.141.202:60745 (3 connections now open)
m31301| 2014-11-26T14:36:19.530-0500 I NETWORK [ReplExecNetThread-4] SocketException: remote: 10.33.141.202:31300 error: 9001 socket exception [RECV_ERROR] server [10.33.141.202:31300]
m31301| 2014-11-26T14:36:19.530-0500 I NETWORK [ReplExecNetThread-4] DBClientCursor::init call() failed
m31301| 2014-11-26T14:36:19.530-0500 D - [ReplExecNetThread-4] User Assertion: 10276:DBClientBase::findN: transport error: ip-10-33-141-202:31300 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D315667364265736B6A384548397772367551493565764D3155714B5347376270) }
m31300| 2014-11-26T14:36:19.530-0500 I NETWORK [conn1] end connection 127.0.0.1:51038 (3 connections now open)
m31301| 2014-11-26T14:36:19.530-0500 I NETWORK [conn3] end connection 10.33.141.202:41093 (1 connection now open)
m31301| 2014-11-26T14:36:19.530-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location10276 DBClientBase::findN: transport error: ip-10-33-141-202:31300 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D315667364265736B6A384548397772367551493565764D3155714B5347376270) }
m31301| 2014-11-26T14:36:19.530-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 0; 74ms have already elapsed
m31300| 2014-11-26T14:36:19.530-0500 I NETWORK [conn5] end connection 10.33.141.202:60726 (3 connections now open)
m31300| 2014-11-26T14:36:19.530-0500 I NETWORK [conn6] end connection 10.33.141.202:60744 (3 connections now open)
m31301| 2014-11-26T14:36:19.531-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31301| 2014-11-26T14:36:19.531-0500 W NETWORK [ReplExecNetThread-0] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused
m31301| 2014-11-26T14:36:19.531-0500 D - [ReplExecNetThread-0] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed
m31301| 2014-11-26T14:36:19.531-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed
m31300| 2014-11-26T14:36:19.558-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0
2014-11-26T14:36:19.717-0500 I - shell: stopped mongo program on port 31300
ReplSetTest stop *** Mongod in port 31300 shutdown with code (0) ***
----
Testing active connection with third primary down...
----
m31100| 2014-11-26T14:36:19.718-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38235 #16 (12 connections now open)
m31100| 2014-11-26T14:36:19.720-0500 I QUERY [conn16] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4D2F70454E4672323436732B47325A564E62536156624143533239356E6C4533) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:36:19.733-0500 I QUERY [conn16] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4D2F70454E4672323436732B47325A564E62536156624143533239356E6C45336854734A45656F48347263736C77687165646256697971715465586D58...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:36:19.733-0500 I ACCESS [conn16] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:36:19.733-0500 I QUERY [conn16] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:36:19.733-0500 D SHARDING [conn16] entering shard mode for connection
m31100| 2014-11-26T14:36:19.733-0500 I QUERY [conn16] command admin.$cmd command: setShardVersion { setShardVersion: "fooSharded.barSharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs0", shardHost: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", version: Timestamp 2000|0, versionEpoch: ObjectId('54762bb19255d3d73a3c7ae4') } ntoreturn:1 keyUpdates:0 reslen:251 0ms
m29000| 2014-11-26T14:36:19.734-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41612 #14 (12 connections now open)
m29000| 2014-11-26T14:36:19.748-0500 I ACCESS [conn14] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:36:19.748-0500 I QUERY [conn16] command admin.$cmd command: setShardVersion { setShardVersion: "fooSharded.barSharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs0", shardHost: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", version: Timestamp 2000|0, versionEpoch: ObjectId('54762bb19255d3d73a3c7ae4'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:146 0ms
m31100| 2014-11-26T14:36:19.749-0500 I QUERY [conn16] query fooSharded.barSharded query: { _id: -1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms
m31200| 2014-11-26T14:36:19.749-0500 I QUERY [conn9] command admin.$cmd command: setShardVersion { setShardVersion: "fooSharded.barSharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs1", shardHost: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", version: Timestamp 2000|1, versionEpoch: ObjectId('54762bb19255d3d73a3c7ae4') } ntoreturn:1 keyUpdates:0 reslen:146 0ms
m31200| 2014-11-26T14:36:19.749-0500 I QUERY [conn9] query fooSharded.barSharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms
m31100| 2014-11-26T14:36:19.750-0500 I QUERY [conn16] command admin.$cmd command: setShardVersion { setShardVersion: "fooUnsharded.barUnsharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs0", shardHost: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000') } ntoreturn:1 keyUpdates:0 reslen:146 0ms
m31100| 2014-11-26T14:36:19.750-0500 I QUERY [conn16] query fooUnsharded.barUnsharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms
m31100| 2014-11-26T14:36:19.751-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: -2.0 } ninserted:1 keyUpdates:0 0ms
m31100| 2014-11-26T14:36:19.751-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: -2.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 2000|0, ObjectId('54762bb19255d3d73a3c7ae4') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
m31200| 2014-11-26T14:36:19.752-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: 2.0 } ninserted:1 keyUpdates:0 0ms
m31200| 2014-11-26T14:36:19.752-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:17673727317 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:116 916ms
m31200| 2014-11-26T14:36:19.752-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: 2.0 } ], ordered: true, metadata: { shardName: "test-rs1", shardVersion: [ Timestamp 2000|1, ObjectId('54762bb19255d3d73a3c7ae4') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
m31200| 2014-11-26T14:36:19.752-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762ba1064c511c969d2b23'), optime: Timestamp 1417030579000|1, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31201", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31100| 2014-11-26T14:36:19.753-0500 I WRITE [conn8] insert fooUnsharded.barUnsharded query: { _id: 2.0 } ninserted:1 keyUpdates:0 0ms
m31100| 2014-11-26T14:36:19.753-0500 I QUERY [conn13] getmore local.oplog.rs cursorid:17667035805 ntoreturn:0 keyUpdates:0 nreturned:2 reslen:216 1065ms
m31100| 2014-11-26T14:36:19.753-0500 I QUERY [conn8] command fooUnsharded.$cmd command: insert { insert: "barUnsharded", documents: [ { _id: 2.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
----
Testing idle connection with third primary down...
----
m31100| 2014-11-26T14:36:19.754-0500 I QUERY [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b9b14cb7e52ef83f3d5'), optime: Timestamp 1417030579000|2, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31100| 2014-11-26T14:36:19.754-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: -3.0 } ninserted:1 keyUpdates:0 0ms
m31100| 2014-11-26T14:36:19.754-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: -3.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 2000|0, ObjectId('54762bb19255d3d73a3c7ae4') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
m31200| 2014-11-26T14:36:19.755-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: 3.0 } ninserted:1 keyUpdates:0 0ms
m31100| 2014-11-26T14:36:19.755-0500 I QUERY [conn13] getmore local.oplog.rs cursorid:17667035805 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:116 0ms
m31200| 2014-11-26T14:36:19.755-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: 3.0 } ], ordered: true, metadata: { shardName: "test-rs1", shardVersion: [ Timestamp 2000|1, ObjectId('54762bb19255d3d73a3c7ae4') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
m31100| 2014-11-26T14:36:19.756-0500 I QUERY [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b9b14cb7e52ef83f3d5'), optime: Timestamp 1417030579000|3, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31100| 2014-11-26T14:36:19.756-0500 I WRITE [conn8] insert fooUnsharded.barUnsharded query: { _id: 3.0 } ninserted:1 keyUpdates:0 0ms
m31100| 2014-11-26T14:36:19.756-0500 I QUERY [conn8] command fooUnsharded.$cmd command: insert { insert: "barUnsharded", documents: [ { _id: 3.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
m31200| 2014-11-26T14:36:19.756-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:17673727317 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:116 2ms
m31200| 2014-11-26T14:36:19.757-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762ba1064c511c969d2b23'), optime: Timestamp 1417030579000|2, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31201", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31100| 2014-11-26T14:36:19.757-0500 I QUERY [conn16] query fooSharded.barSharded query: { _id: -1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms
m31100| 2014-11-26T14:36:19.757-0500 I QUERY [conn13] getmore local.oplog.rs cursorid:17667035805 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:120 0ms
m31200| 2014-11-26T14:36:19.757-0500 I QUERY [conn9] query fooSharded.barSharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms
m31100| 2014-11-26T14:36:19.758-0500 I QUERY [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b9b14cb7e52ef83f3d5'), optime: Timestamp 1417030579000|4, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31100| 2014-11-26T14:36:19.758-0500 I QUERY [conn16] query fooUnsharded.barUnsharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms
----
Testing new connections with third primary down...
----
m30999| 2014-11-26T14:36:19.758-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49134 #4 (4 connections now open)
m30999| 2014-11-26T14:36:19.773-0500 I ACCESS [conn4] Successfully authenticated as principal shardedDBUser on fooSharded
m30999| 2014-11-26T14:36:19.788-0500 I ACCESS [conn4] Successfully authenticated as principal unshardedDBUser on fooUnsharded
m31100| 2014-11-26T14:36:19.788-0500 I QUERY [conn16] query fooSharded.barSharded query: { _id: -1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms
m30999| 2014-11-26T14:36:19.789-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49135 #5 (5 connections now open)
m31201| 2014-11-26T14:36:19.795-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:158 0ms
m30999| 2014-11-26T14:36:19.804-0500 I ACCESS [conn5] Successfully authenticated as principal shardedDBUser on fooSharded
m31200| 2014-11-26T14:36:19.817-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m30999| 2014-11-26T14:36:19.818-0500 I ACCESS [conn5] Successfully authenticated as principal unshardedDBUser on fooUnsharded
m31200| 2014-11-26T14:36:19.819-0500 I QUERY [conn9] query fooSharded.barSharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms
m30999| 2014-11-26T14:36:19.819-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49136 #6 (6 connections now open)
m30999| 2014-11-26T14:36:19.834-0500 I ACCESS [conn6] Successfully authenticated as principal shardedDBUser on fooSharded
m30999| 2014-11-26T14:36:19.848-0500 I ACCESS [conn6] Successfully authenticated as principal unshardedDBUser on fooUnsharded
m31100| 2014-11-26T14:36:19.849-0500 I QUERY [conn16] query fooUnsharded.barUnsharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms
m30999| 2014-11-26T14:36:19.849-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49137 #7 (7 connections now open)
m30999| 2014-11-26T14:36:19.864-0500 I ACCESS [conn7] Successfully authenticated as principal shardedDBUser on fooSharded
m30999| 2014-11-26T14:36:19.878-0500 I ACCESS [conn7] Successfully authenticated as principal unshardedDBUser on fooUnsharded
m31100| 2014-11-26T14:36:19.879-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: -4.0 } ninserted:1 keyUpdates:0 0ms
m31100| 2014-11-26T14:36:19.879-0500 I QUERY [conn13] getmore local.oplog.rs cursorid:17667035805 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:116 119ms
m31100| 2014-11-26T14:36:19.879-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: -4.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 2000|0, ObjectId('54762bb19255d3d73a3c7ae4') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
m31100| 2014-11-26T14:36:19.880-0500 I QUERY [conn14] command admin.$cmd command:
replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b9b14cb7e52ef83f3d5'), optime: Timestamp 1417030579000|5, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m30999| 2014-11-26T14:36:19.880-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49138 #8 (8 connections now open) m30999| 2014-11-26T14:36:19.894-0500 I ACCESS [conn8] Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:36:19.909-0500 I ACCESS [conn8] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31200| 2014-11-26T14:36:19.910-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: 4.0 } ninserted:1 keyUpdates:0 0ms m31200| 2014-11-26T14:36:19.910-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:17673727317 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:116 151ms m31200| 2014-11-26T14:36:19.910-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: 4.0 } ], ordered: true, metadata: { shardName: "test-rs1", shardVersion: [ Timestamp 2000|1, ObjectId('54762bb19255d3d73a3c7ae4') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m31200| 2014-11-26T14:36:19.910-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762ba1064c511c969d2b23'), optime: Timestamp 1417030579000|3, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31201", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m30999| 2014-11-26T14:36:19.910-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49139 #9 (9 connections now open) m30999| 2014-11-26T14:36:19.925-0500 I ACCESS [conn9] 
Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:36:19.940-0500 I ACCESS [conn9] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31100| 2014-11-26T14:36:19.940-0500 I WRITE [conn8] insert fooUnsharded.barUnsharded query: { _id: 4.0 } ninserted:1 keyUpdates:0 0ms m31100| 2014-11-26T14:36:19.940-0500 I QUERY [conn13] getmore local.oplog.rs cursorid:17667035805 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:120 59ms m31100| 2014-11-26T14:36:19.940-0500 I QUERY [conn8] command fooUnsharded.$cmd command: insert { insert: "barUnsharded", documents: [ { _id: 4.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m31100| 2014-11-26T14:36:19.941-0500 I QUERY [conn14] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b9b14cb7e52ef83f3d5'), optime: Timestamp 1417030579000|6, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m30999| 2014-11-26T14:36:19.945-0500 W - [conn4] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:49134 m30999| 2014-11-26T14:36:19.945-0500 W - [conn5] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:49135 m30999| 2014-11-26T14:36:19.945-0500 W - [conn6] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:49136 m30999| 2014-11-26T14:36:19.945-0500 W - [conn7] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:49137 m30999| 2014-11-26T14:36:19.945-0500 W - [conn8] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:49138 m30999| 2014-11-26T14:36:19.950-0500 I - [conn4] 
m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 0xbc29c9 0xbc2a15 0xbb7a39 0xbbabaf 0x7f8565896c6b 0x7f856492c5ed m30999| ----- BEGIN BACKTRACE ----- m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"7BABAF"},{"b":"7F856588F000","o":"7C6B"},{"b":"7F856484A000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFFAAEF7000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F856588F000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F8565629000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F8565265000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F856505D000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F8564E59000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F8564BD6000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F856484A000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F8565AAB000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F8564607000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : 
"9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" : "7F8564321000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F856411E000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F8563EF3000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F8563CDC000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F8563AD1000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F85638CD000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F85636B2000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F8563491000", "path" : "/usr/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }} m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9] m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc] m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77] m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba] m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9] m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15] m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39] m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x3EF) [0xbbabaf] m30999| libpthread.so.0(+0x7C6B) [0x7f8565896c6b] m30999| libc.so.6(clone+0x6D) [0x7f856492c5ed] m30999| ----- END BACKTRACE ----- m30999| 2014-11-26T14:36:19.950-0500 I NETWORK [conn4] end connection 10.33.141.202:49134 (8 connections now open) m30999| 2014-11-26T14:36:19.953-0500 I - [conn6] m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 
0xbc29c9 0xbc2a15 0xbb7a39 0xbbabaf 0x7f8565896c6b 0x7f856492c5ed m30999| ----- BEGIN BACKTRACE ----- m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"7BABAF"},{"b":"7F856588F000","o":"7C6B"},{"b":"7F856484A000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFFAAEF7000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F856588F000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F8565629000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F8565265000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F856505D000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F8564E59000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F8564BD6000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F856484A000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F8565AAB000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F8564607000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" 
: "7F8564321000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F856411E000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F8563EF3000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F8563CDC000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F8563AD1000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F85638CD000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F85636B2000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F8563491000", "path" : "/usr/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }} m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9] m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc] m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77] m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba] m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9] m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15] m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39] m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x3EF) [0xbbabaf] m30999| libpthread.so.0(+0x7C6B) [0x7f8565896c6b] m30999| libc.so.6(clone+0x6D) [0x7f856492c5ed] m30999| ----- END BACKTRACE ----- m30999| 2014-11-26T14:36:19.953-0500 I NETWORK [conn6] end connection 10.33.141.202:49136 (7 connections now open) m30999| 2014-11-26T14:36:19.957-0500 I - [conn7] m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 0xbc29c9 0xbc2a15 0xbb7a39 0xbbabaf 0x7f8565896c6b 
0x7f856492c5ed m30999| ----- BEGIN BACKTRACE ----- m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"7BABAF"},{"b":"7F856588F000","o":"7C6B"},{"b":"7F856484A000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFFAAEF7000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F856588F000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F8565629000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F8565265000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F856505D000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F8564E59000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F8564BD6000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F856484A000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F8565AAB000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F8564607000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" : "7F8564321000", "path" : "/lib64/libkrb5.so.3", 
"elfType" : 3, "buildId" : "6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F856411E000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F8563EF3000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F8563CDC000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F8563AD1000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F85638CD000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F85636B2000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F8563491000", "path" : "/usr/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }} m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9] m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc] m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77] m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba] m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9] m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15] m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39] m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x3EF) [0xbbabaf] m30999| libpthread.so.0(+0x7C6B) [0x7f8565896c6b] m30999| libc.so.6(clone+0x6D) [0x7f856492c5ed] m30999| ----- END BACKTRACE ----- m30999| 2014-11-26T14:36:19.957-0500 I NETWORK [conn7] end connection 10.33.141.202:49137 (6 connections now open) m30999| 2014-11-26T14:36:19.961-0500 I - [conn5] m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 0xbc29c9 0xbc2a15 0xbb7a39 0xbbabaf 0x7f8565896c6b 0x7f856492c5ed m30999| ----- BEGIN BACKTRACE ----- 
m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"7BABAF"},{"b":"7F856588F000","o":"7C6B"},{"b":"7F856484A000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFFAAEF7000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F856588F000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F8565629000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F8565265000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F856505D000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F8564E59000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F8564BD6000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F856484A000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F8565AAB000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F8564607000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" : "7F8564321000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : 
"6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F856411E000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F8563EF3000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F8563CDC000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F8563AD1000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F85638CD000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F85636B2000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F8563491000", "path" : "/usr/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }} m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9] m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc] m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77] m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba] m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9] m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15] m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39] m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x3EF) [0xbbabaf] m30999| libpthread.so.0(+0x7C6B) [0x7f8565896c6b] m30999| libc.so.6(clone+0x6D) [0x7f856492c5ed] m30999| ----- END BACKTRACE ----- m30999| 2014-11-26T14:36:19.961-0500 I NETWORK [conn5] end connection 10.33.141.202:49135 (5 connections now open) ---- Stopping primary of second shard... 
---- m30999| 2014-11-26T14:36:19.964-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49140 #10 (6 connections now open) m30999| 2014-11-26T14:36:19.965-0500 I - [conn8] m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 0xbc29c9 0xbc2a15 0xbb7a39 0xbbabaf 0x7f8565896c6b 0x7f856492c5ed m30999| ----- BEGIN BACKTRACE ----- m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"7BABAF"},{"b":"7F856588F000","o":"7C6B"},{"b":"7F856484A000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFFAAEF7000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F856588F000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F8565629000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F8565265000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F856505D000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F8564E59000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F8564BD6000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F856484A000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F8565AAB000", "path" : 
"/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F8564607000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" : "7F8564321000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F856411E000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F8563EF3000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F8563CDC000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F8563AD1000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F85638CD000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F85636B2000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F8563491000", "path" : "/usr/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }} m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9] m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc] m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77] m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba] m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9] m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15] m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39] m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x3EF) [0xbbabaf] m30999| libpthread.so.0(+0x7C6B) [0x7f8565896c6b] m30999| libc.so.6(clone+0x6D) [0x7f856492c5ed] m30999| ----- END BACKTRACE ----- m30999| 
2014-11-26T14:36:19.965-0500 I NETWORK [conn8] end connection 10.33.141.202:49138 (5 connections now open) m30999| 2014-11-26T14:36:19.979-0500 I ACCESS [conn10] Successfully authenticated as principal shardedDBUser on fooSharded m31101| 2014-11-26T14:36:19.985-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:158 0ms m30999| 2014-11-26T14:36:19.994-0500 I ACCESS [conn10] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31200| 2014-11-26T14:36:19.994-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:19.995-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:36:19.995-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:36:19.996-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number ReplSetTest stop *** Shutting down mongod in port 31200 *** m31200| 2014-11-26T14:36:19.997-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends m31200| 2014-11-26T14:36:19.997-0500 I REPL [signalProcessingThread] Stopping replication applier threads m31100| 2014-11-26T14:36:20.023-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31200| 2014-11-26T14:36:20.913-0500 I COMMAND [signalProcessingThread] now exiting m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... 
m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [signalProcessingThread] closing listening socket: 13 m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [signalProcessingThread] closing listening socket: 14 m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [signalProcessingThread] closing listening socket: 20 m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31200.sock m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets... m31200| 2014-11-26T14:36:20.913-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: fooSharded.barSharded m31200| 2014-11-26T14:36:20.913-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me m31200| 2014-11-26T14:36:20.913-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs m31200| 2014-11-26T14:36:20.913-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log m31200| 2014-11-26T14:36:20.913-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset m31200| 2014-11-26T14:36:20.913-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog m31200| 2014-11-26T14:36:20.913-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [conn1] end connection 127.0.0.1:50778 (10 connections now open) m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [conn2] end connection 10.33.141.202:40631 (10 connections now open) m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [conn5] end connection 10.33.141.202:40644 (10 connections now open) m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [conn6] end connection 10.33.141.202:40661 (10 connections now open) m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [conn8] end connection 10.33.141.202:40671 (10 
connections now open) m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [conn9] end connection 10.33.141.202:40672 (10 connections now open) m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [conn10] end connection 10.33.141.202:40681 (10 connections now open) m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [conn11] end connection 10.33.141.202:40682 (10 connections now open) m31200| 2014-11-26T14:36:20.913-0500 I NETWORK [conn7] end connection 10.33.141.202:40662 (10 connections now open) m31201| 2014-11-26T14:36:20.913-0500 I NETWORK [conn3] end connection 10.33.141.202:53846 (1 connection now open) m29000| 2014-11-26T14:36:20.914-0500 I NETWORK [conn10] end connection 10.33.141.202:41595 (11 connections now open) m31101| 2014-11-26T14:36:20.914-0500 I NETWORK [conn5] end connection 10.33.141.202:54147 (2 connections now open) m29000| 2014-11-26T14:36:20.914-0500 I NETWORK [conn8] end connection 10.33.141.202:41593 (10 connections now open) m31100| 2014-11-26T14:36:20.914-0500 I NETWORK [conn10] end connection 10.33.141.202:38212 (11 connections now open) m31200| 2014-11-26T14:36:20.914-0500 I NETWORK [conn13] end connection 10.33.141.202:40689 (10 connections now open) m29000| 2014-11-26T14:36:20.914-0500 I NETWORK [conn9] end connection 10.33.141.202:41594 (9 connections now open) m29000| 2014-11-26T14:36:20.914-0500 I NETWORK [conn7] end connection 10.33.141.202:41592 (9 connections now open) m31100| 2014-11-26T14:36:20.914-0500 I NETWORK [conn12] end connection 10.33.141.202:38220 (10 connections now open) m31201| 2014-11-26T14:36:20.914-0500 D NETWORK [rsBackgroundSync] SocketException: remote: 10.33.141.202:31200 error: 9001 socket exception [CLOSED] server [10.33.141.202:31200] m31201| 2014-11-26T14:36:20.914-0500 D - [rsBackgroundSync] User Assertion: 10278:dbclient error communicating with server: ip-10-33-141-202:31200 m31201| 2014-11-26T14:36:20.914-0500 E REPL [rsBackgroundSync] sync producer problem: 10278 dbclient error communicating with server: 
ip-10-33-141-202:31200
m31100| 2014-11-26T14:36:20.914-0500 I NETWORK [conn15] end connection 10.33.141.202:38226 (9 connections now open)
m29000| 2014-11-26T14:36:20.914-0500 I NETWORK [conn13] end connection 10.33.141.202:41604 (7 connections now open)
m31201| 2014-11-26T14:36:20.915-0500 I REPL [ReplicationExecutor] could not find member to sync from
m31200| 2014-11-26T14:36:20.955-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0
2014-11-26T14:36:20.997-0500 I - shell: stopped mongo program on port 31200
ReplSetTest stop *** Mongod in port 31200 shutdown with code (0) ***
----
Testing active connection with second primary down...
----
m31100| 2014-11-26T14:36:20.998-0500 I QUERY [conn16] query fooSharded.barSharded query: { _id: -1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms
m30999| 2014-11-26T14:36:20.999-0500 W - [conn2] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:31200
m30999| 2014-11-26T14:36:21.007-0500 I - [conn2]
m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 0xbc29c9 0xbc2a15 0xbb7a39 0x7b1c15 0x7c6152 0x7d0cfd 0x7de6c7 0xb08613 0xa67093 0xa66489 0xaf5edd 0xa81c8f 0xb07c2d 0xaf5461 0x7695a8 0xbbabd1 0x7f8565896c6b 0x7f856492c5ed
m30999| ----- BEGIN BACKTRACE -----
m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"3B1C15"},{"b":"400000","o":"3C6152"},{"b":"400000","o":"3D0CFD"},{"b":"400000","o":"3DE6C7"},{"b":"400000","o":"708613"},{"b":"400000","o":"667093"},{"b":"400000","o":"666489"},{"b":"400000","o":"6F5EDD"},{"b":"400000","o":"681C8F"},{"b":"400000","o":"707C2D"},{"b":"400000","o":"6F5461"},{"b":"400000","o":"3695A8"},{"b":"400000","o":"7BABD1"},{"b":"7F856588F000","o":"7C6B"},{"b":"7F856484A000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : 
"2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFFAAEF7000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F856588F000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F8565629000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F8565265000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F856505D000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F8564E59000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F8564BD6000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F856484A000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F8565AAB000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F8564607000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" : "7F8564321000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F856411E000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F8563EF3000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F8563CDC000", "path" : "/lib64/libz.so.1", 
"elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F8563AD1000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F85638CD000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F85636B2000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F8563491000", "path" : "/usr/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }} m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9] m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc] m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77] m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba] m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9] m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15] m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39] m30999| mongos(_ZN5mongo18DBClientConnection4recvERNS_7MessageE+0x15) [0x7b1c15] m30999| mongos(_ZN5mongo18DBClientReplicaSet4recvERNS_7MessageE+0x22) [0x7c6152] m30999| mongos(_ZN5mongo14DBClientCursor14initLazyFinishERb+0x2D) [0x7d0cfd] m30999| mongos(_ZN5mongo27ParallelSortClusteredCursor10finishInitEv+0x277) [0x7de6c7] m30999| mongos(_ZN5mongo8Strategy9commandOpERKSsRKNS_7BSONObjEiS2_S5_PSt6vectorINS0_13CommandResultESaIS7_EE+0x113) [0xb08613] m30999| mongos(_ZNK5mongo14ClusterFindCmd7explainEPNS_16OperationContextERKSsRKNS_7BSONObjENS_13ExplainCommon9VerbosityEPNS_14BSONObjBuilderE+0x253) [0xa67093] m30999| mongos(_ZN5mongo17ClusterExplainCmd3runEPNS_16OperationContextERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x169) [0xa66489] m30999| mongos(_ZN5mongo7Command22execCommandClientBasicEPNS_16OperationContextEPS0_RNS_11ClientBasicEiPKcRNS_7BSONObjERNS_14BSONObjBuilderEb+0x3FD) [0xaf5edd] m30999| 
mongos(_ZN5mongo7Command20runAgainstRegisteredEPKcRNS_7BSONObjERNS_14BSONObjBuilderEi+0x22F) [0xa81c8f] m30999| mongos(_ZN5mongo8Strategy15clientCommandOpERNS_7RequestE+0x1BD) [0xb07c2d] m30999| mongos(_ZN5mongo7Request7processEi+0x591) [0xaf5461] m30999| mongos(_ZN5mongo21ShardedMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x58) [0x7695a8] m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x411) [0xbbabd1] m30999| libpthread.so.0(+0x7C6B) [0x7f8565896c6b] m30999| libc.so.6(clone+0x6D) [0x7f856492c5ed] m30999| ----- END BACKTRACE ----- m30999| 2014-11-26T14:36:21.007-0500 I NETWORK [conn2] DBClientCursor::init lazy say() failed m30999| 2014-11-26T14:36:21.007-0500 I NETWORK [conn2] DBClientCursor::init message from say() was empty m30999| 2014-11-26T14:36:21.007-0500 I NETWORK [conn2] slave no longer has secondary status: ip-10-33-141-202:31200 m31201| 2014-11-26T14:36:21.008-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:53917 #4 (2 connections now open) m31201| 2014-11-26T14:36:21.010-0500 I QUERY [conn4] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4A7769616C7446477345484E662F6D7A51476A6F547042664773727573796650) } ntoreturn:1 keyUpdates:0 reslen:179 1ms m31201| 2014-11-26T14:36:21.023-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4A7769616C7446477345484E662F6D7A51476A6F5470426647737275737966504D677570615444694E6764495565665262303471375348765850346847...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31201| 2014-11-26T14:36:21.023-0500 I ACCESS [conn4] Successfully authenticated as principal __system on local m31201| 2014-11-26T14:36:21.023-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31201| 
2014-11-26T14:36:21.023-0500 I QUERY [conn4] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:21.024-0500 I QUERY [conn4] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:21.024-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:53918 #5 (3 connections now open) m31201| 2014-11-26T14:36:21.026-0500 I QUERY [conn5] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D517979727A313434794F5A6474613643436D55666A527947555448685A76514A) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31201| 2014-11-26T14:36:21.038-0500 I QUERY [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D517979727A313434794F5A6474613643436D55666A527947555448685A76514A69506870414643554C555858715856384B6F35376D4B66586C66786A79...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31201| 2014-11-26T14:36:21.039-0500 I ACCESS [conn5] Successfully authenticated as principal __system on local m31201| 2014-11-26T14:36:21.039-0500 I QUERY [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31201| 2014-11-26T14:36:21.039-0500 I QUERY [conn5] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:36:21.040-0500 I QUERY [conn5] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D70463834736D3779747141734A38387167376341494A6667582B646479373733) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31201| 2014-11-26T14:36:21.053-0500 I QUERY [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 
633D626977732C723D70463834736D3779747141734A38387167376341494A6667582B6464793737333270482F4B386B315163614662615A566D723143704E2F677256473446...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31201| 2014-11-26T14:36:21.053-0500 I ACCESS [conn5] Successfully authenticated as principal __system on local m31201| 2014-11-26T14:36:21.053-0500 I QUERY [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31201| 2014-11-26T14:36:21.054-0500 I QUERY [conn5] command fooSharded.$cmd command: explain { explain: { find: "barSharded", filter: { _id: 1.0 }, options: { slaveOk: true } }, verbosity: "allPlansExecution" } ntoreturn:1 keyUpdates:0 reslen:701 0ms REN: exp: { "queryPlanner" : { "mongosPlannerVersion" : 1, "winningPlan" : { "stage" : "SINGLE_SHARD", "shards" : [ { "shardName" : "test-rs1", "connectionString" : "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", "serverInfo" : { "host" : "ip-10-33-141-202", "port" : 31201, "version" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2" }, "plannerVersion" : 1, "parsedQuery" : { "_id" : { "$eq" : 1 } }, "winningPlan" : { "stage" : "IDHACK" }, "rejectedPlans" : [ ] } ] } }, "executionStats" : { "nReturned" : 1, "executionTimeMillis" : 54, "totalKeysExamined" : 1, "totalDocsExamined" : 1, "executionStages" : { "stage" : "SINGLE_SHARD", "nReturned" : 1, "executionTimeMillis" : 54, "totalKeysExamined" : 1, "totalDocsExamined" : 1, "totalChildMillis" : NumberLong(0), "shards" : [ { "shardName" : "test-rs1", "executionSuccess" : true, "executionStages" : { "stage" : "IDHACK", "nReturned" : 1, "executionTimeMillisEstimate" : 0, "works" : 2, "advanced" : 1, "needTime" : 0, "needFetch" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 1, "invalidates" : 0, "keysExamined" : 1, "docsExamined" : 1 } } ] }, "allPlansExecution" : [ { "shardName" : "test-rs1", "allPlans" : [ ] } ] }, "ok" : 1 } 
m31201| 2014-11-26T14:36:21.057-0500 I QUERY [conn5] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D3839625767522B5659474D737A38395A754E4875302F30424F6F762F56395669) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31201| 2014-11-26T14:36:21.070-0500 I QUERY [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D3839625767522B5659474D737A38395A754E4875302F30424F6F762F5639566938555479786C4C4C7141586B374E4F635848316D61306E6E5874774A77...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31201| 2014-11-26T14:36:21.070-0500 I ACCESS [conn5] Successfully authenticated as principal __system on local m31201| 2014-11-26T14:36:21.070-0500 I QUERY [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31201| 2014-11-26T14:36:21.071-0500 I QUERY [conn5] query fooSharded.barSharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms m31100| 2014-11-26T14:36:21.071-0500 I QUERY [conn16] query fooUnsharded.barUnsharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms m30999| 2014-11-26T14:36:21.071-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends m30999| 2014-11-26T14:36:21.071-0500 I SHARDING [signalProcessingThread] dbexit: rc:0 m31100| 2014-11-26T14:36:21.072-0500 I NETWORK [conn16] end connection 10.33.141.202:38235 (8 connections now open) m31100| 2014-11-26T14:36:21.072-0500 I NETWORK [conn6] end connection 10.33.141.202:38202 (8 connections now open) m31100| 2014-11-26T14:36:21.073-0500 I NETWORK [conn7] end connection 10.33.141.202:38203 (6 connections now open) m29000| 2014-11-26T14:36:21.072-0500 I NETWORK [conn6] end connection 
10.33.141.202:41572 (6 connections now open) m29000| 2014-11-26T14:36:21.073-0500 I NETWORK [conn5] end connection 10.33.141.202:41571 (5 connections now open) m31201| 2014-11-26T14:36:21.073-0500 I NETWORK [conn5] end connection 10.33.141.202:53918 (2 connections now open) m31100| 2014-11-26T14:36:21.073-0500 I NETWORK [conn8] end connection 10.33.141.202:38208 (5 connections now open) m31201| 2014-11-26T14:36:21.073-0500 I NETWORK [conn4] end connection 10.33.141.202:53917 (1 connection now open) m29000| 2014-11-26T14:36:21.073-0500 I NETWORK [conn14] end connection 10.33.141.202:41612 (4 connections now open) m31301| 2014-11-26T14:36:21.532-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:36:21.532-0500 W NETWORK [ReplExecNetThread-7] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:36:21.532-0500 D - [ReplExecNetThread-7] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:36:21.532-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:36:21.532-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 1; 1ms have already elapsed m31301| 2014-11-26T14:36:21.533-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:36:21.533-0500 W NETWORK [ReplExecNetThread-1] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:36:21.533-0500 D - [ReplExecNetThread-1] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), 
connection attempt failed m31301| 2014-11-26T14:36:21.533-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:36:21.533-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 0; 2ms have already elapsed m31301| 2014-11-26T14:36:21.533-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:36:21.533-0500 W NETWORK [ReplExecNetThread-2] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:36:21.533-0500 D - [ReplExecNetThread-2] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:36:21.534-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:36:21.817-0500 D NETWORK [ReplExecNetThread-2] SocketException: remote: 10.33.141.202:31200 error: 9001 socket exception [CLOSED] server [10.33.141.202:31200] m31201| 2014-11-26T14:36:21.817-0500 I NETWORK [ReplExecNetThread-2] DBClientCursor::init call() failed m31201| 2014-11-26T14:36:21.817-0500 D - [ReplExecNetThread-2] User Assertion: 10276:DBClientBase::findN: transport error: ip-10-33-141-202:31200 ns: admin.$cmd query: { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } m31201| 2014-11-26T14:36:21.817-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location10276 DBClientBase::findN: transport error: ip-10-33-141-202:31200 ns: 
admin.$cmd query: { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } m31201| 2014-11-26T14:36:21.817-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31200; trying again; Retries left: 1; 0ms have already elapsed m31201| 2014-11-26T14:36:21.818-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31201| 2014-11-26T14:36:21.818-0500 W NETWORK [ReplExecNetThread-3] Failed to connect to 10.33.141.202:31200, reason: errno:111 Connection refused m31201| 2014-11-26T14:36:21.818-0500 D - [ReplExecNetThread-3] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:36:21.818-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location18915 Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:36:21.818-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31200; trying again; Retries left: 0; 1ms have already elapsed m31201| 2014-11-26T14:36:21.819-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31201| 2014-11-26T14:36:21.819-0500 W NETWORK [ReplExecNetThread-4] Failed to connect to 10.33.141.202:31200, reason: errno:111 Connection refused m31201| 2014-11-26T14:36:21.819-0500 D - [ReplExecNetThread-4] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:36:21.819-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location18915 Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31101| 
2014-11-26T14:36:21.987-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:158 0ms
m31100| 2014-11-26T14:36:22.023-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
2014-11-26T14:36:22.071-0500 I - shell: stopped mongo program on port 30999
2014-11-26T14:36:22.072-0500 I - No db started on port: 30000
2014-11-26T14:36:22.072-0500 I - shell: stopped mongo program on port 30000
2014-11-26T14:36:22.072-0500 I - No db started on port: 30001
2014-11-26T14:36:22.072-0500 I - shell: stopped mongo program on port 30001
2014-11-26T14:36:22.072-0500 I - No db started on port: 30002
2014-11-26T14:36:22.072-0500 I - shell: stopped mongo program on port 30002
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
m31100| 2014-11-26T14:36:22.072-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
m31100| 2014-11-26T14:36:22.072-0500 I REPL [signalProcessingThread] Stopping replication applier threads
m31100| 2014-11-26T14:36:22.097-0500 I COMMAND [signalProcessingThread] now exiting
m31100| 2014-11-26T14:36:22.098-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... 
m31100| 2014-11-26T14:36:22.098-0500 I NETWORK [signalProcessingThread] closing listening socket: 7 m31100| 2014-11-26T14:36:22.098-0500 I NETWORK [signalProcessingThread] closing listening socket: 8 m31100| 2014-11-26T14:36:22.098-0500 I NETWORK [signalProcessingThread] closing listening socket: 14 m31100| 2014-11-26T14:36:22.098-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31100.sock m31100| 2014-11-26T14:36:22.098-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... m31100| 2014-11-26T14:36:22.098-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets... m31100| 2014-11-26T14:36:22.098-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: fooSharded.barSharded m31100| 2014-11-26T14:36:22.098-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: fooUnsharded.barUnsharded m31100| 2014-11-26T14:36:22.098-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me m31100| 2014-11-26T14:36:22.098-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs m31100| 2014-11-26T14:36:22.098-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log m31100| 2014-11-26T14:36:22.098-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset m31100| 2014-11-26T14:36:22.098-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog m31100| 2014-11-26T14:36:22.098-0500 I NETWORK [conn1] end connection 127.0.0.1:47487 (4 connections now open) m31100| 2014-11-26T14:36:22.098-0500 I NETWORK [conn5] end connection 10.33.141.202:38186 (4 connections now open) m31100| 2014-11-26T14:36:22.098-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down m31100| 2014-11-26T14:36:22.098-0500 I NETWORK [conn14] end connection 10.33.141.202:38223 (4 connections now open) m31100| 2014-11-26T14:36:22.098-0500 I NETWORK [conn2] end connection 10.33.141.202:38165 (4 
connections now open)
m31101| 2014-11-26T14:36:22.098-0500 I NETWORK [conn3] end connection 10.33.141.202:54100 (1 connection now open)
m31101| 2014-11-26T14:36:22.098-0500 D NETWORK [rsBackgroundSync] SocketException: remote: 10.33.141.202:31100 error: 9001 socket exception [CLOSED] server [10.33.141.202:31100]
m31101| 2014-11-26T14:36:22.098-0500 D - [rsBackgroundSync] User Assertion: 10278:dbclient error communicating with server: ip-10-33-141-202:31100
m31101| 2014-11-26T14:36:22.098-0500 E REPL [rsBackgroundSync] sync producer problem: 10278 dbclient error communicating with server: ip-10-33-141-202:31100
m31101| 2014-11-26T14:36:22.098-0500 I REPL [ReplicationExecutor] could not find member to sync from
m29000| 2014-11-26T14:36:22.098-0500 I NETWORK [conn11] end connection 10.33.141.202:41598 (3 connections now open)
m29000| 2014-11-26T14:36:22.098-0500 I NETWORK [conn12] end connection 10.33.141.202:41603 (2 connections now open)
m31100| 2014-11-26T14:36:22.152-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0
2014-11-26T14:36:23.072-0500 I - shell: stopped mongo program on port 31100
ReplSetTest stop *** Mongod in port 31100 shutdown with code (0) ***
ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number
ReplSetTest stop *** Shutting down mongod in port 31101 ***
m31101| 2014-11-26T14:36:23.072-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
m31101| 2014-11-26T14:36:23.073-0500 I REPL [signalProcessingThread] Stopping replication applier threads
m31301| 2014-11-26T14:36:23.533-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31301| 2014-11-26T14:36:23.534-0500 W NETWORK [ReplExecNetThread-3] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused
m31301| 2014-11-26T14:36:23.534-0500 D - [ReplExecNetThread-3] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection 
attempt failed m31301| 2014-11-26T14:36:23.534-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:36:23.534-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 1; 1ms have already elapsed m31301| 2014-11-26T14:36:23.534-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:36:23.534-0500 W NETWORK [ReplExecNetThread-5] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:36:23.534-0500 D - [ReplExecNetThread-5] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:36:23.534-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:36:23.534-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 0; 1ms have already elapsed m31301| 2014-11-26T14:36:23.535-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:36:23.535-0500 W NETWORK [ReplExecNetThread-6] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:36:23.535-0500 D - [ReplExecNetThread-6] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:36:23.535-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; 
Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:36:23.820-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31201| 2014-11-26T14:36:23.820-0500 W NETWORK [ReplExecNetThread-5] Failed to connect to 10.33.141.202:31200, reason: errno:111 Connection refused m31201| 2014-11-26T14:36:23.820-0500 D - [ReplExecNetThread-5] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:36:23.820-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location18915 Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:36:23.820-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31200; trying again; Retries left: 1; 1ms have already elapsed m31201| 2014-11-26T14:36:23.820-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31201| 2014-11-26T14:36:23.821-0500 W NETWORK [ReplExecNetThread-6] Failed to connect to 10.33.141.202:31200, reason: errno:111 Connection refused m31201| 2014-11-26T14:36:23.821-0500 D - [ReplExecNetThread-6] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:36:23.821-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location18915 Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:36:23.821-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31200; trying again; Retries left: 0; 2ms have 
already elapsed m31201| 2014-11-26T14:36:23.821-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31201| 2014-11-26T14:36:23.821-0500 W NETWORK [ReplExecNetThread-0] Failed to connect to 10.33.141.202:31200, reason: errno:111 Connection refused m31201| 2014-11-26T14:36:23.821-0500 D - [ReplExecNetThread-0] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:36:23.821-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location18915 Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31101| 2014-11-26T14:36:23.944-0500 I COMMAND [signalProcessingThread] now exiting m31101| 2014-11-26T14:36:23.944-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... m31101| 2014-11-26T14:36:23.944-0500 I NETWORK [signalProcessingThread] closing listening socket: 10 m31101| 2014-11-26T14:36:23.944-0500 I NETWORK [signalProcessingThread] closing listening socket: 11 m31101| 2014-11-26T14:36:23.944-0500 I NETWORK [signalProcessingThread] closing listening socket: 17 m31101| 2014-11-26T14:36:23.944-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31101.sock m31101| 2014-11-26T14:36:23.944-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... m31101| 2014-11-26T14:36:23.944-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets... 
m31101| 2014-11-26T14:36:23.944-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: fooSharded.barSharded
m31101| 2014-11-26T14:36:23.944-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: fooUnsharded.barUnsharded
m31101| 2014-11-26T14:36:23.944-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me
m31101| 2014-11-26T14:36:23.944-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs
m31101| 2014-11-26T14:36:23.944-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.replset.minvalid
m31101| 2014-11-26T14:36:23.944-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log
m31101| 2014-11-26T14:36:23.944-0500 I NETWORK [conn1] end connection 127.0.0.1:36454 (0 connections now open)
m31101| 2014-11-26T14:36:23.944-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset
m31101| 2014-11-26T14:36:23.944-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog
m31101| 2014-11-26T14:36:23.944-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
m31101| 2014-11-26T14:36:24.004-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0
2014-11-26T14:36:24.073-0500 I - shell: stopped mongo program on port 31101
ReplSetTest stop *** Mongod in port 31101 shutdown with code (0) ***
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number
ReplSetTest stop *** Shutting down mongod in port 31200 ***
2014-11-26T14:36:24.074-0500 I - No db started on port: 31200
2014-11-26T14:36:24.074-0500 I - shell: stopped mongo program on port 31200
ReplSetTest stop *** Mongod in port 31200 shutdown with code (0) ***
ReplSetTest n: 1 ports: [ 31200, 31201 ] 31201 number
ReplSetTest stop *** Shutting down mongod in port 31201 ***
m31201| 2014-11-26T14:36:24.074-0500 I CONTROL 
[signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends m31201| 2014-11-26T14:36:24.075-0500 I REPL [signalProcessingThread] Stopping replication applier threads m31301| 2014-11-26T14:36:24.807-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41168 #4 (2 connections now open) m31301| 2014-11-26T14:36:24.807-0500 I QUERY [conn4] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:341 0ms 2014-11-26T14:36:24.808-0500 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 10.33.141.202:31101, reason: errno:111 Connection refused m31201| 2014-11-26T14:36:24.915-0500 I COMMAND [signalProcessingThread] now exiting m31201| 2014-11-26T14:36:24.915-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... m31201| 2014-11-26T14:36:24.915-0500 I NETWORK [signalProcessingThread] closing listening socket: 16 m31201| 2014-11-26T14:36:24.915-0500 I NETWORK [signalProcessingThread] closing listening socket: 17 m31201| 2014-11-26T14:36:24.915-0500 I NETWORK [signalProcessingThread] closing listening socket: 23 2014-11-26T14:36:24.916-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() errno:104 Connection reset by peer 10.33.141.202:31201 2014-11-26T14:36:24.916-0500 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.33.141.202:31201 error: 9001 socket exception [RECV_ERROR] server [10.33.141.202:31201] 2014-11-26T14:36:24.916-0500 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed 2014-11-26T14:36:24.916-0500 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1417030584808578 microSec, clearing pool for ip-10-33-141-202:31201 of 0 connections m31201| 2014-11-26T14:36:24.916-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31201.sock m31201| 2014-11-26T14:36:24.916-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... 
m31201| 2014-11-26T14:36:24.916-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
m31201| 2014-11-26T14:36:24.916-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: fooSharded.barSharded
m31201| 2014-11-26T14:36:24.916-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me
m31201| 2014-11-26T14:36:24.916-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs
m31201| 2014-11-26T14:36:24.916-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.replset.minvalid
m31201| 2014-11-26T14:36:24.916-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log
m31201| 2014-11-26T14:36:24.916-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset
m31201| 2014-11-26T14:36:24.916-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog
m31201| 2014-11-26T14:36:24.916-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
m31201| 2014-11-26T14:36:24.916-0500 I NETWORK [conn1] end connection 127.0.0.1:44115 (0 connections now open)
m31201| 2014-11-26T14:36:24.969-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0
2014-11-26T14:36:25.074-0500 I - shell: stopped mongo program on port 31201
ReplSetTest stop *** Mongod in port 31201 shutdown with code (0) ***
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
ReplSetTest n: 0 ports: [ 31300, 31301 ] 31300 number
ReplSetTest stop *** Shutting down mongod in port 31300 ***
2014-11-26T14:36:25.076-0500 I - No db started on port: 31300
2014-11-26T14:36:25.076-0500 I - shell: stopped mongo program on port 31300
ReplSetTest stop *** Mongod in port 31300 shutdown with code (0) ***
ReplSetTest n: 1 ports: [ 31300, 31301 ] 31301 number
ReplSetTest stop *** Shutting down mongod in port 31301 ***
m31301| 2014-11-26T14:36:25.076-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
m31301| 2014-11-26T14:36:25.077-0500 I REPL [signalProcessingThread] Stopping replication applier threads
m31301| 2014-11-26T14:36:25.601-0500 I COMMAND [signalProcessingThread] now exiting
m31301| 2014-11-26T14:36:25.601-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
m31301| 2014-11-26T14:36:25.601-0500 I NETWORK [signalProcessingThread] closing listening socket: 22
m31301| 2014-11-26T14:36:25.601-0500 I NETWORK [signalProcessingThread] closing listening socket: 23
m31301| 2014-11-26T14:36:25.601-0500 I NETWORK [signalProcessingThread] closing listening socket: 29
m31301| 2014-11-26T14:36:25.601-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31301.sock
m31301| 2014-11-26T14:36:25.601-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
m31301| 2014-11-26T14:36:25.601-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
m31301| 2014-11-26T14:36:25.601-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me
m31301| 2014-11-26T14:36:25.601-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs
m31301| 2014-11-26T14:36:25.601-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.replset.minvalid
m31301| 2014-11-26T14:36:25.601-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log
m31301| 2014-11-26T14:36:25.601-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset
m31301| 2014-11-26T14:36:25.601-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog
m31301| 2014-11-26T14:36:25.601-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
m31301| 2014-11-26T14:36:25.601-0500 I NETWORK [conn1] end connection 127.0.0.1:49994 (1 connection now open)
m31301| 2014-11-26T14:36:25.602-0500 I NETWORK [conn4] end connection 10.33.141.202:41168 (0 connections now open)
m31301| 2014-11-26T14:36:25.646-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0
2014-11-26T14:36:26.076-0500 I - shell: stopped mongo program on port 31301
ReplSetTest stop *** Mongod in port 31301 shutdown with code (0) ***
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
m29000| 2014-11-26T14:36:26.078-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
m29000| 2014-11-26T14:36:26.078-0500 I COMMAND [signalProcessingThread] now exiting
m29000| 2014-11-26T14:36:26.078-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
m29000| 2014-11-26T14:36:26.078-0500 I NETWORK [signalProcessingThread] closing listening socket: 28
m29000| 2014-11-26T14:36:26.078-0500 I NETWORK [signalProcessingThread] closing listening socket: 29
m29000| 2014-11-26T14:36:26.078-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-29000.sock
m29000| 2014-11-26T14:36:26.078-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
m29000| 2014-11-26T14:36:26.078-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
m29000| 2014-11-26T14:36:26.078-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
m29000| 2014-11-26T14:36:26.078-0500 I NETWORK [conn2] end connection 10.33.141.202:41567 (1 connection now open)
m29000| 2014-11-26T14:36:26.078-0500 I NETWORK [conn1] end connection 127.0.0.1:59986 (1 connection now open)
m29000| 2014-11-26T14:36:26.149-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0
2014-11-26T14:36:27.078-0500 I - shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 31.56 seconds ***
31.6006 seconds
2014-11-26T14:36:27.081-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35607 #3 (1 connection now open)
2014-11-26T14:36:27.082-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2014-11-26T14:36:27.082-0500 I NETWORK [conn3] end connection 127.0.0.1:35607 (0 connections now open)
2014-11-26T14:36:27.082-0500 I COMMAND [signalProcessingThread] now exiting
2014-11-26T14:36:27.082-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2014-11-26T14:36:27.082-0500 I NETWORK [signalProcessingThread] closing listening socket: 4
2014-11-26T14:36:27.082-0500 I NETWORK [signalProcessingThread] closing listening socket: 5
2014-11-26T14:36:27.082-0500 I NETWORK [signalProcessingThread] closing listening socket: 11
2014-11-26T14:36:27.082-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27999.sock
2014-11-26T14:36:27.082-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
2014-11-26T14:36:27.082-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
2014-11-26T14:36:27.082-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2014-11-26T14:36:27.121-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0
1 tests succeeded