2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten] MongoDB starting : pid=9011 port=27999 dbpath=/data/db/sconsTests/ 64-bit host=ip-10-33-141-202
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten]
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten]
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten]
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten] allocator: tcmalloc
2014-11-26T14:33:44.082-0500 I CONTROL [initandlisten] options: { net: { http: { enabled: true }, port: 27999 }, nopreallocj: true, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/sconsTests/", engine: "wiredTiger" } }
2014-11-26T14:33:44.082-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
2014-11-26T14:33:44.116-0500 I NETWORK [websvr] admin web console waiting for connections on port 28999
2014-11-26T14:33:44.127-0500 I NETWORK [initandlisten] waiting for connections on port 27999
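The two `[initandlisten]` warnings above each come with a concrete remediation. A minimal shell sketch (the 1024/64000 limits are copied from the log; the privileged commands require root and are therefore shown as comments, not run):

```shell
# Sketch only: addresses the two startup warnings in the log above.

# 1. The log suggests setting transparent huge pages to 'never'
#    (needs root, shown as a comment):
#    echo never > /sys/kernel/mm/transparent_hugepage/enabled

# 2. The warning wants the process rlimit to be at least half the
#    open-file rlimit (0.5 * 64000 = 32000, vs the current 1024):
file_limit=64000
min_procs=$((file_limit / 2))
echo "raise 'ulimit -u' from 1024 to at least $min_procs"
```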
2014-11-26T14:33:45.071-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35392 #1 (1 connection now open)
2014-11-26T14:33:45.071-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35393 #2 (2 connections now open)
2014-11-26T14:33:45.071-0500 I NETWORK [conn1] end connection 127.0.0.1:35392 (1 connection now open)
clean_dbroot: /data/db/sconsTests/
num procs:86
running /data/mongo/mongod --port 27999 --dbpath /data/db/sconsTests/ --setParameter enableTestCommands=1 --httpinterface --storageEngine wiredTiger --nopreallocj
*******************************************
Test : mongos_rs_auth_shard_failure_tolerance.js ...
2014-11-26T14:33:45.071-0500 I NETWORK [conn2] end connection 127.0.0.1:35393 (0 connections now open)
Command : /data/mongo/mongo --port 27999 --authenticationMechanism SCRAM-SHA-1 --writeMode commands --nodb /data/mongo/jstests/sharding/mongos_rs_auth_shard_failure_tolerance.js --eval TestData = new Object();TestData.storageEngine = "wiredTiger";TestData.wiredTigerEngineConfig = "";TestData.wiredTigerCollectionConfig = "";TestData.wiredTigerIndexConfig = "";TestData.testPath = "/data/mongo/jstests/sharding/mongos_rs_auth_shard_failure_tolerance.js";TestData.testFile = "mongos_rs_auth_shard_failure_tolerance.js";TestData.testName = "mongos_rs_auth_shard_failure_tolerance";TestData.setParameters = "";TestData.setParametersMongos = "";TestData.noJournal = false;TestData.noJournalPrealloc = true;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;TestData.authMechanism = "SCRAM-SHA-1";TestData.useSSL = false;TestData.useX509 = false;MongoRunner.dataDir = "/data/db";MongoRunner.dataPath = MongoRunner.dataDir + "/";
Date : Wed Nov 26 14:33:45 2014
MongoDB shell version: 2.8.0-rc2-pre-
/data/db/
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31100, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 0, "node" : 0, "set" : "test-rs0" }, "verbose" : 1, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-0'
2014-11-26T14:33:45.119-0500 I - shell: started program (sh9029): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31100 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-0 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger
2014-11-26T14:33:45.119-0500 W NETWORK Failed to connect to 127.0.0.1:31100, reason: errno:111 Connection refused
m31100| 2014-11-26T14:33:45.128-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface,
m31100| 2014-11-26T14:33:45.128-0500 I CONTROL ** enabling http interface
m31100| note: noprealloc may hurt performance in many applications
m31100| 2014-11-26T14:33:45.146-0500 D SHARDING isInRangeTest passed
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten] MongoDB starting : pid=9029 port=31100 dbpath=/data/db/test-rs0-0 64-bit host=ip-10-33-141-202
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten]
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten]
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten]
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten] allocator: tcmalloc
m31100| 2014-11-26T14:33:45.146-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31100 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs0" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs0-0", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } }
m31100| 2014-11-26T14:33:45.146-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
m31100| 2014-11-26T14:33:45.146-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
m31100| 2014-11-26T14:33:45.146-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
m31100| 2014-11-26T14:33:45.169-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31100| 2014-11-26T14:33:45.181-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
m31100| 2014-11-26T14:33:45.181-0500 D STORAGE [initandlisten] done repairDatabases
m31100| 2014-11-26T14:33:45.181-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
m31100| 2014-11-26T14:33:45.181-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot
m31100| 2014-11-26T14:33:45.181-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200
m31100| 2014-11-26T14:33:45.181-0500 D INDEX [initandlisten] checking complete
m31100| 2014-11-26T14:33:45.181-0500 I NETWORK [websvr] admin web console waiting for connections on port 32100
m31100| 2014-11-26T14:33:45.181-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1
m31100| 2014-11-26T14:33:45.181-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0--118320920160305333 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31100| 2014-11-26T14:33:45.187-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.187-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.187-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.187-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.187-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.187-0500 D STORAGE [initandlisten] create uri: table:index-1--118320920160305333 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }
m31100| 2014-11-26T14:33:45.192-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.192-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.192-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.192-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.192-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.192-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.192-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.192-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
m31100| 2014-11-26T14:33:45.192-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:33:45.192-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
m31100| 2014-11-26T14:33:45.192-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
m31100| 2014-11-26T14:33:45.193-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
m31100| 2014-11-26T14:33:45.193-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
m31100| 2014-11-26T14:33:45.193-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
m31100| 2014-11-26T14:33:45.193-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.193-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2--118320920160305333 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31100| 2014-11-26T14:33:45.199-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.200-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.200-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.200-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.200-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.200-0500 D STORAGE [initandlisten] create uri: table:index-3--118320920160305333 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }
m31100| 2014-11-26T14:33:45.204-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.204-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.204-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.204-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.204-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.204-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.204-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
m31100| 2014-11-26T14:33:45.204-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:33:45.205-0500 I NETWORK [initandlisten] waiting for connections on port 31100
m31100| 2014-11-26T14:33:45.320-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47375 #1 (1 connection now open)
[ connection to ip-10-33-141-202:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31101, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 0, "node" : 1, "set" : "test-rs0" }, "verbose" : 1, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/test-rs0-1'
2014-11-26T14:33:45.324-0500 I - shell: started program (sh9056): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31101 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-1 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger
2014-11-26T14:33:45.324-0500 W NETWORK Failed to connect to 127.0.0.1:31101, reason: errno:111 Connection refused
m31101| 2014-11-26T14:33:45.333-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface,
m31101| 2014-11-26T14:33:45.333-0500 I CONTROL ** enabling http interface
m31101| note: noprealloc may hurt performance in many applications
m31101| 2014-11-26T14:33:45.352-0500 D SHARDING isInRangeTest passed
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten] MongoDB starting : pid=9056 port=31101 dbpath=/data/db/test-rs0-1 64-bit host=ip-10-33-141-202
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten]
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten]
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
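The node options above give `dbpath` as the template `"$set-$node"`, which the shell's ReplSetTest helper expands via `pathOpts` into the concrete `/data/db/test-rs0-1` path seen in the started-program command line. A minimal JavaScript sketch of that substitution (the loop is an illustrative stand-in for the real helper; values are copied from the log):

```javascript
// Expand "$set-$node" using the pathOpts shown in the node options.
const pathOpts = { testName: "test", shard: 0, node: 1, set: "test-rs0" };
let dbpath = "$set-$node";
for (const [key, value] of Object.entries(pathOpts)) {
  // Replace each "$key" placeholder that appears in the template.
  dbpath = dbpath.replace("$" + key, String(value));
}
// MongoRunner.dataPath was set to "/data/db/" in the harness command.
console.log("/data/db/" + dbpath); // /data/db/test-rs0-1
```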
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten]
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten] allocator: tcmalloc
m31101| 2014-11-26T14:33:45.352-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31101 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs0" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs0-1", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } }
m31101| 2014-11-26T14:33:45.352-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
m31101| 2014-11-26T14:33:45.352-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
m31101| 2014-11-26T14:33:45.352-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
m31101| 2014-11-26T14:33:45.375-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31101| 2014-11-26T14:33:45.387-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
m31101| 2014-11-26T14:33:45.387-0500 D STORAGE [initandlisten] done repairDatabases
m31101| 2014-11-26T14:33:45.387-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
m31101| 2014-11-26T14:33:45.387-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot
m31101| 2014-11-26T14:33:45.387-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200
m31101| 2014-11-26T14:33:45.387-0500 D INDEX [initandlisten] checking complete
m31101| 2014-11-26T14:33:45.388-0500 I NETWORK [websvr] admin web console waiting for connections on port 32101
m31101| 2014-11-26T14:33:45.388-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1
m31101| 2014-11-26T14:33:45.388-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0--377709408879965486 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31101| 2014-11-26T14:33:45.402-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.402-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.402-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.402-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.402-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.402-0500 D STORAGE [initandlisten] create uri: table:index-1--377709408879965486 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }
m31101| 2014-11-26T14:33:45.407-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.407-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.407-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.407-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.407-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.407-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.407-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.407-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
m31101| 2014-11-26T14:33:45.407-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31101| 2014-11-26T14:33:45.407-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
m31101| 2014-11-26T14:33:45.407-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
m31101| 2014-11-26T14:33:45.408-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
m31101| 2014-11-26T14:33:45.408-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
m31101| 2014-11-26T14:33:45.408-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
m31101| 2014-11-26T14:33:45.408-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.408-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2--377709408879965486 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31101| 2014-11-26T14:33:45.413-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.413-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.413-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.413-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.413-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.414-0500 D STORAGE [initandlisten] create uri: table:index-3--377709408879965486 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }
m31101| 2014-11-26T14:33:45.419-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.419-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.419-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.419-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.419-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.419-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.419-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
m31101| 2014-11-26T14:33:45.419-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31101| 2014-11-26T14:33:45.419-0500 I NETWORK [initandlisten] waiting for connections on port 31101
m31101| 2014-11-26T14:33:45.525-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:36342 #1 (1 connection now open)
[ connection to ip-10-33-141-202:31100, connection to ip-10-33-141-202:31101 ]
{ "replSetInitiate" : { "_id" : "test-rs0", "members" : [ { "_id" : 0, "host" : "ip-10-33-141-202:31100" }, { "_id" : 1, "host" : "ip-10-33-141-202:31101" } ] } }
m31100| 2014-11-26T14:33:45.527-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
m31100| 2014-11-26T14:33:45.527-0500 I REPL [conn1] replSetInitiate admin command received from client
m31100| 2014-11-26T14:33:45.528-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31101| 2014-11-26T14:33:45.528-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:53987 #2 (2 connections now open)
m31100| 2014-11-26T14:33:45.528-0500 D NETWORK [conn1] connected to server ip-10-33-141-202:31101 (10.33.141.202)
m31101| 2014-11-26T14:33:45.530-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D444273385A792F576D5A6F665A4E58464C44336C586E6E5A5643366471583743) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31101| 2014-11-26T14:33:45.543-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D444273385A792F576D5A6F665A4E58464C44336C586E6E5A564336647158374336427A36753376584D4A6E566157555465344F56683453303543323666...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31101| 2014-11-26T14:33:45.543-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local
m31101| 2014-11-26T14:33:45.543-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31101| 2014-11-26T14:33:45.543-0500 I QUERY [conn2] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31100| 2014-11-26T14:33:45.543-0500 I REPL [conn1] replSet replSetInitiate config object with 2 members parses ok
m31101| 2014-11-26T14:33:45.544-0500 I NETWORK [conn2] end connection 10.33.141.202:53987 (1 connection now open)
m31100| 2014-11-26T14:33:45.544-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31101| 2014-11-26T14:33:45.544-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:53988 #3 (2 connections now open)
m31100| 2014-11-26T14:33:45.544-0500 D NETWORK [ReplExecNetThread-7] connected to server ip-10-33-141-202:31101 (10.33.141.202)
m31101| 2014-11-26T14:33:45.546-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D334C4A6C2B6F4553362B5278737434377739776A4478377756616E6677547A31) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31101| 2014-11-26T14:33:45.559-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D334C4A6C2B6F4553362B5278737434377739776A4478377756616E6677547A312F5077363064545776384A7869374379577952594C5133646E59304652...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31101| 2014-11-26T14:33:45.559-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
m31101| 2014-11-26T14:33:45.559-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31101| 2014-11-26T14:33:45.559-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: true } ntoreturn:1 keyUpdates:0 reslen:112 0ms
m31100| 2014-11-26T14:33:45.560-0500 D STORAGE [conn1] stored meta data for local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.560-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31100| 2014-11-26T14:33:45.560-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-4--118320920160305333 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31100| 2014-11-26T14:33:45.560-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38053 #2 (2 connections now open)
m31101| 2014-11-26T14:33:45.560-0500 D NETWORK [ReplExecNetThread-0] connected to server ip-10-33-141-202:31100 (10.33.141.202)
m31100| 2014-11-26T14:33:45.562-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D54475336624968546C49354A4C4D5937374B6D683963336B4C65497569485A52) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:33:45.564-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.564-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.564-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.564-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.564-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.565-0500 D STORAGE [conn1] create uri: table:index-5--118320920160305333 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }
m31100| 2014-11-26T14:33:45.569-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.570-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.570-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.570-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.570-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.570-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.570-0500 D STORAGE [conn1] local.system.replset: clearing plan cache - collection info cache reset
m31100| 2014-11-26T14:33:45.570-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:33:45.571-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs0", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31100", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
m31100| 2014-11-26T14:33:45.571-0500 I REPL [ReplicationExecutor] transition to STARTUP2
m31100| 2014-11-26T14:33:45.571-0500 I REPL [conn1] ******
m31101| 2014-11-26T14:33:45.571-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:102 0ms
m31100| 2014-11-26T14:33:45.571-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31101 is now in state STARTUP
m31100| 2014-11-26T14:33:45.572-0500 I REPL [conn1] creating replication oplog of size: 40MB...
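The `saslStart` payloads logged during the handshakes above are hex-encoded BinData. Decoding one shows the standard SCRAM-SHA-1 client-first message: the GS2 header `n,,`, the internal `__system` user, and a random client nonce. A sketch using Node's `Buffer` (hex copied verbatim from the conn2 `saslStart` entry earlier in the log):

```javascript
// Decode the hex BinData payload from the first saslStart in the log.
const hex =
  "6E2C2C6E3D5F5F73797374656D2C723D" + // decodes to "n,,n=__system,r="
  "444273385A792F576D5A6F665A4E58464C44336C586E6E5A5643366471583743";
const clientFirst = Buffer.from(hex, "hex").toString("utf8");
console.log(clientFirst); // n,,n=__system,r=DBs8Zy/WmZofZNXFLD3lXnnZVC6dqX7C
```

The truncated `saslContinue` payloads (ending in `...`) cannot be decoded the same way, since the log elides their tails.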
m31100| 2014-11-26T14:33:45.572-0500 D STORAGE [conn1] stored meta data for local.oplog.rs @ 0:4
m31100| 2014-11-26T14:33:45.572-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-6--118320920160305333 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u
m31100| 2014-11-26T14:33:45.577-0500 D STORAGE [conn1] looking up metadata for: local.oplog.rs @ 0:4
m31100| 2014-11-26T14:33:45.577-0500 D STORAGE [conn1] WiredTigerKVEngine::flushAllFiles
m31100| 2014-11-26T14:33:45.577-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D54475336624968546C49354A4C4D5937374B6D683963336B4C65497569485A52696D623543536D48663163467645762B4D68686834725854445162346B...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:33:45.577-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:33:45.577-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:33:45.577-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: -2, from: "", checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:597 0ms
m31101| 2014-11-26T14:33:45.578-0500 D REPL [ReplicationExecutor] Received new config via heartbeat with version 1
m31101| 2014-11-26T14:33:45.578-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31100| 2014-11-26T14:33:45.578-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38054 #3 (3 connections now open)
m31101| 2014-11-26T14:33:45.578-0500 D NETWORK connected to server ip-10-33-141-202:31100 (10.33.141.202)
m31100| 2014-11-26T14:33:45.580-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D73467566424C665A4152536B614D785A706F5431594F3235342F545064413852) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:33:45.593-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D73467566424C665A4152536B614D785A706F5431594F3235342F545064413852503948517A432F434B43504833743358765275666445494B59685A6649...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:33:45.593-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:33:45.593-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:33:45.593-0500 I QUERY [conn3] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31100| 2014-11-26T14:33:45.594-0500 I NETWORK [conn3] end connection 10.33.141.202:38054 (2 connections now open)
m31101| 2014-11-26T14:33:45.594-0500 D STORAGE [WriteReplSetConfig] stored meta data for local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.594-0500 D STORAGE [WriteReplSetConfig] WiredTigerKVEngine::createRecordStore uri: table:collection-4--377709408879965486 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31101| 2014-11-26T14:33:45.605-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.605-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.605-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.605-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.605-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.605-0500 D STORAGE [WriteReplSetConfig] create uri: table:index-5--377709408879965486 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }
m31101| 2014-11-26T14:33:45.615-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.615-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.615-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.615-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.615-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.615-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.615-0500 D STORAGE [WriteReplSetConfig] local.system.replset: clearing plan cache - collection info cache reset
m31101| 2014-11-26T14:33:45.615-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
m31101| 2014-11-26T14:33:45.615-0500 I REPL [WriteReplSetConfig] Starting replication applier threads
m31101| 2014-11-26T14:33:45.615-0500 I REPL [rsSync] replSet warning did not receive a valid config yet, sleeping 5 seconds
m31101| 2014-11-26T14:33:45.615-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs0", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31100", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0,
votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } } m31101| 2014-11-26T14:33:45.615-0500 I REPL [ReplicationExecutor] transition to STARTUP2 m31100| 2014-11-26T14:33:45.616-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31101| 2014-11-26T14:33:45.616-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31100 is now in state STARTUP2 m31100| 2014-11-26T14:33:45.680-0500 I REPL [conn1] ****** m31100| 2014-11-26T14:33:45.680-0500 I REPL [conn1] Starting replication applier threads m31100| 2014-11-26T14:33:45.681-0500 I REPL [ReplicationExecutor] transition to RECOVERING m31100| 2014-11-26T14:33:45.681-0500 I QUERY [conn1] command admin.$cmd command: replSetInitiate { replSetInitiate: { _id: "test-rs0", members: [ { _id: 0.0, host: "ip-10-33-141-202:31100" }, { _id: 1.0, host: "ip-10-33-141-202:31101" } ] } } keyUpdates:0 reslen:37 153ms m31100| 2014-11-26T14:33:45.682-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:45.683-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762b19:1 0 m31101| 2014-11-26T14:33:45.683-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access m31101| 2014-11-26T14:33:45.683-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:45.684-0500 I REPL [ReplicationExecutor] transition to SECONDARY m31100| 2014-11-26T14:33:45.884-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:45.885-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 
2014-11-26T14:33:46.085-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:46.086-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:46.287-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:46.287-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:46.488-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:46.488-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:46.689-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:46.689-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:46.890-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:46.891-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:47.091-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:47.092-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:47.292-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:47.293-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:47.494-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 
1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:47.494-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:47.571-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31100| 2014-11-26T14:33:47.571-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31101 is now in state STARTUP2 m31100| 2014-11-26T14:33:47.571-0500 I REPL [ReplicationExecutor] Standing for election m31101| 2014-11-26T14:33:47.571-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs0", opTime: new Date(6086099332811980801), who: "ip-10-33-141-202:31100", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:257 0ms m31100| 2014-11-26T14:33:47.572-0500 I REPL [ReplicationExecutor] not electing self, ip-10-33-141-202:31101 would veto with 'errmsg: "I don't think ip-10-33-141-202:31100 is electable because the member is not currently a secondary; member is more than 10 seconds behind the most up-t..."' m31100| 2014-11-26T14:33:47.572-0500 I REPL [ReplicationExecutor] not electing self, we are not freshest m31100| 2014-11-26T14:33:47.616-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31101| 2014-11-26T14:33:47.616-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31100 is now in state SECONDARY m31100| 2014-11-26T14:33:47.695-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:47.695-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:47.896-0500 I QUERY [conn1] command admin.$cmd command: 
isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:47.896-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:48.097-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:48.098-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:48.298-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:48.300-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:48.501-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:48.501-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:48.702-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:48.702-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:48.903-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:48.904-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:49.104-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:49.105-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:49.306-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:49.307-0500 I 
QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31100| 2014-11-26T14:33:49.508-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:49.508-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:49.572-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31100| 2014-11-26T14:33:49.572-0500 I REPL [ReplicationExecutor] Standing for election m31101| 2014-11-26T14:33:49.572-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs0", opTime: new Date(6086099332811980801), who: "ip-10-33-141-202:31100", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:70 0ms m31100| 2014-11-26T14:33:49.572-0500 I REPL [ReplicationExecutor] replSet info electSelf m31101| 2014-11-26T14:33:49.572-0500 I REPL [ReplicationExecutor] replSetElect voting yea for ip-10-33-141-202:31100 (0) m31101| 2014-11-26T14:33:49.573-0500 I QUERY [conn3] command admin.$cmd command: replSetElect { replSetElect: 1, set: "test-rs0", who: "ip-10-33-141-202:31100", whoid: 0, cfgver: 1, round: ObjectId('54762b1d331da6b15b6573aa') } ntoreturn:1 keyUpdates:0 reslen:66 0ms m31100| 2014-11-26T14:33:49.573-0500 D REPL [ReplicationExecutor] replSet elect res: { vote: 1, round: ObjectId('54762b1d331da6b15b6573aa'), ok: 1.0 } m31100| 2014-11-26T14:33:49.573-0500 I REPL [ReplicationExecutor] replSet election succeeded, assuming primary role m31100| 2014-11-26T14:33:49.573-0500 I REPL [ReplicationExecutor] transition to PRIMARY m31100| 2014-11-26T14:33:49.616-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } 
ntoreturn:1 keyUpdates:0 reslen:142 0ms m31101| 2014-11-26T14:33:49.616-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31100 is now in state PRIMARY m31100| 2014-11-26T14:33:49.685-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted m31100| 2014-11-26T14:33:49.709-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:33:49.709-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31100| 2014-11-26T14:33:49.710-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:33:49.710-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:33:49.711-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31100| 2014-11-26T14:33:49.911-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:33:49.912-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:33:49.912-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31100| 2014-11-26T14:33:50.113-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:33:50.113-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:33:50.114-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31100| 2014-11-26T14:33:50.315-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:33:50.315-0500 I QUERY [conn1] command admin.$cmd command: 
isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:33:50.315-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31100| 2014-11-26T14:33:50.516-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:33:50.517-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:33:50.517-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:33:50.615-0500 I REPL [rsSync] ****** m31101| 2014-11-26T14:33:50.616-0500 I REPL [rsSync] creating replication oplog of size: 40MB... m31101| 2014-11-26T14:33:50.616-0500 D STORAGE [rsSync] stored meta data for local.oplog.rs @ 0:4 m31101| 2014-11-26T14:33:50.616-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-6--377709408879965486 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u m31101| 2014-11-26T14:33:50.620-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:33:50.620-0500 D STORAGE [rsSync] WiredTigerKVEngine::flushAllFiles m31100| 2014-11-26T14:33:50.718-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:33:50.718-0500 I REPL [rsSync] ****** m31101| 2014-11-26T14:33:50.718-0500 I REPL [rsSync] initial sync pending m31101| 2014-11-26T14:33:50.719-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:33:50.719-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:33:50.719-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:33:50.719-0500 D STORAGE [rsSync] 
looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:33:50.719-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:33:50.719-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:33:50.719-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:33:50.719-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:33:50.719-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31101| 2014-11-26T14:33:50.719-0500 D STORAGE [rsSync] local.oplog.rs: clearing plan cache - collection info cache reset m31101| 2014-11-26T14:33:50.719-0500 I REPL [ReplicationExecutor] syncing from: ip-10-33-141-202:31100 m31101| 2014-11-26T14:33:50.720-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:33:50.720-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31100| 2014-11-26T14:33:50.720-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38055 #4 (3 connections now open) m31101| 2014-11-26T14:33:50.720-0500 D NETWORK [rsSync] connected to server ip-10-33-141-202:31100 (10.33.141.202) m31100| 2014-11-26T14:33:50.721-0500 I QUERY [conn4] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4C316A73442B7435303466416253517A4E426458746848703752343751486E69) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31100| 2014-11-26T14:33:50.734-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4C316A73442B7435303466416253517A4E426458746848703752343751486E69426636307238797A4F6433466E54664132486B635A4D52435A63735949...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:33:50.734-0500 I ACCESS [conn4] Successfully authenticated as principal __system on 
local m31100| 2014-11-26T14:33:50.735-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31100| 2014-11-26T14:33:50.735-0500 I QUERY [conn4] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31100| 2014-11-26T14:33:50.736-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31101| 2014-11-26T14:33:50.737-0500 D STORAGE [rsSync] stored meta data for local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.737-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-7--377709408879965486 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31101| 2014-11-26T14:33:50.739-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.739-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.739-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.739-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.739-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.739-0500 D STORAGE [rsSync] create uri: table:index-8--377709408879965486 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.minvalid" } m31101| 2014-11-26T14:33:50.744-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.744-0500 D STORAGE [rsSync] looking up metadata 
for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.744-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.744-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.744-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.744-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.744-0500 D STORAGE [rsSync] local.replset.minvalid: clearing plan cache - collection info cache reset m31101| 2014-11-26T14:33:50.744-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31101| 2014-11-26T14:33:50.744-0500 I REPL [rsSync] initial sync drop all databases m31101| 2014-11-26T14:33:50.744-0500 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1 m31101| 2014-11-26T14:33:50.744-0500 I REPL [rsSync] initial sync clone all databases m31100| 2014-11-26T14:33:50.745-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1 m31100| 2014-11-26T14:33:50.745-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1 m31100| 2014-11-26T14:33:50.745-0500 D STORAGE [conn4] looking up metadata for: local.oplog.rs @ 0:4 m31100| 2014-11-26T14:33:50.745-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2 m31100| 2014-11-26T14:33:50.745-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2 m31100| 2014-11-26T14:33:50.745-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3 m31100| 2014-11-26T14:33:50.745-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3 m31100| 2014-11-26T14:33:50.746-0500 I QUERY [conn4] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 1ms m31101| 2014-11-26T14:33:50.746-0500 I REPL [rsSync] initial sync data copy, starting syncup m31101| 2014-11-26T14:33:50.746-0500 I REPL [rsSync] oplog sync 1 
of 3 m31100| 2014-11-26T14:33:50.746-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31101| 2014-11-26T14:33:50.746-0500 I REPL [rsSync] oplog sync 2 of 3 m31100| 2014-11-26T14:33:50.746-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31101| 2014-11-26T14:33:50.746-0500 I REPL [rsSync] initial sync building indexes m31101| 2014-11-26T14:33:50.746-0500 I REPL [rsSync] oplog sync 3 of 3 m31100| 2014-11-26T14:33:50.748-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31101| 2014-11-26T14:33:50.748-0500 I QUERY [rsSync] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms m31101| 2014-11-26T14:33:50.748-0500 I REPL [rsSync] initial sync finishing up m31101| 2014-11-26T14:33:50.748-0500 I REPL [rsSync] replSet set minValid=54762b19:1 m31101| 2014-11-26T14:33:50.748-0500 I REPL [rsSync] initial sync done m31100| 2014-11-26T14:33:50.751-0500 I NETWORK [conn4] end connection 10.33.141.202:38055 (2 connections now open) m31101| 2014-11-26T14:33:50.752-0500 I REPL [ReplicationExecutor] transition to RECOVERING m31101| 2014-11-26T14:33:50.753-0500 I REPL [ReplicationExecutor] transition to SECONDARY m31100| 2014-11-26T14:33:50.920-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31101| 2014-11-26T14:33:50.921-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:33:50.921-0500 I QUERY [conn1] command admin.$cmd 
command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms Replica set test! ReplSetTest Starting Set ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31200, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 1, "node" : 0, "set" : "test-rs1" }, "verbose" : 1, "restart" : undefined } ReplSetTest Starting.... Resetting db path '/data/db/test-rs1-0' 2014-11-26T14:33:50.924-0500 I - shell: started program (sh9228): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31200 --noprealloc --smallfiles --rest --replSet test-rs1 --dbpath /data/db/test-rs1-0 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger 2014-11-26T14:33:50.925-0500 W NETWORK Failed to connect to 127.0.0.1:31200, reason: errno:111 Connection refused m31200| 2014-11-26T14:33:50.934-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface, m31200| 2014-11-26T14:33:50.934-0500 I CONTROL ** enabling http interface m31200| note: noprealloc may hurt performance in many applications m31200| 2014-11-26T14:33:50.953-0500 D SHARDING isInRangeTest passed m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] MongoDB starting : pid=9228 port=31200 dbpath=/data/db/test-rs1-0 64-bit host=ip-10-33-141-202 m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never' m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. 
rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files. m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre- m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2 m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013 m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49 m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] allocator: tcmalloc m31200| 2014-11-26T14:33:50.953-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31200 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs1" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs1-0", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } } m31200| 2014-11-26T14:33:50.953-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger m31200| 2014-11-26T14:33:50.953-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200 m31200| 2014-11-26T14:33:50.953-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB), m31200| 2014-11-26T14:33:50.984-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: 
type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
 m31200| 2014-11-26T14:33:51.024-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
 m31200| 2014-11-26T14:33:51.024-0500 D STORAGE [initandlisten] done repairDatabases
 m31200| 2014-11-26T14:33:51.025-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
 m31200| 2014-11-26T14:33:51.025-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot
 m31200| 2014-11-26T14:33:51.025-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200
 m31200| 2014-11-26T14:33:51.025-0500 D INDEX [initandlisten] checking complete
 m31200| 2014-11-26T14:33:51.025-0500 I NETWORK [websvr] admin web console waiting for connections on port 32200
 m31200| 2014-11-26T14:33:51.025-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1
 m31200| 2014-11-26T14:33:51.025-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0--4532563397751070484 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
 m31200| 2014-11-26T14:33:51.033-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.034-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.034-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.034-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.034-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.034-0500 D STORAGE [initandlisten] create uri: table:index-1--4532563397751070484 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }
 m31200| 2014-11-26T14:33:51.044-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.045-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.045-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.045-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.045-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.045-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.045-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
 m31200| 2014-11-26T14:33:51.045-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31200| 2014-11-26T14:33:51.045-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
 m31200| 2014-11-26T14:33:51.046-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
 m31200| 2014-11-26T14:33:51.046-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
 m31200| 2014-11-26T14:33:51.047-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
 m31200| 2014-11-26T14:33:51.047-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
 m31200| 2014-11-26T14:33:51.047-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.047-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2--4532563397751070484 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
 m31200| 2014-11-26T14:33:51.055-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.055-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.055-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.055-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.055-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.055-0500 D STORAGE [initandlisten] create uri: table:index-3--4532563397751070484 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }
 m31200| 2014-11-26T14:33:51.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.062-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
 m31200| 2014-11-26T14:33:51.062-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31200| 2014-11-26T14:33:51.062-0500 I NETWORK [initandlisten] waiting for connections on port 31200
 m31200| 2014-11-26T14:33:51.125-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50666 #1 (1 connection now open)
[ connection to ip-10-33-141-202:31200 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31200, 31201 ] 31201 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31201, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 1, "node" : 1, "set" : "test-rs1" }, "verbose" : 1, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/test-rs1-1'
2014-11-26T14:33:51.128-0500 I - shell: started program (sh9255): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31201 --noprealloc --smallfiles --rest --replSet test-rs1 --dbpath /data/db/test-rs1-1 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger
2014-11-26T14:33:51.129-0500 W NETWORK Failed to connect to 127.0.0.1:31201, reason: errno:111 Connection refused
 m31201| 2014-11-26T14:33:51.138-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface,
 m31201| 2014-11-26T14:33:51.138-0500 I CONTROL ** enabling http interface
 m31201| note: noprealloc may hurt performance in many applications
 m31201| 2014-11-26T14:33:51.156-0500 D SHARDING isInRangeTest passed
 m31201| 2014-11-26T14:33:51.156-0500 I CONTROL [initandlisten] MongoDB starting : pid=9255 port=31201 dbpath=/data/db/test-rs1-1 64-bit host=ip-10-33-141-202
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten]
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten]
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten]
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten] allocator: tcmalloc
 m31201| 2014-11-26T14:33:51.157-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31201 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs1" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs1-1", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } }
 m31201| 2014-11-26T14:33:51.157-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
 m31201| 2014-11-26T14:33:51.157-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
 m31201| 2014-11-26T14:33:51.157-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
 m31201| 2014-11-26T14:33:51.195-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
 m31201| 2014-11-26T14:33:51.223-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
 m31201| 2014-11-26T14:33:51.223-0500 D STORAGE [initandlisten] done repairDatabases
 m31201| 2014-11-26T14:33:51.223-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
 m31201| 2014-11-26T14:33:51.224-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot
 m31201| 2014-11-26T14:33:51.224-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200
 m31201| 2014-11-26T14:33:51.224-0500 D INDEX [initandlisten] checking complete
 m31201| 2014-11-26T14:33:51.224-0500 I NETWORK [websvr] admin web console waiting for connections on port 32201
 m31201| 2014-11-26T14:33:51.224-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1
 m31201| 2014-11-26T14:33:51.224-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0-373128891435557444 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
 m31201| 2014-11-26T14:33:51.232-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.233-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.233-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.233-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.233-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.233-0500 D STORAGE [initandlisten] create uri: table:index-1-373128891435557444 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }
 m31201| 2014-11-26T14:33:51.245-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.245-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.245-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.245-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.245-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.245-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.245-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
 m31201| 2014-11-26T14:33:51.245-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
 m31201| 2014-11-26T14:33:51.245-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
 m31201| 2014-11-26T14:33:51.246-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
 m31201| 2014-11-26T14:33:51.246-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
 m31201| 2014-11-26T14:33:51.246-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
 m31201| 2014-11-26T14:33:51.246-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
 m31201| 2014-11-26T14:33:51.247-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.247-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2-373128891435557444 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
 m31201| 2014-11-26T14:33:51.252-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.252-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.252-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.252-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.252-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.252-0500 D STORAGE [initandlisten] create uri: table:index-3-373128891435557444 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }
 m31201| 2014-11-26T14:33:51.262-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.262-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.262-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.262-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.262-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.262-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.262-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
 m31201| 2014-11-26T14:33:51.262-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
 m31201| 2014-11-26T14:33:51.262-0500 I NETWORK [initandlisten] waiting for connections on port 31201
 m31201| 2014-11-26T14:33:51.329-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:44003 #1 (1 connection now open)
[ connection to ip-10-33-141-202:31200, connection to ip-10-33-141-202:31201 ]
{ "replSetInitiate" : { "_id" : "test-rs1", "members" : [ { "_id" : 0, "host" : "ip-10-33-141-202:31200" }, { "_id" : 1, "host" : "ip-10-33-141-202:31201" } ] } }
 m31200| 2014-11-26T14:33:51.330-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
 m31200| 2014-11-26T14:33:51.330-0500 I REPL [conn1] replSetInitiate admin command received from client
 m31200| 2014-11-26T14:33:51.332-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
 m31201| 2014-11-26T14:33:51.332-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:53733 #2 (2 connections now open)
 m31200| 2014-11-26T14:33:51.332-0500 D NETWORK [conn1] connected to server ip-10-33-141-202:31201 (10.33.141.202)
 m31201| 2014-11-26T14:33:51.333-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D336D30414854344B73376A50304D3175382F53616A6A736B367845356D792B64) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
 m31201| 2014-11-26T14:33:51.346-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D336D30414854344B73376A50304D3175382F53616A6A736B367845356D792B6442416D37584A4754497271637A786D5255426F494A304667612B7A4D4F...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
 m31201| 2014-11-26T14:33:51.346-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local
 m31201| 2014-11-26T14:33:51.346-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
 m31201| 2014-11-26T14:33:51.347-0500 I QUERY [conn2] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
 m31200| 2014-11-26T14:33:51.347-0500 I REPL [conn1] replSet replSetInitiate config object with 2 members parses ok
 m31201| 2014-11-26T14:33:51.347-0500 I NETWORK [conn2] end connection 10.33.141.202:53733 (1 connection now open)
 m31200| 2014-11-26T14:33:51.347-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
 m31201| 2014-11-26T14:33:51.347-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:53734 #3 (2 connections now open)
 m31200| 2014-11-26T14:33:51.347-0500 D NETWORK [ReplExecNetThread-0] connected to server ip-10-33-141-202:31201 (10.33.141.202)
 m31201| 2014-11-26T14:33:51.349-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4E47705171424B65396B667859784D2B44454F6F6B6E6738355678796D562F68) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
 m31201| 2014-11-26T14:33:51.362-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4E47705171424B65396B667859784D2B44454F6F6B6E6738355678796D562F68495148394A5A337A4972356C6C42484F594F533841354C436B45574956...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
 m31201| 2014-11-26T14:33:51.362-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
 m31201| 2014-11-26T14:33:51.362-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
 m31201| 2014-11-26T14:33:51.363-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: true } ntoreturn:1 keyUpdates:0 reslen:112 0ms
 m31201| 2014-11-26T14:33:51.363-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
 m31200| 2014-11-26T14:33:51.363-0500 D STORAGE [conn1] stored meta data for local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.363-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40519 #2 (2 connections now open)
 m31201| 2014-11-26T14:33:51.363-0500 D NETWORK [ReplExecNetThread-0] connected to server ip-10-33-141-202:31200 (10.33.141.202)
 m31200| 2014-11-26T14:33:51.364-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-4--4532563397751070484 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
 m31200| 2014-11-26T14:33:51.365-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D466D564D4E54714B6C4D61566A7759595156562F753941393246686874304C68) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
 m31200| 2014-11-26T14:33:51.371-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.371-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.372-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.372-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.372-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.372-0500 D STORAGE [conn1] create uri: table:index-5--4532563397751070484 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }
 m31200| 2014-11-26T14:33:51.378-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D466D564D4E54714B6C4D61566A7759595156562F753941393246686874304C68576E3767356E594B577039506B3855364F4D6D4A79534F384347483537...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
 m31200| 2014-11-26T14:33:51.379-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local
 m31200| 2014-11-26T14:33:51.379-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
 m31200| 2014-11-26T14:33:51.379-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: -2, from: "", checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:102 0ms
 m31200| 2014-11-26T14:33:51.380-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.380-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.381-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.381-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.381-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.381-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.381-0500 D STORAGE [conn1] local.system.replset: clearing plan cache - collection info cache reset
 m31200| 2014-11-26T14:33:51.381-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3
 m31200| 2014-11-26T14:33:51.381-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs1", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31200", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31201", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
 m31200| 2014-11-26T14:33:51.381-0500 I REPL [ReplicationExecutor] transition to STARTUP2
 m31201| 2014-11-26T14:33:51.381-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:102 0ms
 m31200| 2014-11-26T14:33:51.381-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31201 is now in state STARTUP
 m31200| 2014-11-26T14:33:51.381-0500 I REPL [conn1] ******
 m31200| 2014-11-26T14:33:51.381-0500 I REPL [conn1] creating replication oplog of size: 40MB...
 m31200| 2014-11-26T14:33:51.382-0500 D STORAGE [conn1] stored meta data for local.oplog.rs @ 0:4
 m31200| 2014-11-26T14:33:51.382-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-6--4532563397751070484 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u
 m31200| 2014-11-26T14:33:51.394-0500 D STORAGE [conn1] looking up metadata for: local.oplog.rs @ 0:4
 m31200| 2014-11-26T14:33:51.394-0500 D STORAGE [conn1] WiredTigerKVEngine::flushAllFiles
 m31200| 2014-11-26T14:33:51.565-0500 I REPL [conn1] ******
 m31200| 2014-11-26T14:33:51.565-0500 I REPL [conn1] Starting replication applier threads
 m31200| 2014-11-26T14:33:51.565-0500 I REPL [ReplicationExecutor] transition to RECOVERING
 m31200| 2014-11-26T14:33:51.566-0500 I QUERY [conn1] command admin.$cmd command: replSetInitiate { replSetInitiate: { _id: "test-rs1", members: [ { _id: 0.0, host: "ip-10-33-141-202:31200" }, { _id: 1.0, host: "ip-10-33-141-202:31201" } ] } } keyUpdates:0 reslen:37 235ms
 m31200| 2014-11-26T14:33:51.566-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31200| 2014-11-26T14:33:51.567-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762b1f:1 0
 m31201| 2014-11-26T14:33:51.567-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
 m31201| 2014-11-26T14:33:51.567-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
 m31200| 2014-11-26T14:33:51.568-0500 I REPL [ReplicationExecutor] transition to SECONDARY
 m31101| 2014-11-26T14:33:51.573-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:158 0ms
 m31100| 2014-11-26T14:33:51.573-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31101 is now in state SECONDARY
 m31100| 2014-11-26T14:33:51.616-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
 m31101| 2014-11-26T14:33:51.616-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762b19:1 0
 m31101| 2014-11-26T14:33:51.616-0500 I REPL [ReplicationExecutor] could not find member to sync from
 m31200| 2014-11-26T14:33:51.768-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:51.769-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
 m31200| 2014-11-26T14:33:51.970-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:51.970-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
 m31200| 2014-11-26T14:33:52.171-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:52.171-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
 m31200| 2014-11-26T14:33:52.372-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:52.373-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
 m31200| 2014-11-26T14:33:52.573-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:52.574-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
 m31200| 2014-11-26T14:33:52.775-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:52.775-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
 m31200| 2014-11-26T14:33:52.976-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:52.976-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
 m31200| 2014-11-26T14:33:53.177-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:53.178-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
 m31200| 2014-11-26T14:33:53.378-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:53.379-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:266 0ms
 m31200| 2014-11-26T14:33:53.379-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: -2, from: "", checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:597 0ms
 m31201| 2014-11-26T14:33:53.379-0500 D REPL [ReplicationExecutor] Received new config via heartbeat with version 1
 m31201| 2014-11-26T14:33:53.380-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
 m31200| 2014-11-26T14:33:53.380-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40520 #3 (3 connections now open)
 m31201| 2014-11-26T14:33:53.380-0500 D NETWORK connected to server ip-10-33-141-202:31200 (10.33.141.202)
 m31201| 2014-11-26T14:33:53.381-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:102 0ms
 m31200| 2014-11-26T14:33:53.381-0500 I REPL [ReplicationExecutor] Standing for election
 m31201| 2014-11-26T14:33:53.381-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs1", opTime: new Date(6086099358581784577), who: "ip-10-33-141-202:31200", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:154 0ms
 m31200| 2014-11-26T14:33:53.382-0500 I REPL [ReplicationExecutor] not electing self, we could not contact enough voting members
 m31200| 2014-11-26T14:33:53.382-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D3964355435365A32533146674448733173636C3263456D4E6A4F77644C4E3775) } ntoreturn:1 keyUpdates:0 reslen:179 1ms
 m31200| 2014-11-26T14:33:53.395-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D3964355435365A32533146674448733173636C3263456D4E6A4F77644C4E3775525A72654E485148526F754D372B7A7738347375674F527174636E5A62...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
 m31200| 2014-11-26T14:33:53.395-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
 m31200| 2014-11-26T14:33:53.395-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
 m31200| 2014-11-26T14:33:53.395-0500 I QUERY [conn3] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
 m31200| 2014-11-26T14:33:53.395-0500 I NETWORK [conn3] end connection 10.33.141.202:40520 (2 connections now open)
 m31201| 2014-11-26T14:33:53.396-0500 D STORAGE [WriteReplSetConfig] stored meta data for local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.396-0500 D STORAGE [WriteReplSetConfig] WiredTigerKVEngine::createRecordStore uri: table:collection-4-373128891435557444 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
 m31201| 2014-11-26T14:33:53.402-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.402-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.402-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.402-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.402-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.402-0500 D STORAGE [WriteReplSetConfig] create uri: table:index-5-373128891435557444 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" }
 m31201| 2014-11-26T14:33:53.407-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.407-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.407-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.407-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.407-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.407-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.407-0500 D STORAGE [WriteReplSetConfig] local.system.replset: clearing plan cache - collection info cache reset
 m31201| 2014-11-26T14:33:53.407-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3
 m31201| 2014-11-26T14:33:53.408-0500 I REPL [WriteReplSetConfig] Starting replication applier threads
 m31201| 2014-11-26T14:33:53.408-0500 I REPL [rsSync] replSet warning did not receive a valid config yet, sleeping 5 seconds
 m31201| 2014-11-26T14:33:53.408-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs1", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31200", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31201", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
 m31201| 2014-11-26T14:33:53.408-0500 I REPL [ReplicationExecutor] transition to STARTUP2
 m31200| 2014-11-26T14:33:53.408-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
 m31201| 2014-11-26T14:33:53.408-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31200 is now in state SECONDARY
 m31101| 2014-11-26T14:33:53.574-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
 m31200| 2014-11-26T14:33:53.580-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:53.580-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31100| 2014-11-26T14:33:53.616-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
 m31200| 2014-11-26T14:33:53.781-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:53.783-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31200| 2014-11-26T14:33:53.984-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:53.984-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31200| 2014-11-26T14:33:54.185-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:54.185-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31200| 2014-11-26T14:33:54.386-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:54.387-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31200| 2014-11-26T14:33:54.587-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:54.588-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31200| 2014-11-26T14:33:54.788-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:54.789-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31200| 2014-11-26T14:33:54.990-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:54.990-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31200| 2014-11-26T14:33:55.191-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:55.191-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31201| 2014-11-26T14:33:55.381-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
 m31200| 2014-11-26T14:33:55.381-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31201 is now in state STARTUP2
 m31200| 2014-11-26T14:33:55.381-0500 I REPL [ReplicationExecutor] Standing for election
 m31201| 2014-11-26T14:33:55.382-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs1", opTime: new Date(6086099358581784577), who: "ip-10-33-141-202:31200", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:70 0ms
 m31200| 2014-11-26T14:33:55.382-0500 I REPL [ReplicationExecutor] replSet info electSelf
 m31201| 2014-11-26T14:33:55.382-0500 I REPL [ReplicationExecutor] replSetElect voting yea for ip-10-33-141-202:31200 (0)
 m31201| 2014-11-26T14:33:55.382-0500 I QUERY [conn3] command admin.$cmd command: replSetElect { replSetElect: 1, set: "test-rs1", who: "ip-10-33-141-202:31200", whoid: 0, cfgver: 1, round: ObjectId('54762b232c08972cefc9db66') } ntoreturn:1 keyUpdates:0 reslen:66 0ms
 m31200| 2014-11-26T14:33:55.382-0500 D REPL [ReplicationExecutor] replSet elect res: { vote: 1, round: ObjectId('54762b232c08972cefc9db66'), ok: 1.0 }
 m31200| 2014-11-26T14:33:55.382-0500 I REPL [ReplicationExecutor] replSet election succeeded, assuming primary role
 m31200| 2014-11-26T14:33:55.382-0500 I REPL [ReplicationExecutor] transition to PRIMARY
 m31200| 2014-11-26T14:33:55.392-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
 m31201| 2014-11-26T14:33:55.392-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms
 m31200| 2014-11-26T14:33:55.409-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
 m31201| 2014-11-26T14:33:55.409-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31200 is now in state PRIMARY
 m31200| 2014-11-26T14:33:55.568-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted
 m31101| 2014-11-26T14:33:55.574-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
 m31200| 2014-11-26T14:33:55.593-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
 m31201| 2014-11-26T14:33:55.594-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
 m31200| 2014-11-26T14:33:55.594-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
 m31201| 2014-11-26T14:33:55.594-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
 m31201| 2014-11-26T14:33:55.595-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
 m31100| 2014-11-26T14:33:55.616-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
 m31200| 2014-11-26T14:33:55.795-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
 m31201| 2014-11-26T14:33:55.796-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
 m31201| 2014-11-26T14:33:55.796-0500 I QUERY [conn1]
command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:33:55.997-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:55.998-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:55.998-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:33:56.199-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:56.199-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:56.200-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:33:56.400-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:56.401-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:56.401-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:33:56.602-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:56.602-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:56.603-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:33:56.803-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:56.804-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 
2014-11-26T14:33:56.805-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:33:57.005-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:57.006-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:57.006-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:33:57.207-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:57.211-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:57.213-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:57.381-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31200| 2014-11-26T14:33:57.410-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31200| 2014-11-26T14:33:57.414-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:57.414-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:57.414-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31101| 2014-11-26T14:33:57.574-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: 
"ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31200| 2014-11-26T14:33:57.615-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:57.616-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:57.616-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31100| 2014-11-26T14:33:57.617-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31200| 2014-11-26T14:33:57.817-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:57.817-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:57.818-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:33:58.018-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:58.019-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:58.019-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:33:58.220-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:58.220-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:58.221-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 
2014-11-26T14:33:58.408-0500 I REPL [rsSync] ****** m31201| 2014-11-26T14:33:58.408-0500 I REPL [rsSync] creating replication oplog of size: 40MB... m31201| 2014-11-26T14:33:58.408-0500 D STORAGE [rsSync] stored meta data for local.oplog.rs @ 0:4 m31201| 2014-11-26T14:33:58.408-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-6-373128891435557444 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u m31201| 2014-11-26T14:33:58.411-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:33:58.411-0500 D STORAGE [rsSync] WiredTigerKVEngine::flushAllFiles m31200| 2014-11-26T14:33:58.421-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:58.515-0500 I REPL [rsSync] ****** m31201| 2014-11-26T14:33:58.515-0500 I REPL [rsSync] initial sync pending m31201| 2014-11-26T14:33:58.515-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:58.515-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:33:58.516-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:33:58.516-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:33:58.516-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:33:58.516-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:33:58.516-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:33:58.516-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:33:58.516-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4 m31201| 2014-11-26T14:33:58.516-0500 I QUERY [conn1] 
command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:58.516-0500 D STORAGE [rsSync] local.oplog.rs: clearing plan cache - collection info cache reset m31201| 2014-11-26T14:33:58.516-0500 I REPL [ReplicationExecutor] syncing from: ip-10-33-141-202:31200 m31201| 2014-11-26T14:33:58.517-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31200| 2014-11-26T14:33:58.517-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40521 #4 (3 connections now open) m31201| 2014-11-26T14:33:58.517-0500 D NETWORK [rsSync] connected to server ip-10-33-141-202:31200 (10.33.141.202) m31200| 2014-11-26T14:33:58.519-0500 I QUERY [conn4] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D5477536E434230346A53617A317A577969354A49592B786D4561506E375A5837) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31200| 2014-11-26T14:33:58.531-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D5477536E434230346A53617A317A577969354A49592B786D4561506E375A58375A64426776503849385034386948503875316A477676636B4D534D4E30...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31200| 2014-11-26T14:33:58.532-0500 I ACCESS [conn4] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:33:58.532-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31200| 2014-11-26T14:33:58.532-0500 I QUERY [conn4] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31200| 2014-11-26T14:33:58.534-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 
reslen:106 0ms m31201| 2014-11-26T14:33:58.534-0500 D STORAGE [rsSync] stored meta data for local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.534-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-7-373128891435557444 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31201| 2014-11-26T14:33:58.538-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.538-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.538-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.538-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.538-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.539-0500 D STORAGE [rsSync] create uri: table:index-8-373128891435557444 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.minvalid" } m31201| 2014-11-26T14:33:58.543-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.543-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.543-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.543-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.543-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.543-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.543-0500 D STORAGE [rsSync] local.replset.minvalid: clearing plan cache - collection info cache reset m31201| 
2014-11-26T14:33:58.543-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5 m31201| 2014-11-26T14:33:58.544-0500 I REPL [rsSync] initial sync drop all databases m31201| 2014-11-26T14:33:58.544-0500 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1 m31201| 2014-11-26T14:33:58.544-0500 I REPL [rsSync] initial sync clone all databases m31200| 2014-11-26T14:33:58.544-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:33:58.544-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1 m31200| 2014-11-26T14:33:58.544-0500 D STORAGE [conn4] looking up metadata for: local.oplog.rs @ 0:4 m31200| 2014-11-26T14:33:58.544-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2 m31200| 2014-11-26T14:33:58.544-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2 m31200| 2014-11-26T14:33:58.545-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3 m31200| 2014-11-26T14:33:58.545-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3 m31200| 2014-11-26T14:33:58.545-0500 I QUERY [conn4] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 1ms m31201| 2014-11-26T14:33:58.545-0500 I REPL [rsSync] initial sync data copy, starting syncup m31201| 2014-11-26T14:33:58.545-0500 I REPL [rsSync] oplog sync 1 of 3 m31200| 2014-11-26T14:33:58.545-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31201| 2014-11-26T14:33:58.545-0500 I REPL [rsSync] oplog sync 2 of 3 m31200| 2014-11-26T14:33:58.545-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31201| 2014-11-26T14:33:58.545-0500 I REPL [rsSync] initial sync 
building indexes m31201| 2014-11-26T14:33:58.545-0500 I REPL [rsSync] oplog sync 3 of 3 m31200| 2014-11-26T14:33:58.547-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31201| 2014-11-26T14:33:58.547-0500 I QUERY [rsSync] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms m31201| 2014-11-26T14:33:58.547-0500 I REPL [rsSync] initial sync finishing up m31201| 2014-11-26T14:33:58.547-0500 I REPL [rsSync] replSet set minValid=54762b1f:1 m31201| 2014-11-26T14:33:58.547-0500 I REPL [rsSync] initial sync done m31200| 2014-11-26T14:33:58.550-0500 I NETWORK [conn4] end connection 10.33.141.202:40521 (2 connections now open) m31201| 2014-11-26T14:33:58.551-0500 I REPL [ReplicationExecutor] transition to RECOVERING m31201| 2014-11-26T14:33:58.553-0500 I REPL [ReplicationExecutor] transition to SECONDARY m31200| 2014-11-26T14:33:58.717-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:33:58.717-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31201| 2014-11-26T14:33:58.718-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms Replica set test! ReplSetTest Starting Set ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31300, 31301 ] 31300 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31300, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs2", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 2, "node" : 0, "set" : "test-rs2" }, "verbose" : 1, "restart" : undefined } ReplSetTest Starting.... 
Resetting db path '/data/db/test-rs2-0'
2014-11-26T14:33:58.721-0500 I - shell: started program (sh9427): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31300 --noprealloc --smallfiles --rest --replSet test-rs2 --dbpath /data/db/test-rs2-0 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger
2014-11-26T14:33:58.722-0500 W NETWORK Failed to connect to 127.0.0.1:31300, reason: errno:111 Connection refused
m31300| 2014-11-26T14:33:58.731-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface,
m31300| 2014-11-26T14:33:58.731-0500 I CONTROL ** enabling http interface
m31300| note: noprealloc may hurt performance in many applications
m31300| 2014-11-26T14:33:58.749-0500 D SHARDING isInRangeTest passed
m31300| 2014-11-26T14:33:58.749-0500 I CONTROL [initandlisten] MongoDB starting : pid=9427 port=31300 dbpath=/data/db/test-rs2-0 64-bit host=ip-10-33-141-202
m31300| 2014-11-26T14:33:58.749-0500 I CONTROL [initandlisten]
m31300| 2014-11-26T14:33:58.749-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
m31300| 2014-11-26T14:33:58.749-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
m31300| 2014-11-26T14:33:58.749-0500 I CONTROL [initandlisten]
m31300| 2014-11-26T14:33:58.749-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
m31300| 2014-11-26T14:33:58.750-0500 I CONTROL [initandlisten]
m31300| 2014-11-26T14:33:58.750-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
m31300| 2014-11-26T14:33:58.750-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
m31300| 2014-11-26T14:33:58.750-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m31300| 2014-11-26T14:33:58.750-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m31300| 2014-11-26T14:33:58.750-0500 I CONTROL [initandlisten] allocator: tcmalloc
m31300| 2014-11-26T14:33:58.750-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31300 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs2" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs2-0", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } }
m31300| 2014-11-26T14:33:58.750-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
m31300| 2014-11-26T14:33:58.750-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
m31300| 2014-11-26T14:33:58.750-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
m31300| 2014-11-26T14:33:58.771-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31300| 2014-11-26T14:33:58.782-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
m31300| 2014-11-26T14:33:58.782-0500 D STORAGE [initandlisten] done repairDatabases
m31300| 2014-11-26T14:33:58.783-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
m31300| 2014-11-26T14:33:58.783-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot
m31300| 2014-11-26T14:33:58.783-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200
m31300| 2014-11-26T14:33:58.783-0500 D INDEX [initandlisten] checking complete
m31300| 2014-11-26T14:33:58.783-0500 I NETWORK [websvr] admin web console waiting for connections on port 32300
m31300| 2014-11-26T14:33:58.783-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1
m31300| 2014-11-26T14:33:58.783-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0--1578539612747083226 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31300| 2014-11-26T14:33:58.788-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.789-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.789-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.789-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.789-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.789-0500 D STORAGE [initandlisten] create uri: table:index-1--1578539612747083226 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }
m31300| 2014-11-26T14:33:58.795-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.795-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.795-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.795-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.795-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.795-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.795-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
m31300| 2014-11-26T14:33:58.795-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:33:58.795-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
m31300| 2014-11-26T14:33:58.795-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
m31300| 2014-11-26T14:33:58.796-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
m31300| 2014-11-26T14:33:58.796-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
m31300| 2014-11-26T14:33:58.796-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
m31300| 2014-11-26T14:33:58.796-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.796-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2--1578539612747083226 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31300| 2014-11-26T14:33:58.801-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.801-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.801-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.801-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.801-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.801-0500 D STORAGE [initandlisten] create uri: table:index-3--1578539612747083226 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" }
m31300| 2014-11-26T14:33:58.807-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.807-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.807-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.807-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.807-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.807-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.807-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset
m31300| 2014-11-26T14:33:58.807-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:33:58.808-0500 I NETWORK [initandlisten] waiting for connections on port 31300
m31300| 2014-11-26T14:33:58.922-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50926 #1 (1 connection now open)
[ connection to ip-10-33-141-202:31300 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31300, 31301 ] 31301 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : "jstests/libs/key1", "port" : 31301, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs2", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 2, "node" : 1, "set" : "test-rs2" }, "verbose" : 1, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/test-rs2-1'
2014-11-26T14:33:58.926-0500 I - shell: started program (sh9454): /data/mongo/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 31301 --noprealloc --smallfiles --rest --replSet test-rs2 --dbpath /data/db/test-rs2-1 -v --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger
2014-11-26T14:33:58.926-0500 W NETWORK Failed to connect to 127.0.0.1:31301, reason: errno:111 Connection refused
m31301| 2014-11-26T14:33:58.935-0500 I CONTROL ** WARNING: --rest is specified without --httpinterface,
m31301| 2014-11-26T14:33:58.935-0500 I CONTROL ** enabling http interface
m31301| note: noprealloc may hurt performance in many applications
m31301| 2014-11-26T14:33:58.954-0500 D SHARDING isInRangeTest passed
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten] MongoDB starting : pid=9454 port=31301 dbpath=/data/db/test-rs2-1 64-bit host=ip-10-33-141-202
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten]
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten]
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten]
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten] allocator: tcmalloc
m31301| 2014-11-26T14:33:58.954-0500 I CONTROL [initandlisten] options: { net: { http: { RESTInterfaceEnabled: true, enabled: true }, port: 31301 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "test-rs2" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/test-rs2-1", engine: "wiredTiger", mmapv1: { preallocDataFiles: false, smallFiles: true } }, systemLog: { verbosity: 1 } }
m31301| 2014-11-26T14:33:58.954-0500 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
m31301| 2014-11-26T14:33:58.954-0500 D NETWORK [initandlisten] fd limit hard:64000 soft:64000 max conn: 51200
m31301| 2014-11-26T14:33:58.954-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
m31301| 2014-11-26T14:33:58.978-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:_mdb_catalog config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31301| 2014-11-26T14:33:58.990-0500 D STORAGE [initandlisten] enter repairDatabases (to check pdfile version #)
m31301| 2014-11-26T14:33:58.990-0500 D STORAGE [initandlisten] done repairDatabases
m31301| 2014-11-26T14:33:58.990-0500 I QUERY [initandlisten] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
m31301| 2014-11-26T14:33:58.990-0500 D COMMAND [snapshot] BackgroundJob starting: snapshot
m31301| 2014-11-26T14:33:58.990-0500 D NETWORK [websvr] fd limit hard:64000 soft:64000 max conn: 51200
m31301| 2014-11-26T14:33:58.991-0500 D INDEX [initandlisten] checking complete
m31301| 2014-11-26T14:33:58.991-0500 I NETWORK [websvr] admin web console waiting for connections on port 32301
m31301| 2014-11-26T14:33:58.991-0500 D STORAGE [initandlisten] stored meta data for local.me @ 0:1
m31301| 2014-11-26T14:33:58.991-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-0--3633662199818464429 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31301| 2014-11-26T14:33:58.998-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:58.998-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:58.998-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:58.998-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:58.998-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:58.998-0500 D STORAGE [initandlisten] create uri: table:index-1--3633662199818464429 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.me" }
m31301| 2014-11-26T14:33:59.003-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:59.003-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:59.003-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:59.003-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:59.003-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:59.003-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:59.003-0500 D STORAGE [initandlisten] local.me: clearing plan cache - collection info cache reset
m31301| 2014-11-26T14:33:59.003-0500 D STORAGE [initandlisten] looking up metadata for: local.me @ 0:1
m31301| 2014-11-26T14:33:59.003-0500 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
m31301| 2014-11-26T14:33:59.003-0500 D COMMAND [TTLMonitor] BackgroundJob starting: TTLMonitor
m31301| 2014-11-26T14:33:59.004-0500 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
m31301| 2014-11-26T14:33:59.004-0500 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
m31301| 2014-11-26T14:33:59.004-0500 D STORAGE [initandlisten] create collection local.startup_log { capped: true, size: 10485760 }
m31301| 2014-11-26T14:33:59.004-0500 D STORAGE [initandlisten] stored meta data for local.startup_log @ 0:2
m31301| 2014-11-26T14:33:59.004-0500 D STORAGE [initandlisten] WiredTigerKVEngine::createRecordStore uri: table:collection-2--3633662199818464429 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31301| 2014-11-26T14:33:59.010-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:33:59.010-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2
m31301| 2014-11-26T14:33:59.010-0500 D STORAGE [initandlisten]
looking up metadata for: local.startup_log @ 0:2 m31301| 2014-11-26T14:33:59.010-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31301| 2014-11-26T14:33:59.010-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31301| 2014-11-26T14:33:59.010-0500 D STORAGE [initandlisten] create uri: table:index-3--3633662199818464429 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.startup_log" } m31301| 2014-11-26T14:33:59.015-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31301| 2014-11-26T14:33:59.015-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31301| 2014-11-26T14:33:59.015-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31301| 2014-11-26T14:33:59.015-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31301| 2014-11-26T14:33:59.015-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31301| 2014-11-26T14:33:59.015-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31301| 2014-11-26T14:33:59.015-0500 D STORAGE [initandlisten] local.startup_log: clearing plan cache - collection info cache reset m31301| 2014-11-26T14:33:59.015-0500 D STORAGE [initandlisten] looking up metadata for: local.startup_log @ 0:2 m31301| 2014-11-26T14:33:59.015-0500 I NETWORK [initandlisten] waiting for connections on port 31301 m31301| 2014-11-26T14:33:59.127-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:49882 #1 (1 connection now open) [ connection to ip-10-33-141-202:31300, connection to ip-10-33-141-202:31301 ] { "replSetInitiate" : { "_id" : "test-rs2", "members" : [ { "_id" : 0, "host" : "ip-10-33-141-202:31300" }, { "_id" : 1, "host" : "ip-10-33-141-202:31301" } ] } } m31300| 2014-11-26T14:33:59.128-0500 I ACCESS 
[conn1] note: no users configured in admin.system.users, allowing localhost access m31300| 2014-11-26T14:33:59.128-0500 I REPL [conn1] replSetInitiate admin command received from client m31300| 2014-11-26T14:33:59.129-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:33:59.129-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40980 #2 (2 connections now open) m31300| 2014-11-26T14:33:59.129-0500 D NETWORK [conn1] connected to server ip-10-33-141-202:31301 (10.33.141.202) m31301| 2014-11-26T14:33:59.131-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D437733317953484C76703549762B504C5A6C7937694A37325375734934704D36) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31301| 2014-11-26T14:33:59.144-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D437733317953484C76703549762B504C5A6C7937694A37325375734934704D3655357650486F3159453468346348356B7833587539344C75614A4D5550...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31301| 2014-11-26T14:33:59.144-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local m31301| 2014-11-26T14:33:59.144-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31300| 2014-11-26T14:33:59.144-0500 I REPL [conn1] replSet replSetInitiate config object with 2 members parses ok m31301| 2014-11-26T14:33:59.144-0500 I QUERY [conn2] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms m31301| 2014-11-26T14:33:59.144-0500 I NETWORK [conn2] end connection 10.33.141.202:40980 (1 connection now open) m31300| 2014-11-26T14:33:59.144-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:33:59.144-0500 I NETWORK [initandlisten] 
connection accepted from 10.33.141.202:40981 #3 (2 connections now open) m31300| 2014-11-26T14:33:59.145-0500 D NETWORK [ReplExecNetThread-0] connected to server ip-10-33-141-202:31301 (10.33.141.202) m31301| 2014-11-26T14:33:59.146-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D70456A4D794E58676858565858714972737A59484544564B3872426868377567) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31301| 2014-11-26T14:33:59.159-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D70456A4D794E58676858565858714972737A59484544564B387242686837756756583355624F5935675139693759736F71377668346A74442F476B3572...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31301| 2014-11-26T14:33:59.159-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local m31301| 2014-11-26T14:33:59.159-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31301| 2014-11-26T14:33:59.160-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: true } ntoreturn:1 keyUpdates:0 reslen:112 0ms m31300| 2014-11-26T14:33:59.160-0500 D STORAGE [conn1] stored meta data for local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.160-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31300| 2014-11-26T14:33:59.160-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60609 #2 (2 connections now open) m31300| 2014-11-26T14:33:59.160-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-4--1578539612747083226 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31301| 2014-11-26T14:33:59.160-0500 
D NETWORK [ReplExecNetThread-0] connected to server ip-10-33-141-202:31300 (10.33.141.202) m31300| 2014-11-26T14:33:59.162-0500 I QUERY [conn2] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4433486256323258332F30336F3875314F546967722B392B72484C7236343671) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31300| 2014-11-26T14:33:59.164-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.164-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.164-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.164-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.164-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.164-0500 D STORAGE [conn1] create uri: table:index-5--1578539612747083226 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" } m31300| 2014-11-26T14:33:59.169-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.169-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.169-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.169-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.169-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.169-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.169-0500 D STORAGE [conn1] local.system.replset: clearing plan cache - collection info cache reset m31300| 
2014-11-26T14:33:59.169-0500 D STORAGE [conn1] looking up metadata for: local.system.replset @ 0:3 m31300| 2014-11-26T14:33:59.170-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs2", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31300", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31301", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } } m31300| 2014-11-26T14:33:59.170-0500 I REPL [ReplicationExecutor] transition to STARTUP2 m31300| 2014-11-26T14:33:59.170-0500 I REPL [conn1] ****** m31300| 2014-11-26T14:33:59.170-0500 I REPL [conn1] creating replication oplog of size: 40MB... m31300| 2014-11-26T14:33:59.170-0500 D STORAGE [conn1] stored meta data for local.oplog.rs @ 0:4 m31301| 2014-11-26T14:33:59.170-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:102 0ms m31300| 2014-11-26T14:33:59.170-0500 D STORAGE [conn1] WiredTigerKVEngine::createRecordStore uri: table:collection-6--1578539612747083226 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u m31300| 2014-11-26T14:33:59.170-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31301 is now in state STARTUP m31300| 2014-11-26T14:33:59.176-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4433486256323258332F30336F3875314F546967722B392B72484C7236343671693079626F2B61694A346D6D6A544361616D4E2F6C6762535172544841...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms 
m31300| 2014-11-26T14:33:59.176-0500 I ACCESS [conn2] Successfully authenticated as principal __system on local m31300| 2014-11-26T14:33:59.176-0500 I QUERY [conn2] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31300| 2014-11-26T14:33:59.176-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: -2, from: "", checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:597 0ms m31300| 2014-11-26T14:33:59.176-0500 D STORAGE [conn1] looking up metadata for: local.oplog.rs @ 0:4 m31301| 2014-11-26T14:33:59.176-0500 D REPL [ReplicationExecutor] Received new config via heartbeat with version 1 m31300| 2014-11-26T14:33:59.176-0500 D STORAGE [conn1] WiredTigerKVEngine::flushAllFiles m31301| 2014-11-26T14:33:59.177-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31300| 2014-11-26T14:33:59.177-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60610 #3 (3 connections now open) m31301| 2014-11-26T14:33:59.177-0500 D NETWORK connected to server ip-10-33-141-202:31300 (10.33.141.202) m31300| 2014-11-26T14:33:59.178-0500 I QUERY [conn3] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D412F6445443832767A6A70737744476A694652503148594231354E727A742F7A) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31300| 2014-11-26T14:33:59.191-0500 I QUERY [conn3] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D412F6445443832767A6A70737744476A694652503148594231354E727A742F7A792B7A767839325A716B4C4F515368574647707845755A674A73647164...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31300| 2014-11-26T14:33:59.191-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local m31300| 2014-11-26T14:33:59.191-0500 I QUERY [conn3] command local.$cmd command: saslContinue 
{ saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31300| 2014-11-26T14:33:59.192-0500 I QUERY [conn3] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms m31300| 2014-11-26T14:33:59.192-0500 I NETWORK [conn3] end connection 10.33.141.202:60610 (2 connections now open) m31301| 2014-11-26T14:33:59.192-0500 D STORAGE [WriteReplSetConfig] stored meta data for local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.192-0500 D STORAGE [WriteReplSetConfig] WiredTigerKVEngine::createRecordStore uri: table:collection-4--3633662199818464429 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31301| 2014-11-26T14:33:59.199-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.199-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.199-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.199-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.199-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.199-0500 D STORAGE [WriteReplSetConfig] create uri: table:index-5--3633662199818464429 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.system.replset" } m31301| 2014-11-26T14:33:59.207-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.207-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.207-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 
2014-11-26T14:33:59.207-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.207-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.207-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.207-0500 D STORAGE [WriteReplSetConfig] local.system.replset: clearing plan cache - collection info cache reset m31301| 2014-11-26T14:33:59.208-0500 D STORAGE [WriteReplSetConfig] looking up metadata for: local.system.replset @ 0:3 m31301| 2014-11-26T14:33:59.208-0500 I REPL [WriteReplSetConfig] Starting replication applier threads m31301| 2014-11-26T14:33:59.208-0500 I REPL [rsSync] replSet warning did not receive a valid config yet, sleeping 5 seconds m31301| 2014-11-26T14:33:59.208-0500 I REPL [ReplicationExecutor] new replica set config in use: { _id: "test-rs2", version: 1, members: [ { _id: 0, host: "ip-10-33-141-202:31300", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ip-10-33-141-202:31301", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatTimeoutSecs: 10, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } } m31301| 2014-11-26T14:33:59.208-0500 I REPL [ReplicationExecutor] transition to STARTUP2 m31300| 2014-11-26T14:33:59.208-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31301| 2014-11-26T14:33:59.208-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31300 is now in state STARTUP2 m31300| 2014-11-26T14:33:59.274-0500 I REPL [conn1] ****** m31300| 2014-11-26T14:33:59.275-0500 I REPL [conn1] Starting replication 
applier threads m31300| 2014-11-26T14:33:59.275-0500 I REPL [ReplicationExecutor] transition to RECOVERING m31300| 2014-11-26T14:33:59.275-0500 I QUERY [conn1] command admin.$cmd command: replSetInitiate { replSetInitiate: { _id: "test-rs2", members: [ { _id: 0.0, host: "ip-10-33-141-202:31300" }, { _id: 1.0, host: "ip-10-33-141-202:31301" } ] } } keyUpdates:0 reslen:37 147ms m31300| 2014-11-26T14:33:59.275-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762b27:1 0 m31300| 2014-11-26T14:33:59.276-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:33:59.276-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access m31301| 2014-11-26T14:33:59.276-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:33:59.277-0500 I REPL [ReplicationExecutor] transition to SECONDARY m31201| 2014-11-26T14:33:59.382-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:158 0ms m31200| 2014-11-26T14:33:59.382-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31201 is now in state SECONDARY m31201| 2014-11-26T14:33:59.409-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762b1f:1 0 m31201| 2014-11-26T14:33:59.409-0500 I REPL [ReplicationExecutor] could not find member to sync from m31200| 2014-11-26T14:33:59.410-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31300| 2014-11-26T14:33:59.477-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:33:59.477-0500 I QUERY [conn1] 
command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:33:59.575-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31100| 2014-11-26T14:33:59.617-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31300| 2014-11-26T14:33:59.678-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:33:59.679-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:33:59.879-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:33:59.880-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:00.081-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:00.081-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:00.282-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:00.282-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:00.483-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:00.483-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:00.684-0500 I QUERY [conn1] command 
admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:00.685-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:00.885-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:00.886-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:01.087-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:01.087-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:01.170-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31300| 2014-11-26T14:34:01.170-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31301 is now in state STARTUP2 m31300| 2014-11-26T14:34:01.170-0500 I REPL [ReplicationExecutor] Standing for election m31301| 2014-11-26T14:34:01.170-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs2", opTime: new Date(6086099392941522945), who: "ip-10-33-141-202:31300", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:257 0ms m31300| 2014-11-26T14:34:01.171-0500 I REPL [ReplicationExecutor] not electing self, ip-10-33-141-202:31301 would veto with 'errmsg: "I don't think ip-10-33-141-202:31300 is electable because the member is not currently a secondary; member is more than 10 seconds behind the most up-t..."' m31300| 2014-11-26T14:34:01.171-0500 I REPL [ReplicationExecutor] not electing self, we are not freshest m31300| 2014-11-26T14:34:01.208-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 
1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31301| 2014-11-26T14:34:01.208-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31300 is now in state SECONDARY m31300| 2014-11-26T14:34:01.288-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:01.288-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31201| 2014-11-26T14:34:01.383-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31200| 2014-11-26T14:34:01.411-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31300| 2014-11-26T14:34:01.489-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:01.490-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:34:01.575-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31100| 2014-11-26T14:34:01.617-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31300| 2014-11-26T14:34:01.690-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:01.691-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 
1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:01.892-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:01.892-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:02.093-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:02.094-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:02.294-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:02.295-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:02.496-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:02.496-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:02.697-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:02.697-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:02.898-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:02.898-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31300| 2014-11-26T14:34:03.099-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:03.099-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:341 0ms m31301| 2014-11-26T14:34:03.170-0500 I QUERY [conn3] command 
admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31300| 2014-11-26T14:34:03.171-0500 I REPL [ReplicationExecutor] Standing for election m31301| 2014-11-26T14:34:03.171-0500 I QUERY [conn3] command admin.$cmd command: replSetFresh { replSetFresh: 1, set: "test-rs2", opTime: new Date(6086099392941522945), who: "ip-10-33-141-202:31300", cfgver: 1, id: 0 } ntoreturn:1 keyUpdates:0 reslen:70 0ms m31300| 2014-11-26T14:34:03.171-0500 I REPL [ReplicationExecutor] replSet info electSelf m31301| 2014-11-26T14:34:03.171-0500 I REPL [ReplicationExecutor] replSetElect voting yea for ip-10-33-141-202:31300 (0) m31301| 2014-11-26T14:34:03.171-0500 I QUERY [conn3] command admin.$cmd command: replSetElect { replSetElect: 1, set: "test-rs2", who: "ip-10-33-141-202:31300", whoid: 0, cfgver: 1, round: ObjectId('54762b2b46c1f4ff67d18e74') } ntoreturn:1 keyUpdates:0 reslen:66 0ms m31300| 2014-11-26T14:34:03.171-0500 D REPL [ReplicationExecutor] replSet elect res: { vote: 1, round: ObjectId('54762b2b46c1f4ff67d18e74'), ok: 1.0 } m31300| 2014-11-26T14:34:03.171-0500 I REPL [ReplicationExecutor] replSet election succeeded, assuming primary role m31300| 2014-11-26T14:34:03.171-0500 I REPL [ReplicationExecutor] transition to PRIMARY m31300| 2014-11-26T14:34:03.208-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31301| 2014-11-26T14:34:03.209-0500 I REPL [ReplicationExecutor] Member ip-10-33-141-202:31300 is now in state PRIMARY m31300| 2014-11-26T14:34:03.277-0500 I REPL [rsSync] transition to primary complete; database writes are now permitted m31300| 2014-11-26T14:34:03.300-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31301| 
2014-11-26T14:34:03.301-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:34:03.301-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:34:03.301-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:34:03.302-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31201| 2014-11-26T14:34:03.383-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31200| 2014-11-26T14:34:03.411-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31300| 2014-11-26T14:34:03.502-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:34:03.503-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:34:03.503-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31101| 2014-11-26T14:34:03.575-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31100| 2014-11-26T14:34:03.618-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m31300| 2014-11-26T14:34:03.704-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:34:03.704-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:34:03.705-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:34:03.906-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:34:03.906-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:34:03.906-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:34:04.107-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:34:04.107-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:34:04.108-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:34:04.208-0500 I REPL [rsSync] ******
m31301| 2014-11-26T14:34:04.208-0500 I REPL [rsSync] creating replication oplog of size: 40MB...
m31301| 2014-11-26T14:34:04.208-0500 D STORAGE [rsSync] stored meta data for local.oplog.rs @ 0:4
m31301| 2014-11-26T14:34:04.208-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-6--3633662199818464429 config: type=file,memory_page_max=100m,block_compressor=snappy,,type=file,app_metadata=(oplogKeyExtractionVersion=1),key_format=q,value_format=u
m31301| 2014-11-26T14:34:04.212-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:34:04.212-0500 D STORAGE [rsSync] WiredTigerKVEngine::flushAllFiles
m31300| 2014-11-26T14:34:04.309-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:34:04.315-0500 I REPL [rsSync] ******
m31301| 2014-11-26T14:34:04.316-0500 I REPL [rsSync] initial sync pending
m31301| 2014-11-26T14:34:04.316-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:34:04.316-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:34:04.316-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:34:04.316-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:34:04.316-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:34:04.316-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:34:04.316-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:34:04.316-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:34:04.316-0500 D STORAGE [rsSync] looking up metadata for: local.oplog.rs @ 0:4
m31301| 2014-11-26T14:34:04.316-0500 D STORAGE [rsSync] local.oplog.rs: clearing plan cache - collection info cache reset
m31301| 2014-11-26T14:34:04.316-0500 I REPL [ReplicationExecutor] syncing from: ip-10-33-141-202:31300
m31301| 2014-11-26T14:34:04.316-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:34:04.317-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31300| 2014-11-26T14:34:04.317-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60611 #4 (3 connections now open)
m31301| 2014-11-26T14:34:04.317-0500 D NETWORK [rsSync] connected to server ip-10-33-141-202:31300 (10.33.141.202)
m31300| 2014-11-26T14:34:04.319-0500 I QUERY [conn4] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D2F4F57447835506A664D6435304956734652424E715650373264485545523068) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31300| 2014-11-26T14:34:04.331-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D2F4F57447835506A664D6435304956734652424E715650373264485545523068644F5364762F61614C2B2F3961614A5759334A6E4B6A4A654A3270706A...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31300| 2014-11-26T14:34:04.331-0500 I ACCESS [conn4] Successfully authenticated as principal __system on local
m31300| 2014-11-26T14:34:04.332-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31300| 2014-11-26T14:34:04.332-0500 I QUERY [conn4] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31300| 2014-11-26T14:34:04.333-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31301| 2014-11-26T14:34:04.333-0500 D STORAGE [rsSync] stored meta data for local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.333-0500 D STORAGE [rsSync] WiredTigerKVEngine::createRecordStore uri: table:collection-7--3633662199818464429 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31301| 2014-11-26T14:34:04.337-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.337-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.337-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.337-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.337-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.337-0500 D STORAGE [rsSync] create uri: table:index-8--3633662199818464429 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "local.replset.minvalid" }
m31301| 2014-11-26T14:34:04.344-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.344-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.344-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.344-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.344-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.344-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.344-0500 D STORAGE [rsSync] local.replset.minvalid: clearing plan cache - collection info cache reset
m31301| 2014-11-26T14:34:04.344-0500 D STORAGE [rsSync] looking up metadata for: local.replset.minvalid @ 0:5
m31301| 2014-11-26T14:34:04.344-0500 I REPL [rsSync] initial sync drop all databases
m31301| 2014-11-26T14:34:04.344-0500 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1
m31301| 2014-11-26T14:34:04.344-0500 I REPL [rsSync] initial sync clone all databases
m31300| 2014-11-26T14:34:04.345-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:34:04.345-0500 D STORAGE [conn4] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:34:04.345-0500 D STORAGE [conn4] looking up metadata for: local.oplog.rs @ 0:4
m31300| 2014-11-26T14:34:04.345-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:34:04.345-0500 D STORAGE [conn4] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:34:04.345-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:34:04.345-0500 D STORAGE [conn4] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:34:04.345-0500 I QUERY [conn4] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 1ms
m31301| 2014-11-26T14:34:04.345-0500 I REPL [rsSync] initial sync data copy, starting syncup
m31301| 2014-11-26T14:34:04.346-0500 I REPL [rsSync] oplog sync 1 of 3
m31300| 2014-11-26T14:34:04.346-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31301| 2014-11-26T14:34:04.346-0500 I REPL [rsSync] oplog sync 2 of 3
m31300| 2014-11-26T14:34:04.346-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31301| 2014-11-26T14:34:04.346-0500 I REPL [rsSync] initial sync building indexes
m31301| 2014-11-26T14:34:04.346-0500 I REPL [rsSync] oplog sync 3 of 3
m31300| 2014-11-26T14:34:04.348-0500 I QUERY [conn4] query local.oplog.rs query: { query: {}, orderby: { $natural: -1 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31301| 2014-11-26T14:34:04.348-0500 I QUERY [rsSync] query admin.system.roles planSummary: EOF ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms
m31301| 2014-11-26T14:34:04.348-0500 I REPL [rsSync] initial sync finishing up
m31301| 2014-11-26T14:34:04.348-0500 I REPL [rsSync] replSet set minValid=54762b27:1
m31301| 2014-11-26T14:34:04.349-0500 I REPL [rsSync] initial sync done
m31300| 2014-11-26T14:34:04.352-0500 I NETWORK [conn4] end connection 10.33.141.202:60611 (2 connections now open)
m31301| 2014-11-26T14:34:04.352-0500 I REPL [ReplicationExecutor] transition to RECOVERING
m31301| 2014-11-26T14:34:04.353-0500 I REPL [ReplicationExecutor] transition to SECONDARY
m31300| 2014-11-26T14:34:04.517-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:34:04.517-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:34:04.518-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31100| 2014-11-26T14:34:04.518-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31101| 2014-11-26T14:34:04.519-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31100| 2014-11-26T14:34:04.520-0500 I QUERY [conn1] command admin.$cmd command: isMaster { isMaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31100| 2014-11-26T14:34:04.522-0500 I ACCESS [conn1] Unauthorized not authorized on admin to execute command { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762b2c5cf5867836012f33') } ], ordered: true }
m31100| 2014-11-26T14:34:04.523-0500 I QUERY [conn1] command admin.$cmd command: isMaster { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762b2c5cf5867836012f33') } ], ordered: true } keyUpdates:0 reslen:205 0ms
m31100| 2014-11-26T14:34:04.525-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D45464175697A475375793136376D4848743247375871677830416D66514C692B) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:34:04.539-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D45464175697A475375793136376D4848743247375871677830416D66514C692B7733705374483033523079302B38514361764F4D503267395770745165...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:34:04.539-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:34:04.539-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31101| 2014-11-26T14:34:04.541-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D3666726F35564333734C446F594E796268722F4F6C6D3975705A766267584F73) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31101| 2014-11-26T14:34:04.553-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D3666726F35564333734C446F594E796268722F4F6C6D3975705A766267584F73777379712F724538395658504473716C744D474D74394B3446614F4674...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31101| 2014-11-26T14:34:04.554-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31101| 2014-11-26T14:34:04.554-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:34:04.555-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31101| 2014-11-26T14:34:04.555-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31100| 2014-11-26T14:34:04.556-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: starting: timestamp for primary, ip-10-33-141-202:31100, is Timestamp(1417030425, 1)
m31100| 2014-11-26T14:34:04.556-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
ReplSetTest awaitReplication: checking secondaries against timestamp Timestamp(1417030425, 1)
m31101| 2014-11-26T14:34:04.557-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
m31101| 2014-11-26T14:34:04.557-0500 I QUERY [conn1] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1.0 } keyUpdates:0 reslen:563 0ms
ReplSetTest awaitReplication: checking secondary #1: ip-10-33-141-202:31101
m31101| 2014-11-26T14:34:04.558-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31101| 2014-11-26T14:34:04.558-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: secondary #1, ip-10-33-141-202:31101, is synced
ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp Timestamp(1417030425, 1)
m31100| 2014-11-26T14:34:04.558-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31101| 2014-11-26T14:34:04.559-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31100| 2014-11-26T14:34:04.559-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31101| 2014-11-26T14:34:04.559-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31100| 2014-11-26T14:34:04.560-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31101| 2014-11-26T14:34:04.560-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31101| 2014-11-26T14:34:04.560-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
2014-11-26T14:34:04.561-0500 I NETWORK starting new replica set monitor for replica set test-rs0 with seeds ip-10-33-141-202:31100,ip-10-33-141-202:31101
2014-11-26T14:34:04.561-0500 I NETWORK [ReplicaSetMonitorWatcher] starting
m31100| 2014-11-26T14:34:04.562-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38074 #5 (3 connections now open)
m31100| 2014-11-26T14:34:04.562-0500 I QUERY [conn5] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31200| 2014-11-26T14:34:04.562-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31201| 2014-11-26T14:34:04.563-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31200| 2014-11-26T14:34:04.563-0500 I QUERY [conn1] command admin.$cmd command: isMaster { isMaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31200| 2014-11-26T14:34:04.564-0500 I ACCESS [conn1] Unauthorized not authorized on admin to execute command { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762b2c5cf5867836012f34') } ], ordered: true }
m31200| 2014-11-26T14:34:04.564-0500 I QUERY [conn1] command admin.$cmd command: isMaster { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762b2c5cf5867836012f34') } ], ordered: true } keyUpdates:0 reslen:205 0ms
m31200| 2014-11-26T14:34:04.566-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D5351347143526543596F2F6C2B396E3377485235494F494B6B316D4863445462) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31200| 2014-11-26T14:34:04.579-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D5351347143526543596F2F6C2B396E3377485235494F494B6B316D48634454627256414B3764534D4F6D786578653258744E476969516263365866584E...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31200| 2014-11-26T14:34:04.579-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:34:04.579-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31201| 2014-11-26T14:34:04.581-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D7A67727371482B2B66487446512F356B6C63304379316B4D52456A6F67386957) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31201| 2014-11-26T14:34:04.593-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D7A67727371482B2B66487446512F356B6C63304379316B4D52456A6F673869573379505932795562644575454E776A725A792F37433546416D4859316E...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31201| 2014-11-26T14:34:04.594-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31201| 2014-11-26T14:34:04.594-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31200| 2014-11-26T14:34:04.594-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31201| 2014-11-26T14:34:04.595-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31200| 2014-11-26T14:34:04.595-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: starting: timestamp for primary, ip-10-33-141-202:31200, is Timestamp(1417030431, 1)
m31200| 2014-11-26T14:34:04.595-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
ReplSetTest awaitReplication: checking secondaries against timestamp Timestamp(1417030431, 1)
m31201| 2014-11-26T14:34:04.596-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
m31201| 2014-11-26T14:34:04.596-0500 I QUERY [conn1] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1.0 } keyUpdates:0 reslen:563 0ms
ReplSetTest awaitReplication: checking secondary #1: ip-10-33-141-202:31201
m31201| 2014-11-26T14:34:04.597-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31201| 2014-11-26T14:34:04.597-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: secondary #1, ip-10-33-141-202:31201, is synced
ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp Timestamp(1417030431, 1)
m31200| 2014-11-26T14:34:04.597-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31201| 2014-11-26T14:34:04.597-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31200| 2014-11-26T14:34:04.598-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31201| 2014-11-26T14:34:04.598-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31200| 2014-11-26T14:34:04.598-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31201| 2014-11-26T14:34:04.599-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31201| 2014-11-26T14:34:04.599-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
2014-11-26T14:34:04.599-0500 I NETWORK starting new replica set monitor for replica set test-rs1 with seeds ip-10-33-141-202:31200,ip-10-33-141-202:31201
m31200| 2014-11-26T14:34:04.600-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40532 #5 (3 connections now open)
m31200| 2014-11-26T14:34:04.600-0500 I QUERY [conn5] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31300| 2014-11-26T14:34:04.600-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:34:04.601-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:34:04.601-0500 I QUERY [conn1] command admin.$cmd command: isMaster { isMaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31300| 2014-11-26T14:34:04.602-0500 I ACCESS [conn1] Unauthorized not authorized on admin to execute command { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762b2c5cf5867836012f35') } ], ordered: true }
m31300| 2014-11-26T14:34:04.602-0500 I QUERY [conn1] command admin.$cmd command: isMaster { insert: "foo", documents: [ { x: 1.0, _id: ObjectId('54762b2c5cf5867836012f35') } ], ordered: true } keyUpdates:0 reslen:205 0ms
m31300| 2014-11-26T14:34:04.605-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D3171464A6E734C6B425141716A54697577533864385074615745366A3859386D) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31300| 2014-11-26T14:34:04.617-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D3171464A6E734C6B425141716A54697577533864385074615745366A3859386D5A6B387258666C686A444A706D74582B6F6B3939447061584B52494B52...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31300| 2014-11-26T14:34:04.618-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31300| 2014-11-26T14:34:04.618-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31301| 2014-11-26T14:34:04.619-0500 I QUERY [conn1] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4765534C397556535941537A6F533671416344683234386C79316F76706A564E) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31301| 2014-11-26T14:34:04.632-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4765534C397556535941537A6F533671416344683234386C79316F76706A564E4D6F584D6932484F796874706933466B4C61485874716E503676353731...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31301| 2014-11-26T14:34:04.632-0500 I ACCESS [conn1] Successfully authenticated as principal __system on local
m31301| 2014-11-26T14:34:04.632-0500 I QUERY [conn1] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31300| 2014-11-26T14:34:04.633-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:34:04.633-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:34:04.634-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: starting: timestamp for primary, ip-10-33-141-202:31300, is Timestamp(1417030439, 1)
m31300| 2014-11-26T14:34:04.634-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
ReplSetTest awaitReplication: checking secondaries against timestamp Timestamp(1417030439, 1)
m31301| 2014-11-26T14:34:04.635-0500 I QUERY [conn1] query local.system.replset planSummary: COLLSCAN ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:489 0ms
m31301| 2014-11-26T14:34:04.635-0500 I QUERY [conn1] command admin.$cmd command: replSetGetStatus { replSetGetStatus: 1.0 } keyUpdates:0 reslen:639 0ms
ReplSetTest awaitReplication: checking secondary #1: ip-10-33-141-202:31301
m31301| 2014-11-26T14:34:04.635-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
m31301| 2014-11-26T14:34:04.636-0500 I QUERY [conn1] query local.oplog.rs query: { query: {}, orderby: { $natural: -1.0 } } planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms
ReplSetTest awaitReplication: secondary #1, ip-10-33-141-202:31301, is synced
ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp Timestamp(1417030439, 1)
m31300| 2014-11-26T14:34:04.636-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31301| 2014-11-26T14:34:04.636-0500 I QUERY [conn1] command local.$cmd command: logout { logout: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms
m31300| 2014-11-26T14:34:04.636-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:34:04.637-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31300| 2014-11-26T14:34:04.637-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms
m31301| 2014-11-26T14:34:04.638-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
m31301| 2014-11-26T14:34:04.638-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms
2014-11-26T14:34:04.638-0500 I NETWORK starting new replica set monitor for replica set test-rs2 with seeds ip-10-33-141-202:31300,ip-10-33-141-202:31301
m31300| 2014-11-26T14:34:04.639-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60614 #5 (3 connections now open)
m31300| 2014-11-26T14:34:04.639-0500 I QUERY [conn5] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
Resetting db path '/data/db/test-config0'
2014-11-26T14:34:04.642-0500 I - shell: started program (sh9633): /data/mongo/mongod --port 29000 --dbpath /data/db/test-config0 --keyFile jstests/libs/key1 --configsvr --nopreallocj --setParameter enableTestCommands=1 --storageEngine wiredTiger
2014-11-26T14:34:04.643-0500 W NETWORK Failed to connect to 127.0.0.1:29000, reason: errno:111 Connection refused
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten] MongoDB starting : pid=9633 port=29000 dbpath=/data/db/test-config0 master=1 64-bit host=ip-10-33-141-202
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten]
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten] ** We suggest setting it to 'never'
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten]
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten]
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten] db version v2.8.0-rc2-pre-
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten] git version: 45790039049d7375beafe122622363d35ce990c2
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten] allocator: tcmalloc
m29000| 2014-11-26T14:34:04.670-0500 I CONTROL [initandlisten] options: { net: { port: 29000 }, nopreallocj: true, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db/test-config0", engine: "wiredTiger" } }
m29000| 2014-11-26T14:34:04.670-0500 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7G,session_max=20000,extensions=[local=(entry=index_collator_extension)],statistics=(all),log=(enabled=true,archive=true,path=journal),checkpoint=(wait=60,log_size=2GB),
m29000| 2014-11-26T14:34:04.715-0500 I REPL [initandlisten] ******
m29000| 2014-11-26T14:34:04.715-0500 I REPL [initandlisten] creating replication oplog of size: 5MB...
m29000| 2014-11-26T14:34:04.791-0500 I REPL [initandlisten] ******
m29000| 2014-11-26T14:34:04.799-0500 I NETWORK [initandlisten] waiting for connections on port 29000
m29000| 2014-11-26T14:34:04.843-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:59874 #1 (1 connection now open)
"ip-10-33-141-202:29000"
m29000| 2014-11-26T14:34:04.844-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41455 #2 (2 connections now open)
ShardingTest test : { "config" : "ip-10-33-141-202:29000", "shards" : [ connection to test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101, connection to test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201, connection to test-rs2/ip-10-33-141-202:31300,ip-10-33-141-202:31301 ] }
2014-11-26T14:34:04.846-0500 I - shell: started program (sh9651): /data/mongo/mongos --port 30999 --configdb ip-10-33-141-202:29000 --keyFile jstests/libs/key1 --chunkSize 50 --setParameter enableTestCommands=1
2014-11-26T14:34:04.847-0500 W NETWORK Failed to connect to 127.0.0.1:30999, reason: errno:111 Connection refused
m30999| 2014-11-26T14:34:04.855-0500 W SHARDING running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| 2014-11-26T14:34:04.871-0500 I SHARDING [mongosMain] MongoS version 2.8.0-rc2-pre- starting: pid=9651 port=30999 64-bit host=ip-10-33-141-202 (--help for usage)
m30999| 2014-11-26T14:34:04.871-0500 I CONTROL [mongosMain] db version v2.8.0-rc2-pre-
m30999| 2014-11-26T14:34:04.871-0500 I CONTROL [mongosMain] git version: 45790039049d7375beafe122622363d35ce990c2
m30999| 2014-11-26T14:34:04.871-0500 I CONTROL [mongosMain] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
m30999| 2014-11-26T14:34:04.871-0500 I CONTROL [mongosMain] build info: Linux ip-10-33-141-202 3.4.43-43.43.amzn1.x86_64 #1 SMP Mon May 6 18:04:41 UTC 2013 x86_64 BOOST_LIB_VERSION=1_49
m30999| 2014-11-26T14:34:04.871-0500 I CONTROL [mongosMain] allocator: tcmalloc
m30999| 2014-11-26T14:34:04.871-0500 I CONTROL [mongosMain] options: { net: { port: 30999 }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, sharding: { chunkSize: 50, configDB: "ip-10-33-141-202:29000" } }
m29000| 2014-11-26T14:34:04.872-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41457 #3 (3 connections now open)
m29000| 2014-11-26T14:34:04.887-0500 I ACCESS [conn3] Successfully authenticated as principal __system on local
m29000| 2014-11-26T14:34:04.888-0500 I STORAGE [conn3] CMD fsync: sync:1 lock:0
m29000| 2014-11-26T14:34:04.888-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41458 #4 (4 connections now open)
m29000| 2014-11-26T14:34:04.903-0500 I ACCESS [conn4] Successfully authenticated as principal __system on local
m29000| 2014-11-26T14:34:04.946-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41459 #5 (5 connections now open)
m29000| 2014-11-26T14:34:04.961-0500 I ACCESS [conn5] Successfully authenticated as principal __system on local
m30999| 2014-11-26T14:34:04.962-0500 I SHARDING [LockPinger] creating distributed lock ping thread for ip-10-33-141-202:29000 and process ip-10-33-141-202:30999:1417030444:1804289383 (sleeping for 30000ms)
m30999| 2014-11-26T14:34:04.981-0500 I SHARDING [LockPinger] cluster ip-10-33-141-202:29000 pinged successfully at Wed Nov 26 14:34:04 2014 by distributed lock pinger 'ip-10-33-141-202:29000/ip-10-33-141-202:30999:1417030444:1804289383', sleeping for 30000ms
m30999| 2014-11-26T14:34:04.981-0500 I SHARDING [mongosMain] distributed lock 'configUpgrade/ip-10-33-141-202:30999:1417030444:1804289383' acquired, ts : 54762b2cba042ce88d252a51
m30999| 2014-11-26T14:34:04.981-0500 I SHARDING [mongosMain] starting upgrade of config server from v0 to v6
m30999| 2014-11-26T14:34:04.981-0500 I SHARDING [mongosMain] starting next upgrade step from v0 to v6
m30999| 2014-11-26T14:34:04.981-0500 I SHARDING [mongosMain] about to log new metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:04-54762b2cba042ce88d252a52", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030444981), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 6 } }
m29000| 2014-11-26T14:34:04.991-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41460 #6 (6 connections now open)
m29000| 2014-11-26T14:34:05.006-0500 I ACCESS [conn6] Successfully authenticated as principal __system on local
m29000| 2014-11-26T14:34:05.006-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
2014-11-26T14:34:05.047-0500 W NETWORK Failed to connect to 127.0.0.1:30999, reason: errno:111 Connection refused
m30999| 2014-11-26T14:34:05.089-0500 I SHARDING [mongosMain] writing initial config version at v6
m29000| 2014-11-26T14:34:05.090-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:34:05.154-0500 I SHARDING [mongosMain] about to log new metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:05-54762b2dba042ce88d252a54", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030445154), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 6 } }
m29000| 2014-11-26T14:34:05.154-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0
m31301| 2014-11-26T14:34:05.170-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:158 0ms
m31300| 2014-11-26T14:34:05.170-0500 I REPL
[ReplicationExecutor] Member ip-10-33-141-202:31301 is now in state SECONDARY m30999| 2014-11-26T14:34:05.206-0500 I SHARDING [mongosMain] upgrade of config server to v6 successful m30999| 2014-11-26T14:34:05.206-0500 I SHARDING [mongosMain] distributed lock 'configUpgrade/ip-10-33-141-202:30999:1417030444:1804289383' unlocked. m29000| 2014-11-26T14:34:05.207-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m31301| 2014-11-26T14:34:05.209-0500 D REPL [rsBackgroundSync] replset bgsync fetch queue set to: 54762b27:1 0 m31301| 2014-11-26T14:34:05.209-0500 I REPL [ReplicationExecutor] could not find member to sync from m31300| 2014-11-26T14:34:05.209-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms 2014-11-26T14:34:05.248-0500 W NETWORK Failed to connect to 127.0.0.1:30999, reason: errno:111 Connection refused m29000| 2014-11-26T14:34:05.283-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:34:05.383-0500 I INDEX [conn6] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } m31201| 2014-11-26T14:34:05.383-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m29000| 2014-11-26T14:34:05.383-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:34:05.386-0500 I INDEX [conn6] build index done. scanned 0 total records. 
0 secs m29000| 2014-11-26T14:34:05.387-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m31200| 2014-11-26T14:34:05.411-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m29000| 2014-11-26T14:34:05.440-0500 I INDEX [conn6] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } m29000| 2014-11-26T14:34:05.440-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:34:05.444-0500 I INDEX [conn6] build index done. scanned 0 total records. 0 secs m29000| 2014-11-26T14:34:05.444-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 2014-11-26T14:34:05.448-0500 W NETWORK Failed to connect to 127.0.0.1:30999, reason: errno:111 Connection refused m29000| 2014-11-26T14:34:05.493-0500 I INDEX [conn6] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } m29000| 2014-11-26T14:34:05.493-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:34:05.497-0500 I INDEX [conn6] build index done. scanned 0 total records. 0 secs m29000| 2014-11-26T14:34:05.498-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:34:05.551-0500 I INDEX [conn6] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } m29000| 2014-11-26T14:34:05.551-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:34:05.556-0500 I INDEX [conn6] build index done. scanned 0 total records. 
0 secs m29000| 2014-11-26T14:34:05.556-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m31101| 2014-11-26T14:34:05.576-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31100| 2014-11-26T14:34:05.618-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m29000| 2014-11-26T14:34:05.632-0500 I INDEX [conn6] build index on: config.locks properties: { v: 1, unique: true, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } m29000| 2014-11-26T14:34:05.632-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:34:05.639-0500 I INDEX [conn6] build index done. scanned 1 total records. 0 secs m29000| 2014-11-26T14:34:05.639-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 2014-11-26T14:34:05.649-0500 W NETWORK Failed to connect to 127.0.0.1:30999, reason: errno:111 Connection refused m29000| 2014-11-26T14:34:05.801-0500 I QUERY [conn6] command admin.$cmd command: fsync { fsync: true } ntoreturn:1 keyUpdates:0 reslen:51 161ms m29000| 2014-11-26T14:34:05.811-0500 I INDEX [conn6] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } m29000| 2014-11-26T14:34:05.811-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:34:05.821-0500 I INDEX [conn6] build index done. scanned 1 total records. 
0 secs m29000| 2014-11-26T14:34:05.822-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 2014-11-26T14:34:05.849-0500 W NETWORK Failed to connect to 127.0.0.1:30999, reason: errno:111 Connection refused m29000| 2014-11-26T14:34:06.008-0500 I QUERY [conn6] command admin.$cmd command: fsync { fsync: true } ntoreturn:1 keyUpdates:0 reslen:51 186ms m29000| 2014-11-26T14:34:06.017-0500 I INDEX [conn6] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" } m29000| 2014-11-26T14:34:06.017-0500 I INDEX [conn6] building index using bulk method m29000| 2014-11-26T14:34:06.028-0500 I INDEX [conn6] build index done. scanned 1 total records. 0 secs m30999| 2014-11-26T14:34:06.029-0500 I SHARDING [Balancer] about to contact config servers and shards m30999| 2014-11-26T14:34:06.029-0500 I NETWORK [mongosMain] waiting for connections on port 30999 m30999| 2014-11-26T14:34:06.029-0500 I SHARDING [Balancer] config servers and shards contacted successfully m30999| 2014-11-26T14:34:06.029-0500 I SHARDING [Balancer] balancer id: ip-10-33-141-202:30999 started at Nov 26 14:34:06 m30999| 2014-11-26T14:34:06.049-0500 I SHARDING [Balancer] distributed lock 'balancer/ip-10-33-141-202:30999:1417030444:1804289383' acquired, ts : 54762b2eba042ce88d252a56 m30999| 2014-11-26T14:34:06.050-0500 I NETWORK [mongosMain] connection accepted from 127.0.0.1:39707 #1 (1 connection now open) m29000| 2014-11-26T14:34:06.051-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41467 #7 (7 connections now open) m29000| 2014-11-26T14:34:06.065-0500 I STORAGE [conn6] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:34:06.065-0500 I ACCESS [conn7] Successfully authenticated as principal __system on local m30999| 2014-11-26T14:34:06.066-0500 I SHARDING [conn1] couldn't find database [admin] in config db m29000| 2014-11-26T14:34:06.066-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41468 #8 (8 connections now open) 
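The two `[initandlisten]` warnings in the config-server startup block above are actionable. A sketch of the host settings the server is asking for, taken directly from the warning text (run as root before starting mongod; these settings do not persist across reboots unless added to an init script, and the exact limits below are this run's numbers, not universal values):

```shell
# Transparent huge pages: the log asks for 'never' instead of 'always'.
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# Soft rlimits: the warning wants processes >= 0.5 * files.
# This run had 64000 files but only 1024 processes, so raise to 32000.
ulimit -n 64000   # open files
ulimit -u 32000   # processes/threads
```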
m29000| 2014-11-26T14:34:06.081-0500 I ACCESS [conn8] Successfully authenticated as principal __system on local
m29000| 2014-11-26T14:34:06.081-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
m29000| 2014-11-26T14:34:06.287-0500 I QUERY [conn6] command admin.$cmd command: fsync { fsync: true } ntoreturn:1 keyUpdates:0 reslen:51 222ms
m30999| 2014-11-26T14:34:06.288-0500 I SHARDING [Balancer] distributed lock 'balancer/ip-10-33-141-202:30999:1417030444:1804289383' unlocked.
m29000| 2014-11-26T14:34:06.329-0500 I QUERY [conn8] command admin.$cmd command: fsync { fsync: true } ntoreturn:1 keyUpdates:0 reslen:51 248ms
m30999| 2014-11-26T14:34:06.338-0500 I SHARDING [conn1] put [admin] on: config:ip-10-33-141-202:29000
m30999| 2014-11-26T14:34:06.338-0500 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
m30999| 2014-11-26T14:34:06.339-0500 I ACCESS [conn1] authenticate db: admin { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
m29000| 2014-11-26T14:34:06.341-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
ShardingTest undefined going to add shard : test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101
m30999| 2014-11-26T14:34:06.399-0500 I NETWORK [conn1] starting new replica set monitor for replica set test-rs0 with seeds ip-10-33-141-202:31100,ip-10-33-141-202:31101
m30999| 2014-11-26T14:34:06.399-0500 I NETWORK [ReplicaSetMonitorWatcher] starting
m31100| 2014-11-26T14:34:06.400-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38093 #6 (4 connections now open)
m31100| 2014-11-26T14:34:06.402-0500 I QUERY [conn6] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4134694A6E41344C352B6A684C797A6D6C637432557061686A575553594A4B62) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:34:06.415-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4134694A6E41344C352B6A684C797A6D6C637432557061686A575553594A4B624179685076726C765A5A5155685136762B6E394B476D39336D62345176...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:34:06.415-0500 I ACCESS [conn6] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:34:06.415-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:34:06.415-0500 I QUERY [conn6] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31100| 2014-11-26T14:34:06.415-0500 I QUERY [conn6] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31100| 2014-11-26T14:34:06.416-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38094 #7 (5 connections now open)
m31100| 2014-11-26T14:34:06.417-0500 I QUERY [conn7] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D434D3464467755424A6C6330795A68703354365A517646474330435078515467) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31100| 2014-11-26T14:34:06.430-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D434D3464467755424A6C6330795A68703354365A5176464743304350785154676E68466C4556514B42546358496555317473454C456166393952554570...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:34:06.430-0500 I ACCESS [conn7] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:34:06.430-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:34:06.430-0500 I QUERY [conn7] command admin.$cmd command: getLastError { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:110 0ms
m31100| 2014-11-26T14:34:06.430-0500 I QUERY [conn7] command admin.$cmd command: getLastError { isdbgrid: 1 } ntoreturn:1 keyUpdates:0 reslen:113 0ms
m31100| 2014-11-26T14:34:06.431-0500 I QUERY [conn7] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31100| 2014-11-26T14:34:06.431-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:34:06.431-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1
m31100| 2014-11-26T14:34:06.431-0500 D STORAGE [conn7] looking up metadata for: local.oplog.rs @ 0:4
m31100| 2014-11-26T14:34:06.431-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:34:06.431-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2
m31100| 2014-11-26T14:34:06.431-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:34:06.431-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3
m31100| 2014-11-26T14:34:06.432-0500 I QUERY [conn7] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 0ms
m30999| 2014-11-26T14:34:06.432-0500 I SHARDING [conn1] going to add shard: { _id: "test-rs0", host: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101" }
m29000| 2014-11-26T14:34:06.432-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:34:06.489-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:06-54762b2eba042ce88d252a58", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030446489), what: "addShard", ns: "", details: { name: "test-rs0", host: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101" } }
m29000| 2014-11-26T14:34:06.489-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
{ "shardAdded" : "test-rs0", "ok" : 1 }
ShardingTest undefined going to add shard : test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201
m30999| 2014-11-26T14:34:06.563-0500 I NETWORK [conn1] starting new replica set monitor for replica set test-rs1 with seeds ip-10-33-141-202:31200,ip-10-33-141-202:31201
m31200| 2014-11-26T14:34:06.563-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40552 #6 (4 connections now open)
m31200| 2014-11-26T14:34:06.565-0500 I QUERY [conn6] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D525665504A63736152733453654B696D6344594E5550415349796D62465A4744) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31200| 2014-11-26T14:34:06.578-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D525665504A63736152733453654B696D6344594E5550415349796D62465A47442B2F2B5144584A74436F4B4F69495348695170326C7A715571467A564E...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31200| 2014-11-26T14:34:06.578-0500 I ACCESS [conn6] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:34:06.578-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31200| 2014-11-26T14:34:06.578-0500 I QUERY [conn6] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31200| 2014-11-26T14:34:06.578-0500 I QUERY [conn6] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31200| 2014-11-26T14:34:06.579-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40553 #7 (5 connections now open)
m31200| 2014-11-26T14:34:06.581-0500 I QUERY [conn7] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4A2B6D734D4A5774466E736231702F344C5563447452513743354D4B50752B45) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31200| 2014-11-26T14:34:06.594-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D4A2B6D734D4A5774466E736231702F344C5563447452513743354D4B50752B4567575A73727662635670336733616F61325544612B6C62456C78496134...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31200| 2014-11-26T14:34:06.594-0500 I ACCESS [conn7] Successfully authenticated as principal __system on local
m31200| 2014-11-26T14:34:06.594-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31200| 2014-11-26T14:34:06.594-0500 I QUERY [conn7] command admin.$cmd command: getLastError { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:110 0ms
m31200| 2014-11-26T14:34:06.594-0500 I QUERY [conn7] command admin.$cmd command: getLastError { isdbgrid: 1 } ntoreturn:1 keyUpdates:0 reslen:113 0ms
m31200| 2014-11-26T14:34:06.594-0500 I QUERY [conn7] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31200| 2014-11-26T14:34:06.595-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1
m31200| 2014-11-26T14:34:06.595-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1
m31200| 2014-11-26T14:34:06.595-0500 D STORAGE [conn7] looking up metadata for: local.oplog.rs @ 0:4
m31200| 2014-11-26T14:34:06.595-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:34:06.595-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2
m31200| 2014-11-26T14:34:06.595-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:34:06.595-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3
m31200| 2014-11-26T14:34:06.595-0500 I QUERY [conn7] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 0ms
m30999| 2014-11-26T14:34:06.596-0500 I SHARDING [conn1] going to add shard: { _id: "test-rs1", host: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201" }
m29000| 2014-11-26T14:34:06.596-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:34:06.647-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:06-54762b2eba042ce88d252a59", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030446647), what: "addShard", ns: "", details: { name: "test-rs1", host: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201" } }
m29000| 2014-11-26T14:34:06.647-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
{ "shardAdded" : "test-rs1", "ok" : 1 }
ShardingTest undefined going to add shard : test-rs2/ip-10-33-141-202:31300,ip-10-33-141-202:31301
m30999| 2014-11-26T14:34:06.707-0500 I NETWORK [conn1] starting new replica set monitor for replica set test-rs2 with seeds ip-10-33-141-202:31300,ip-10-33-141-202:31301
m31300| 2014-11-26T14:34:06.708-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60635 #6 (4 connections now open)
m31300| 2014-11-26T14:34:06.709-0500 I QUERY [conn6] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D7A49376273662F2B314C7A4452574764784E2F4F56564365595A533776306A46) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31300| 2014-11-26T14:34:06.722-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D7A49376273662F2B314C7A4452574764784E2F4F56564365595A533776306A467543444F42356D665A2B442B5A376658316B73436776475953516C7862...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31300| 2014-11-26T14:34:06.722-0500 I ACCESS [conn6] Successfully authenticated as principal __system on local
m31300| 2014-11-26T14:34:06.723-0500 I QUERY [conn6] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31300| 2014-11-26T14:34:06.723-0500 I QUERY [conn6] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31300| 2014-11-26T14:34:06.723-0500 I QUERY [conn6] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31300| 2014-11-26T14:34:06.723-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:60636 #7 (5 connections now open)
m31300| 2014-11-26T14:34:06.725-0500 I QUERY [conn7] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D462F65437A6B58597530394C70616636677A4F6D4D427856784F6D4B46593537) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31300| 2014-11-26T14:34:06.738-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D462F65437A6B58597530394C70616636677A4F6D4D427856784F6D4B465935374B55454865686B694D726F4470797A3175316E47684878325A4A567A57...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31300| 2014-11-26T14:34:06.738-0500 I ACCESS [conn7] Successfully authenticated as principal __system on local
m31300| 2014-11-26T14:34:06.738-0500 I QUERY [conn7] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31300| 2014-11-26T14:34:06.738-0500 I QUERY [conn7] command admin.$cmd command: getLastError { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:110 0ms
m31300| 2014-11-26T14:34:06.738-0500 I QUERY [conn7] command admin.$cmd command: getLastError { isdbgrid: 1 } ntoreturn:1 keyUpdates:0 reslen:113 0ms
m31300| 2014-11-26T14:34:06.738-0500 I QUERY [conn7] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31300| 2014-11-26T14:34:06.739-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:34:06.739-0500 D STORAGE [conn7] looking up metadata for: local.me @ 0:1
m31300| 2014-11-26T14:34:06.739-0500 D STORAGE [conn7] looking up metadata for: local.oplog.rs @ 0:4
m31300| 2014-11-26T14:34:06.739-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:34:06.739-0500 D STORAGE [conn7] looking up metadata for: local.startup_log @ 0:2
m31300| 2014-11-26T14:34:06.739-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:34:06.739-0500 D STORAGE [conn7] looking up metadata for: local.system.replset @ 0:3
m31300| 2014-11-26T14:34:06.739-0500 I QUERY [conn7] command admin.$cmd command: listDatabases { listDatabases: 1 } ntoreturn:1 keyUpdates:0 reslen:124 0ms
m30999| 2014-11-26T14:34:06.740-0500 I SHARDING [conn1] going to add shard: { _id: "test-rs2", host: "test-rs2/ip-10-33-141-202:31300,ip-10-33-141-202:31301" }
m29000| 2014-11-26T14:34:06.740-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:34:06.796-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:06-54762b2eba042ce88d252a5a", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030446796), what: "addShard", ns: "", details: { name: "test-rs2", host: "test-rs2/ip-10-33-141-202:31300,ip-10-33-141-202:31301" } }
m29000| 2014-11-26T14:34:06.796-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
{ "shardAdded" : "test-rs2", "ok" : 1 }
----
Setting up initial admin user...
----
m30999| 2014-11-26T14:34:06.870-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030444:1804289383' acquired, ts : 54762b2eba042ce88d252a5b
m29000| 2014-11-26T14:34:06.870-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
m29000| 2014-11-26T14:34:06.952-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
m29000| 2014-11-26T14:34:07.025-0500 I INDEX [conn8] build index on: admin.system.users properties: { v: 1, unique: true, key: { user: 1, db: 1 }, name: "user_1_db_1", ns: "admin.system.users" }
m29000| 2014-11-26T14:34:07.025-0500 I INDEX [conn8] building index using bulk method
m29000| 2014-11-26T14:34:07.029-0500 I INDEX [conn8] build index done. scanned 0 total records. 0 secs
m30999| 2014-11-26T14:34:07.031-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030444:1804289383' unlocked.
Successfully added user: { "user" : "adminUser", "roles" : [ "root" ] }
m30999| 2014-11-26T14:34:07.047-0500 I ACCESS [conn1] Successfully authenticated as principal adminUser on admin
m29000| 2014-11-26T14:34:07.048-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
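The addShard and admin-user steps recorded above correspond to the following mongo-shell commands, shown here as a sketch against this log's hostnames (they require a live mongos, port 30999 in this run, so they cannot be run standalone; the password is this example's placeholder, not a value from the test):

```javascript
// Run in a mongo shell connected to the mongos (port 30999 in this log).
sh.addShard("test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101");
sh.addShard("test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201");
sh.addShard("test-rs2/ip-10-33-141-202:31300,ip-10-33-141-202:31301");

// "Setting up initial admin user...": a root user named adminUser,
// matching the "Successfully added user" line above.
var admin = db.getSiblingDB("admin");
admin.createUser({ user: "adminUser", pwd: "<password>", roles: [ "root" ] });
admin.auth("adminUser", "<password>");
```

Each `sh.addShard` call returns a `{ "shardAdded" : ..., "ok" : 1 }` document like the ones echoed in the log.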
m30999| 2014-11-26T14:34:07.129-0500 I SHARDING [conn1] couldn't find database [fooUnsharded] in config db
m31100| 2014-11-26T14:34:07.130-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m31100| 2014-11-26T14:34:07.130-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m31200| 2014-11-26T14:34:07.131-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m31300| 2014-11-26T14:34:07.132-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m29000| 2014-11-26T14:34:07.132-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:34:07.163-0500 I SHARDING [conn1] put [fooUnsharded] on: test-rs0:test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101
m31100| 2014-11-26T14:34:07.164-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38099 #8 (6 connections now open)
m31100| 2014-11-26T14:34:07.166-0500 I QUERY [conn8] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D64702B6E7A397056792B53386133594B4166544C46534163735773376342684C) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31301| 2014-11-26T14:34:07.171-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31300", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31100| 2014-11-26T14:34:07.179-0500 I QUERY [conn8] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D64702B6E7A397056792B53386133594B4166544C46534163735773376342684C5A796A374B63775159664A2B5053304D7439615348332B5157696A3156...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:34:07.179-0500 I ACCESS [conn8] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:34:07.179-0500 I QUERY [conn8] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:34:07.179-0500 I QUERY [conn8] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms
m31100| 2014-11-26T14:34:07.180-0500 D STORAGE [conn8] stored meta data for fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.180-0500 D STORAGE [conn8] WiredTigerKVEngine::createRecordStore uri: table:collection-7--118320920160305333 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u
m31100| 2014-11-26T14:34:07.183-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.183-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.184-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.184-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.184-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.184-0500 D STORAGE [conn8] create uri: table:index-8--118320920160305333 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "fooUnsharded.barUnsharded" }
m31100| 2014-11-26T14:34:07.190-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.190-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.190-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.190-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.190-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.190-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.190-0500 D STORAGE [conn8] fooUnsharded.barUnsharded: clearing plan cache - collection info cache reset
m31100| 2014-11-26T14:34:07.190-0500 D STORAGE [conn8] looking up metadata for: fooUnsharded.barUnsharded @ 0:5
m31100| 2014-11-26T14:34:07.190-0500 I WRITE [conn8] insert fooUnsharded.barUnsharded query: { _id: ObjectId('54762b2f5cf5867836012f38'), some: "doc" } ninserted:1 keyUpdates:0 10ms
m31100| 2014-11-26T14:34:07.191-0500 I QUERY [conn8] command fooUnsharded.$cmd command: insert { insert: "barUnsharded", documents: [ { _id: ObjectId('54762b2f5cf5867836012f38'), some: "doc" } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 10ms
m31100| 2014-11-26T14:34:07.192-0500 I WRITE [conn8] remove fooUnsharded.barUnsharded ndeleted:1 keyUpdates:0 0ms
m31100| 2014-11-26T14:34:07.192-0500 I QUERY [conn8] command fooUnsharded.$cmd command: delete { delete: "barUnsharded", deletes: [ { q: {}, limit: 0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms
{ "ok" : 0, "errmsg" : "it is already the primary" }
m30999| 2014-11-26T14:34:07.194-0500 I SHARDING [conn1] couldn't find database [fooSharded] in config db
m31100| 2014-11-26T14:34:07.194-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m31100| 2014-11-26T14:34:07.195-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m31200| 2014-11-26T14:34:07.195-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m31300| 2014-11-26T14:34:07.196-0500 I QUERY [conn7] command admin.$cmd command: serverStatus { serverStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:14531 0ms
m29000| 2014-11-26T14:34:07.196-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
m31300| 2014-11-26T14:34:07.209-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms
m30999| 2014-11-26T14:34:07.254-0500 I SHARDING [conn1] put [fooSharded] on: test-rs0:test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101
m30999| 2014-11-26T14:34:07.255-0500 I COMMAND [conn1] enabling sharding on: fooSharded
m29000| 2014-11-26T14:34:07.255-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
m30999| 2014-11-26T14:34:07.307-0500 I COMMAND [conn1] Moving fooSharded primary from: test-rs0:test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101 to: test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201
m30999| 2014-11-26T14:34:07.308-0500 I SHARDING [conn1] distributed lock 'fooSharded-movePrimary/ip-10-33-141-202:30999:1417030444:1804289383' acquired, ts : 54762b2fba042ce88d252a5c
m30999| 2014-11-26T14:34:07.308-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:07-54762b2fba042ce88d252a5d", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030447308), what: "movePrimary.start", ns: "fooSharded", details: { database: "fooSharded", from: "test-rs0:test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", to: "test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", shardedCollections: [] } }
m29000| 2014-11-26T14:34:07.308-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0
m31200| 2014-11-26T14:34:07.377-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31100| 2014-11-26T14:34:07.377-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38100 #9 (7 connections now open)
m31200| 2014-11-26T14:34:07.377-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31100 (10.33.141.202)
m31100| 2014-11-26T14:34:07.379-0500 I QUERY [conn9] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D733030694871444C6546745A6D77335757426E3034534938334D445966485763) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31201| 2014-11-26T14:34:07.383-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms
m31100| 2014-11-26T14:34:07.392-0500 I QUERY [conn9] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D733030694871444C6546745A6D77335757426E3034534938334D44596648576363717179446F754B4E35516F71756F497334724F4D662B417243314565...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31100| 2014-11-26T14:34:07.392-0500 I ACCESS [conn9] Successfully authenticated as principal __system on local
m31100| 2014-11-26T14:34:07.392-0500 I QUERY [conn9] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31100| 2014-11-26T14:34:07.392-0500 I QUERY [conn9] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms
m31100| 2014-11-26T14:34:07.392-0500 I NETWORK [conn9] end connection 10.33.141.202:38100 (6 connections now open)
m31200| 2014-11-26T14:34:07.393-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31101| 2014-11-26T14:34:07.393-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:54037 #4 (3 connections now open)
m31200|
2014-11-26T14:34:07.393-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31101 (10.33.141.202) m31101| 2014-11-26T14:34:07.395-0500 I QUERY [conn4] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6968524D72626364612F3376415973623753422B546F36426E57762F2F5A4D77) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31101| 2014-11-26T14:34:07.407-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D6968524D72626364612F3376415973623753422B546F36426E57762F2F5A4D77354B5137617A4C5044636748504F75366456666E64787A446C72727A43...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31101| 2014-11-26T14:34:07.408-0500 I ACCESS [conn4] Successfully authenticated as principal __system on local m31101| 2014-11-26T14:34:07.408-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31101| 2014-11-26T14:34:07.408-0500 I QUERY [conn4] command admin.$cmd command: _isSelf { _isSelf: 1 } ntoreturn:1 keyUpdates:0 reslen:53 0ms m31200| 2014-11-26T14:34:07.408-0500 I NETWORK [conn7] starting new replica set monitor for replica set test-rs0 with seeds ip-10-33-141-202:31100,ip-10-33-141-202:31101 m31200| 2014-11-26T14:34:07.408-0500 D COMMAND [ReplicaSetMonitorWatcher] BackgroundJob starting: ReplicaSetMonitorWatcher m31200| 2014-11-26T14:34:07.408-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:31101 m31200| 2014-11-26T14:34:07.408-0500 I NETWORK [ReplicaSetMonitorWatcher] starting m31101| 2014-11-26T14:34:07.408-0500 I NETWORK [conn4] end connection 10.33.141.202:54037 (2 connections now open) m31200| 2014-11-26T14:34:07.408-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31101| 2014-11-26T14:34:07.409-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:54038 #5 (3 
connections now open) m31200| 2014-11-26T14:34:07.409-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31101 (10.33.141.202) m31200| 2014-11-26T14:34:07.409-0500 D NETWORK [conn7] connected connection! m31101| 2014-11-26T14:34:07.409-0500 I QUERY [conn5] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:34:07.409-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:31100 m31200| 2014-11-26T14:34:07.409-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31100| 2014-11-26T14:34:07.409-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38103 #10 (7 connections now open) m31200| 2014-11-26T14:34:07.409-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31100 (10.33.141.202) m31200| 2014-11-26T14:34:07.409-0500 D NETWORK [conn7] connected connection! m31100| 2014-11-26T14:34:07.410-0500 I QUERY [conn10] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31200| 2014-11-26T14:34:07.410-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:31100 m31200| 2014-11-26T14:34:07.410-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31100| 2014-11-26T14:34:07.410-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38104 #11 (8 connections now open) m31200| 2014-11-26T14:34:07.410-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31100 (10.33.141.202) m31200| 2014-11-26T14:34:07.410-0500 D NETWORK [conn7] connected connection! 
m31100| 2014-11-26T14:34:07.412-0500 I QUERY [conn11] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D3555494F6D6B31656A6545384A557653706A492F506F7059574253685A4B5072) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31200| 2014-11-26T14:34:07.413-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31100| 2014-11-26T14:34:07.425-0500 I QUERY [conn11] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D3555494F6D6B31656A6545384A557653706A492F506F7059574253685A4B507275576D6D76796747523537614C2F784F4565516B5364314F7864433168...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:34:07.425-0500 I ACCESS [conn11] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:34:07.425-0500 I QUERY [conn11] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31100| 2014-11-26T14:34:07.425-0500 I QUERY [conn11] command fooSharded.$cmd command: listCollections { listCollections: 1, filter: {} } ntoreturn:1 keyUpdates:0 reslen:55 0ms m31200| 2014-11-26T14:34:07.425-0500 I QUERY [conn7] command fooSharded.$cmd command: clone { clone: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", collsToIgnore: [] } ntoreturn:1 keyUpdates:0 reslen:55 49ms m31100| 2014-11-26T14:34:07.426-0500 I NETWORK [conn11] end connection 10.33.141.202:38104 (7 connections now open) m29000| 2014-11-26T14:34:07.426-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0 m30999| 2014-11-26T14:34:07.473-0500 I COMMAND [conn1] movePrimary dropping database on test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101, no sharded collections in fooSharded m31100| 2014-11-26T14:34:07.473-0500 
I QUERY [conn7] command fooSharded.$cmd command: dropDatabase { dropDatabase: 1 } ntoreturn:1 keyUpdates:0 reslen:37 0ms m30999| 2014-11-26T14:34:07.473-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:07-54762b2fba042ce88d252a5e", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030447473), what: "movePrimary", ns: "fooSharded", details: { database: "fooSharded", from: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", to: "test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", shardedCollections: [] } } m29000| 2014-11-26T14:34:07.474-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0 m30999| 2014-11-26T14:34:07.519-0500 I SHARDING [conn1] distributed lock 'fooSharded-movePrimary/ip-10-33-141-202:30999:1417030444:1804289383' unlocked. { "primary " : "test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", "ok" : 1 } m31200| 2014-11-26T14:34:07.520-0500 I QUERY [conn7] command fooSharded.$cmd command: listCollections { listCollections: 1, filter: { name: "barSharded" } } ntoreturn:1 keyUpdates:0 reslen:55 0ms m31200| 2014-11-26T14:34:07.520-0500 I QUERY [conn7] command fooSharded.$cmd command: listIndexes { listIndexes: "barSharded" } ntoreturn:1 keyUpdates:0 reslen:71 0ms m31200| 2014-11-26T14:34:07.520-0500 I QUERY [conn7] command fooSharded.$cmd command: count { count: "barSharded", query: {} } planSummary: EOF ntoreturn:1 keyUpdates:0 reslen:44 0ms m31200| 2014-11-26T14:34:07.521-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40562 #8 (6 connections now open) m31200| 2014-11-26T14:34:07.523-0500 I QUERY [conn8] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D35647754387A4B4749624E6A2B58596F55766A3671665863583337704F4F4656) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31200| 2014-11-26T14:34:07.535-0500 I QUERY [conn8] command local.$cmd command: saslContinue { 
saslContinue: 1, payload: BinData(0, 633D626977732C723D35647754387A4B4749624E6A2B58596F55766A3671665863583337704F4F465639507A7236643272617A462B725A48754A5336546D69445A7145597666...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31200| 2014-11-26T14:34:07.536-0500 I ACCESS [conn8] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:34:07.536-0500 I QUERY [conn8] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31200| 2014-11-26T14:34:07.536-0500 I QUERY [conn8] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31200| 2014-11-26T14:34:07.536-0500 D STORAGE [conn8] stored meta data for fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.536-0500 D STORAGE [conn8] WiredTigerKVEngine::createRecordStore uri: table:collection-7--4532563397751070484 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31200| 2014-11-26T14:34:07.547-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.547-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.547-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.547-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.547-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.547-0500 D STORAGE [conn8] create uri: table:index-8--4532563397751070484 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "fooSharded.barSharded" } m31200| 2014-11-26T14:34:07.552-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 
2014-11-26T14:34:07.552-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.552-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.552-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.552-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.552-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.552-0500 D STORAGE [conn8] fooSharded.barSharded: clearing plan cache - collection info cache reset m31200| 2014-11-26T14:34:07.553-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.553-0500 D STORAGE [conn8] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:07.553-0500 I WRITE [conn8] insert fooSharded.system.indexes query: { ns: "fooSharded.barSharded", key: { _id: 1.0 }, name: "_id_1" } ninserted:0 keyUpdates:0 16ms m31200| 2014-11-26T14:34:07.553-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "system.indexes", documents: [ { ns: "fooSharded.barSharded", key: { _id: 1.0 }, name: "_id_1" } ], ordered: true, metadata: { shardName: "test-rs1", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 16ms m31200| 2014-11-26T14:34:07.553-0500 I QUERY [conn7] command fooSharded.$cmd command: count { count: "barSharded", query: {} } planSummary: COUNT ntoreturn:1 keyUpdates:0 reslen:44 0ms m30999| 2014-11-26T14:34:07.553-0500 I COMMAND [conn1] CMD: shardcollection: { shardCollection: "fooSharded.barSharded", key: { _id: 1.0 } } m30999| 2014-11-26T14:34:07.553-0500 I SHARDING [conn1] enable sharding on: fooSharded.barSharded with shard key: { _id: 1.0 } m30999| 2014-11-26T14:34:07.553-0500 I SHARDING [conn1] about to log metadata event: { _id: 
"ip-10-33-141-202-2014-11-26T19:34:07-54762b2fba042ce88d252a5f", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030447553), what: "shardCollection.start", ns: "fooSharded.barSharded", details: { shardKey: { _id: 1.0 }, collection: "fooSharded.barSharded", primary: "test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", initShards: [], numChunks: 1 } } m29000| 2014-11-26T14:34:07.554-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0 m31101| 2014-11-26T14:34:07.576-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31100| 2014-11-26T14:34:07.618-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31101| 2014-11-26T14:34:07.619-0500 I REPL [ReplicationExecutor] syncing from: ip-10-33-141-202:31100 m31101| 2014-11-26T14:34:07.620-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31100| 2014-11-26T14:34:07.620-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38106 #12 (8 connections now open) m31101| 2014-11-26T14:34:07.620-0500 D NETWORK [rsBackgroundSync] connected to server ip-10-33-141-202:31100 (10.33.141.202) m31100| 2014-11-26T14:34:07.622-0500 I QUERY [conn12] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D467949482F6C4B646B39775570793234564B4D684F565174494D70544664524F) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31200| 2014-11-26T14:34:07.628-0500 I QUERY [conn7] command fooSharded.$cmd command: count { count: "barSharded", query: {} } planSummary: COUNT ntoreturn:1 keyUpdates:0 reslen:44 0ms m30999| 2014-11-26T14:34:07.628-0500 I SHARDING [conn1] going to create 1 chunk(s) for: 
fooSharded.barSharded using new epoch 54762b2fba042ce88d252a60 m29000| 2014-11-26T14:34:07.629-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0 m31100| 2014-11-26T14:34:07.635-0500 I QUERY [conn12] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D467949482F6C4B646B39775570793234564B4D684F565174494D70544664524F6246395956637633566434764652624F33514F4E50486A5A6663792B74...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:34:07.635-0500 I ACCESS [conn12] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:34:07.635-0500 I QUERY [conn12] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31100| 2014-11-26T14:34:07.635-0500 I QUERY [conn12] query local.oplog.rs planSummary: COLLSCAN ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:1 keyUpdates:0 nreturned:1 reslen:106 0ms m31101| 2014-11-26T14:34:07.635-0500 D REPL [SyncSourceFeedback] resetting connection in sync source feedback m31101| 2014-11-26T14:34:07.635-0500 I REPL [SyncSourceFeedback] replset setting syncSourceFeedback to ip-10-33-141-202:31100 m31100| 2014-11-26T14:34:07.635-0500 I QUERY [conn12] query local.oplog.rs query: { ts: { $gte: Timestamp 1417030425000|1 } } planSummary: COLLSCAN cursorid:18566677883 ntoreturn:0 ntoskip:0 nscanned:0 nscannedObjects:4 keyUpdates:0 nreturned:4 reslen:436 0ms m31101| 2014-11-26T14:34:07.636-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31101| 2014-11-26T14:34:07.636-0500 D STORAGE [repl writer worker 15] create collection fooUnsharded.barUnsharded {} m31100| 2014-11-26T14:34:07.636-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38107 #13 (9 connections now open) m31101| 2014-11-26T14:34:07.636-0500 D STORAGE [repl writer worker 15] stored meta data for fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:34:07.636-0500 D NETWORK 
[SyncSourceFeedback] connected to server ip-10-33-141-202:31100 (10.33.141.202) m31101| 2014-11-26T14:34:07.636-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createRecordStore uri: table:collection-9--377709408879965486 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31100| 2014-11-26T14:34:07.638-0500 I QUERY [conn13] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D3768574F4C6F4B6158517A4945474B785143383535392F6A7A2F703465624443) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31101| 2014-11-26T14:34:07.643-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:34:07.644-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:34:07.644-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:34:07.644-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:34:07.644-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:34:07.644-0500 D STORAGE [repl writer worker 15] create uri: table:index-10--377709408879965486 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "fooUnsharded.barUnsharded" } m31101| 2014-11-26T14:34:07.651-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:34:07.651-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:34:07.651-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 
2014-11-26T14:34:07.651-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:34:07.651-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:34:07.651-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31101| 2014-11-26T14:34:07.651-0500 D STORAGE [repl writer worker 15] fooUnsharded.barUnsharded: clearing plan cache - collection info cache reset m31101| 2014-11-26T14:34:07.651-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooUnsharded.barUnsharded @ 0:6 m31100| 2014-11-26T14:34:07.652-0500 I QUERY [conn13] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D3768574F4C6F4B6158517A4945474B785143383535392F6A7A2F703465624443524536533759684E716B32735972302F474958462B6B312B703967666A...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:34:07.652-0500 I ACCESS [conn13] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:34:07.652-0500 I QUERY [conn13] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31101| 2014-11-26T14:34:07.652-0500 D REPL [SyncSourceFeedback] handshaking upstream updater m31100| 2014-11-26T14:34:07.652-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, handshake: { handshake: ObjectId('54762b19285250b145f645f2'), member: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31100| 2014-11-26T14:34:07.652-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b19285250b145f645f2'), optime: 
Timestamp 1417030447000|3, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31100| 2014-11-26T14:34:07.652-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b19285250b145f645f2'), optime: Timestamp 1417030447000|3, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m30999| 2014-11-26T14:34:07.684-0500 I SHARDING [conn1] ChunkManager: time to load chunks for fooSharded.barSharded: 0ms sequenceNumber: 2 version: 1|0||54762b2fba042ce88d252a60 based on: (empty) m29000| 2014-11-26T14:34:07.685-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:34:07.769-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0 m31200| 2014-11-26T14:34:07.821-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40565 #9 (7 connections now open) m31200| 2014-11-26T14:34:07.822-0500 I QUERY [conn9] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D525674372B416B456A454D7A69574C59495775507A6A416E58492F794C346441) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31200| 2014-11-26T14:34:07.835-0500 I QUERY [conn9] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D525674372B416B456A454D7A69574C59495775507A6A416E58492F794C346441475374437335774248417A434C377A717536305731594658756A4D3457...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31200| 2014-11-26T14:34:07.836-0500 I ACCESS [conn9] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:34:07.836-0500 I QUERY [conn9] command 
local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31200| 2014-11-26T14:34:07.836-0500 D SHARDING [conn9] entering shard mode for connection m31200| 2014-11-26T14:34:07.836-0500 I QUERY [conn9] command admin.$cmd command: setShardVersion { setShardVersion: "fooSharded.barSharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs1", shardHost: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", version: Timestamp 1000|0, versionEpoch: ObjectId('54762b2fba042ce88d252a60') } ntoreturn:1 keyUpdates:0 reslen:92 0ms m31200| 2014-11-26T14:34:07.836-0500 I SHARDING [conn9] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers m31200| 2014-11-26T14:34:07.837-0500 D SHARDING [conn9] config string : ip-10-33-141-202:29000 m31200| 2014-11-26T14:34:07.837-0500 I SHARDING [conn9] remote client 10.33.141.202:40565 initialized this host (test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201) as shard test-rs1 m31200| 2014-11-26T14:34:07.837-0500 D SHARDING [conn9] metadata change requested for fooSharded.barSharded, from shard version 0|0||000000000000000000000000 to 1|0||54762b2fba042ce88d252a60, need to verify with config server m31200| 2014-11-26T14:34:07.837-0500 I SHARDING [conn9] remotely refreshing metadata for fooSharded.barSharded with requested shard version 1|0||54762b2fba042ce88d252a60, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000 m31200| 2014-11-26T14:34:07.837-0500 D NETWORK [conn9] creating new connection to:ip-10-33-141-202:29000 m31200| 2014-11-26T14:34:07.837-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m29000| 2014-11-26T14:34:07.838-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41485 #9 (9 connections now open) m31200| 2014-11-26T14:34:07.838-0500 D NETWORK [conn9] connected to server 
ip-10-33-141-202:29000 (10.33.141.202) m31200| 2014-11-26T14:34:07.838-0500 D NETWORK [conn9] connected connection! m29000| 2014-11-26T14:34:07.852-0500 I ACCESS [conn9] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:34:07.853-0500 I SHARDING [conn9] collection fooSharded.barSharded was previously unsharded, new metadata loaded with shard version 1|0||54762b2fba042ce88d252a60 m31200| 2014-11-26T14:34:07.853-0500 I SHARDING [conn9] collection version was loaded at version 1|0||54762b2fba042ce88d252a60, took 15ms m31200| 2014-11-26T14:34:07.853-0500 I QUERY [conn9] command admin.$cmd command: setShardVersion { setShardVersion: "fooSharded.barSharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs1", shardHost: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", version: Timestamp 1000|0, versionEpoch: ObjectId('54762b2fba042ce88d252a60'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:146 16ms m30999| 2014-11-26T14:34:07.853-0500 I SHARDING [conn1] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:07-54762b2fba042ce88d252a61", server: "ip-10-33-141-202", clientAddr: "N/A", time: new Date(1417030447853), what: "shardCollection", ns: "fooSharded.barSharded", details: { version: "1|0||54762b2fba042ce88d252a60" } } m29000| 2014-11-26T14:34:07.853-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0 m30999| 2014-11-26T14:34:07.915-0500 I COMMAND [conn1] splitting chunk [{ _id: MinKey },{ _id: MaxKey }) in collection fooSharded.barSharded on shard test-rs1 m31200| 2014-11-26T14:34:07.915-0500 I SHARDING [conn7] received splitChunk request: { splitChunk: "fooSharded.barSharded", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "test-rs1", splitKeys: [ { _id: 0.0 } ], shardId: "fooSharded.barSharded-_id_MinKey", configdb: "ip-10-33-141-202:29000", epoch: ObjectId('54762b2fba042ce88d252a60') } m31200| 2014-11-26T14:34:07.915-0500 D SHARDING [conn7] created new distributed lock for 
fooSharded.barSharded on ip-10-33-141-202:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m31200| 2014-11-26T14:34:07.915-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:29000 m31200| 2014-11-26T14:34:07.916-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m29000| 2014-11-26T14:34:07.916-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41486 #10 (10 connections now open) m31200| 2014-11-26T14:34:07.916-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:29000 (10.33.141.202) m31200| 2014-11-26T14:34:07.916-0500 D NETWORK [conn7] connected connection! m29000| 2014-11-26T14:34:07.931-0500 I ACCESS [conn10] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:34:07.932-0500 D SHARDING [conn7] trying to acquire new distributed lock for fooSharded.barSharded on ip-10-33-141-202:29000 ( lock timeout : 900000, ping interval : 30000, process : ip-10-33-141-202:31200:1417030447:1473176912 ) m31200| 2014-11-26T14:34:07.932-0500 I SHARDING [LockPinger] creating distributed lock ping thread for ip-10-33-141-202:29000 and process ip-10-33-141-202:31200:1417030447:1473176912 (sleeping for 30000ms) m31200| 2014-11-26T14:34:07.932-0500 D NETWORK [LockPinger] creating new connection to:ip-10-33-141-202:29000 m31200| 2014-11-26T14:34:07.932-0500 D SHARDING [conn7] inserting initial doc in config.locks for lock fooSharded.barSharded m31200| 2014-11-26T14:34:07.932-0500 D SHARDING [conn7] about to acquire distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030447:1473176912' m31200| 2014-11-26T14:34:07.933-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m29000| 2014-11-26T14:34:07.933-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41487 #11 (11 connections now open) m31200| 2014-11-26T14:34:07.933-0500 D NETWORK [LockPinger] connected to server ip-10-33-141-202:29000 (10.33.141.202) m31200| 2014-11-26T14:34:07.933-0500 D NETWORK 
[LockPinger] connected connection! m31200| 2014-11-26T14:34:07.933-0500 I SHARDING [conn7] distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030447:1473176912' acquired, ts : 54762b2f2c08972cefc9db69 m31200| 2014-11-26T14:34:07.933-0500 I SHARDING [conn7] remotely refreshing metadata for fooSharded.barSharded based on current shard version 1|0||54762b2fba042ce88d252a60, current metadata version is 1|0||54762b2fba042ce88d252a60 m31200| 2014-11-26T14:34:07.934-0500 I SHARDING [conn7] metadata of collection fooSharded.barSharded already up to date (shard version : 1|0||54762b2fba042ce88d252a60, took 0ms) m31200| 2014-11-26T14:34:07.934-0500 I SHARDING [conn7] splitChunk accepted at version 1|0||54762b2fba042ce88d252a60 m31200| 2014-11-26T14:34:07.934-0500 D SHARDING [conn7] before split on { min: { _id: MinKey }, max: { _id: MaxKey } } m31200| 2014-11-26T14:34:07.934-0500 D SHARDING [conn7] splitChunk update: { applyOps: [ { op: "u", b: true, ns: "config.chunks", o: { _id: "fooSharded.barSharded-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('54762b2fba042ce88d252a60'), ns: "fooSharded.barSharded", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "test-rs1" }, o2: { _id: "fooSharded.barSharded-_id_MinKey" } }, { op: "u", b: true, ns: "config.chunks", o: { _id: "fooSharded.barSharded-_id_0.0", lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('54762b2fba042ce88d252a60'), ns: "fooSharded.barSharded", min: { _id: 0.0 }, max: { _id: MaxKey }, shard: "test-rs1" }, o2: { _id: "fooSharded.barSharded-_id_0.0" } } ], preCondition: [ { ns: "config.chunks", q: { query: { ns: "fooSharded.barSharded" }, orderby: { lastmod: -1 } }, res: { lastmod: Timestamp 1000|0 } } ] } m31200| 2014-11-26T14:34:07.935-0500 I SHARDING [conn7] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:07-54762b2f2c08972cefc9db6a", server: "ip-10-33-141-202", clientAddr: "10.33.141.202:40553", time: new Date(1417030447935), what: "split", ns: 
"fooSharded.barSharded", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey } }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('54762b2fba042ce88d252a60') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('54762b2fba042ce88d252a60') } } } m31200| 2014-11-26T14:34:07.935-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:29000 m31200| 2014-11-26T14:34:07.935-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m29000| 2014-11-26T14:34:07.935-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41488 #12 (12 connections now open) m31200| 2014-11-26T14:34:07.935-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:29000 (10.33.141.202) m31200| 2014-11-26T14:34:07.935-0500 D NETWORK [conn7] connected connection! m29000| 2014-11-26T14:34:07.949-0500 I ACCESS [conn11] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:34:07.950-0500 I SHARDING [LockPinger] cluster ip-10-33-141-202:29000 pinged successfully at Wed Nov 26 14:34:07 2014 by distributed lock pinger 'ip-10-33-141-202:29000/ip-10-33-141-202:31200:1417030447:1473176912', sleeping for 30000ms m29000| 2014-11-26T14:34:07.951-0500 I ACCESS [conn12] Successfully authenticated as principal __system on local m29000| 2014-11-26T14:34:07.951-0500 I STORAGE [conn12] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:34:08.090-0500 I QUERY [conn12] command admin.$cmd command: fsync { fsync: true } ntoreturn:1 keyUpdates:0 reslen:51 139ms m31200| 2014-11-26T14:34:08.091-0500 I SHARDING [conn7] distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030447:1473176912' unlocked. 
m31200| 2014-11-26T14:34:08.091-0500 I QUERY [conn7] command admin.$cmd command: splitChunk { splitChunk: "fooSharded.barSharded", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "test-rs1", splitKeys: [ { _id: 0.0 } ], shardId: "fooSharded.barSharded-_id_MinKey", configdb: "ip-10-33-141-202:29000", epoch: ObjectId('54762b2fba042ce88d252a60') } ntoreturn:1 keyUpdates:0 reslen:37 175ms m30999| 2014-11-26T14:34:08.091-0500 I SHARDING [conn1] ChunkManager: time to load chunks for fooSharded.barSharded: 0ms sequenceNumber: 3 version: 1|2||54762b2fba042ce88d252a60 based on: 1|0||54762b2fba042ce88d252a60 m30999| 2014-11-26T14:34:08.092-0500 I COMMAND [conn1] CMD: movechunk: { moveChunk: "fooSharded.barSharded", find: { _id: -1.0 }, to: "test-rs0" } m30999| 2014-11-26T14:34:08.092-0500 I SHARDING [conn1] moving chunk ns: fooSharded.barSharded moving ( ns: fooSharded.barSharded, shard: test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201, lastmod: 1|1||000000000000000000000000, min: { _id: MinKey }, max: { _id: 0.0 }) test-rs1:test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201 -> test-rs0:test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101 m31200| 2014-11-26T14:34:08.093-0500 D SHARDING [conn7] found 3 shards listed on config server(s): ip-10-33-141-202:29000 (10.33.141.202) m31200| 2014-11-26T14:34:08.093-0500 I SHARDING [conn7] received moveChunk request: { moveChunk: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", to: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", fromShard: "test-rs1", toShard: "test-rs0", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "fooSharded.barSharded-_id_MinKey", configdb: "ip-10-33-141-202:29000", secondaryThrottle: true, waitForDelete: false, maxTimeMS: 0, epoch: ObjectId('54762b2fba042ce88d252a60') } m31200| 2014-11-26T14:34:08.093-0500 D SHARDING [conn7] created new distributed lock for fooSharded.barSharded on 
ip-10-33-141-202:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m31200| 2014-11-26T14:34:08.093-0500 D SHARDING [conn7] trying to acquire new distributed lock for fooSharded.barSharded on ip-10-33-141-202:29000 ( lock timeout : 900000, ping interval : 30000, process : ip-10-33-141-202:31200:1417030447:1473176912 ) m31200| 2014-11-26T14:34:08.093-0500 D SHARDING [conn7] about to acquire distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030447:1473176912' m31200| 2014-11-26T14:34:08.094-0500 I SHARDING [conn7] distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030447:1473176912' acquired, ts : 54762b302c08972cefc9db6b m31200| 2014-11-26T14:34:08.094-0500 I SHARDING [conn7] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:08-54762b302c08972cefc9db6c", server: "ip-10-33-141-202", clientAddr: "10.33.141.202:40553", time: new Date(1417030448094), what: "moveChunk.start", ns: "fooSharded.barSharded", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "test-rs1", to: "test-rs0" } } m29000| 2014-11-26T14:34:08.094-0500 I STORAGE [conn12] CMD fsync: sync:1 lock:0 m31200| 2014-11-26T14:34:08.163-0500 I SHARDING [conn7] remotely refreshing metadata for fooSharded.barSharded based on current shard version 1|2||54762b2fba042ce88d252a60, current metadata version is 1|2||54762b2fba042ce88d252a60 m31200| 2014-11-26T14:34:08.163-0500 I SHARDING [conn7] metadata of collection fooSharded.barSharded already up to date (shard version : 1|2||54762b2fba042ce88d252a60, took 0ms) m31200| 2014-11-26T14:34:08.163-0500 I SHARDING [conn7] moveChunk request accepted at version 1|2||54762b2fba042ce88d252a60 m31200| 2014-11-26T14:34:08.163-0500 I SHARDING [conn7] moveChunk number of documents: 0 m31200| 2014-11-26T14:34:08.163-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:31100 m31200| 2014-11-26T14:34:08.164-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31100| 
2014-11-26T14:34:08.164-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38113 #14 (10 connections now open) m31200| 2014-11-26T14:34:08.164-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31100 (10.33.141.202) m31200| 2014-11-26T14:34:08.164-0500 D NETWORK [conn7] connected connection! m31100| 2014-11-26T14:34:08.166-0500 I QUERY [conn14] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6D497163567832334D46565365522B7A4C325048616E4A727536584E5A56722F) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31100| 2014-11-26T14:34:08.178-0500 I QUERY [conn14] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D6D497163567832334D46565365522B7A4C325048616E4A727536584E5A56722F716D6536393848635935594D495576376A7767586E5661667634514854...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:34:08.179-0500 I ACCESS [conn14] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:34:08.179-0500 I QUERY [conn14] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31100| 2014-11-26T14:34:08.179-0500 I SHARDING [conn14] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers m31100| 2014-11-26T14:34:08.179-0500 D SHARDING [conn14] config string : ip-10-33-141-202:29000 m31100| 2014-11-26T14:34:08.179-0500 I SHARDING [conn14] remote client 10.33.141.202:38113 initialized this host as shard test-rs0 m31100| 2014-11-26T14:34:08.179-0500 I SHARDING [conn14] remotely refreshing metadata for fooSharded.barSharded, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000 m31100| 2014-11-26T14:34:08.179-0500 D NETWORK [conn14] creating new connection to:ip-10-33-141-202:29000 
m31100| 2014-11-26T14:34:08.180-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m29000| 2014-11-26T14:34:08.180-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41490 #13 (13 connections now open) m31100| 2014-11-26T14:34:08.180-0500 D NETWORK [conn14] connected to server ip-10-33-141-202:29000 (10.33.141.202) m31100| 2014-11-26T14:34:08.180-0500 D NETWORK [conn14] connected connection! m29000| 2014-11-26T14:34:08.194-0500 I ACCESS [conn13] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:34:08.195-0500 I SHARDING [conn14] collection fooSharded.barSharded was previously unsharded, new metadata loaded with shard version 0|0||54762b2fba042ce88d252a60 m31100| 2014-11-26T14:34:08.195-0500 I SHARDING [conn14] collection version was loaded at version 1|2||54762b2fba042ce88d252a60, took 15ms m31100| 2014-11-26T14:34:08.195-0500 I QUERY [conn14] command admin.$cmd command: _recvChunkStart { _recvChunkStart: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", fromShardName: "test-rs1", toShardName: "test-rs0", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, configServer: "ip-10-33-141-202:29000", secondaryThrottle: true } ntoreturn:1 keyUpdates:0 reslen:47 16ms m31100| 2014-11-26T14:34:08.195-0500 I SHARDING [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 0.0 } for collection fooSharded.barSharded from test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201 at epoch 54762b2fba042ce88d252a60 m31100| 2014-11-26T14:34:08.195-0500 I NETWORK [migrateThread] starting new replica set monitor for replica set test-rs1 with seeds ip-10-33-141-202:31200,ip-10-33-141-202:31201 m31100| 2014-11-26T14:34:08.195-0500 D NETWORK [migrateThread] creating new connection to:ip-10-33-141-202:31200 m31100| 2014-11-26T14:34:08.195-0500 D COMMAND [ReplicaSetMonitorWatcher] BackgroundJob starting: ReplicaSetMonitorWatcher m31100| 
2014-11-26T14:34:08.195-0500 I NETWORK [ReplicaSetMonitorWatcher] starting m31100| 2014-11-26T14:34:08.196-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31200| 2014-11-26T14:34:08.196-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40572 #10 (8 connections now open) m31100| 2014-11-26T14:34:08.196-0500 D NETWORK [migrateThread] connected to server ip-10-33-141-202:31200 (10.33.141.202) m31100| 2014-11-26T14:34:08.196-0500 D NETWORK [migrateThread] connected connection! m31100| 2014-11-26T14:34:08.196-0500 I QUERY [conn14] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:314 0ms m31200| 2014-11-26T14:34:08.196-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| 2014-11-26T14:34:08.198-0500 I QUERY [conn10] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D57793165626F7144707642766C7475663467786A4B76446C66744A786E666935) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31100| 2014-11-26T14:34:08.199-0500 I QUERY [conn14] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:314 0ms m31200| 2014-11-26T14:34:08.199-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| 2014-11-26T14:34:08.203-0500 I QUERY [conn14] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 
1 } ntoreturn:1 keyUpdates:0 reslen:314 0ms m31200| 2014-11-26T14:34:08.203-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| 2014-11-26T14:34:08.211-0500 I QUERY [conn10] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D57793165626F7144707642766C7475663467786A4B76446C66744A786E666935424439424D58477632474F5672337476755744644E7049566855566D70...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31200| 2014-11-26T14:34:08.211-0500 I ACCESS [conn10] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:34:08.211-0500 I QUERY [conn10] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31100| 2014-11-26T14:34:08.211-0500 I QUERY [conn14] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:314 0ms m31200| 2014-11-26T14:34:08.211-0500 I QUERY [conn10] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31200| 2014-11-26T14:34:08.212-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| 2014-11-26T14:34:08.212-0500 I QUERY [conn10] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:401 0ms m31100| 2014-11-26T14:34:08.212-0500 D NETWORK [migrateThread] creating new connection 
to:ip-10-33-141-202:31200 m31100| 2014-11-26T14:34:08.213-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31200| 2014-11-26T14:34:08.213-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:40573 #11 (9 connections now open) m31100| 2014-11-26T14:34:08.213-0500 D NETWORK [migrateThread] connected to server ip-10-33-141-202:31200 (10.33.141.202) m31100| 2014-11-26T14:34:08.213-0500 D NETWORK [migrateThread] connected connection! m31200| 2014-11-26T14:34:08.214-0500 I QUERY [conn11] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D3735694B6367374A437035624C45414A642F76327A7837347557774676686662) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31200| 2014-11-26T14:34:08.227-0500 I QUERY [conn11] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D3735694B6367374A437035624C45414A642F76327A7837347557774676686662586949436736577A574C6F4666423358414F6A7342352F684157675058...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31200| 2014-11-26T14:34:08.227-0500 I ACCESS [conn11] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:34:08.227-0500 I QUERY [conn11] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31200| 2014-11-26T14:34:08.228-0500 I QUERY [conn11] command admin.$cmd command: getLastError { getlasterror: 1 } ntoreturn:1 keyUpdates:0 reslen:110 0ms m31100| 2014-11-26T14:34:08.228-0500 I QUERY [conn14] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:314 0ms m31200| 2014-11-26T14:34:08.228-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, 
state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| 2014-11-26T14:34:08.228-0500 I QUERY [conn11] query fooSharded.system.namespaces query: { name: "fooSharded.barSharded" } planSummary: EOF ntoreturn:1 ntoskip:0 nscanned:0 nscannedObjects:0 keyUpdates:0 nreturned:0 reslen:20 0ms m31100| 2014-11-26T14:34:08.228-0500 D STORAGE [migrateThread] create collection fooSharded.barSharded {} m31100| 2014-11-26T14:34:08.228-0500 D STORAGE [migrateThread] stored meta data for fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.228-0500 D STORAGE [migrateThread] WiredTigerKVEngine::createRecordStore uri: table:collection-9--118320920160305333 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31100| 2014-11-26T14:34:08.232-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31200| 2014-11-26T14:34:08.232-0500 D STORAGE [conn11] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:08.232-0500 D STORAGE [conn11] looking up metadata for: fooSharded.barSharded @ 0:5 m31200| 2014-11-26T14:34:08.232-0500 I QUERY [conn11] command fooSharded.$cmd command: listIndexes { listIndexes: "barSharded" } ntoreturn:1 keyUpdates:0 reslen:130 0ms m31100| 2014-11-26T14:34:08.232-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.232-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.232-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.232-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.232-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.233-0500 D STORAGE [migrateThread] create uri: 
table:index-10--118320920160305333 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "fooSharded.barSharded" } m31100| 2014-11-26T14:34:08.234-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:18566677883 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:120 596ms m31101| 2014-11-26T14:34:08.234-0500 D STORAGE [repl writer worker 15] create collection fooSharded.barSharded {} m31101| 2014-11-26T14:34:08.234-0500 D STORAGE [repl writer worker 15] stored meta data for fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:34:08.235-0500 D STORAGE [repl writer worker 15] WiredTigerKVEngine::createRecordStore uri: table:collection-11--377709408879965486 config: type=file,memory_page_max=100m,block_compressor=snappy,,app_metadata=(),key_format=q,value_format=u m31100| 2014-11-26T14:34:08.238-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.238-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.238-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.238-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.238-0500 I INDEX [migrateThread] build index on: fooSharded.barSharded properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "fooSharded.barSharded" } m31100| 2014-11-26T14:34:08.238-0500 I INDEX [migrateThread] building index using bulk method m31100| 2014-11-26T14:34:08.238-0500 D STORAGE [migrateThread] fooSharded.barSharded: clearing plan cache - collection info cache reset m31100| 2014-11-26T14:34:08.238-0500 D INDEX [migrateThread] bulk commit starting for index: _id_ m31101| 2014-11-26T14:34:08.239-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:34:08.239-0500 
D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:34:08.239-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:34:08.239-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:34:08.239-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31100| 2014-11-26T14:34:08.239-0500 D INDEX [migrateThread] done building bottom layer, going to commit m31101| 2014-11-26T14:34:08.239-0500 D STORAGE [repl writer worker 15] create uri: table:index-12--377709408879965486 config: type=file,leaf_page_max=16k,,key_format=u,value_format=u,collator=mongo_index,app_metadata={ "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "fooSharded.barSharded" } m31100| 2014-11-26T14:34:08.244-0500 I INDEX [migrateThread] build index done. scanned 0 total records. 0 secs m31100| 2014-11-26T14:34:08.244-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.245-0500 D STORAGE [migrateThread] looking up metadata for: fooSharded.barSharded @ 0:6 m31100| 2014-11-26T14:34:08.245-0500 D STORAGE [migrateThread] fooSharded.barSharded: clearing plan cache - collection info cache reset m31100| 2014-11-26T14:34:08.245-0500 D STORAGE [migrateThread] fooSharded.barSharded: clearing plan cache - collection info cache reset m31100| 2014-11-26T14:34:08.245-0500 I SHARDING [migrateThread] Deleter starting delete for: fooSharded.barSharded from { _id: MinKey } -> { _id: 0.0 }, with opId: 149 m31100| 2014-11-26T14:34:08.245-0500 D SHARDING [migrateThread] begin removal of { : MinKey } to { : 0.0 } in fooSharded.barSharded with write concern: { w: 2, wtimeout: 60000 } m31101| 2014-11-26T14:34:08.245-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:34:08.245-0500 D STORAGE 
[repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:34:08.245-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:34:08.245-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:34:08.245-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:34:08.245-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31101| 2014-11-26T14:34:08.245-0500 D STORAGE [repl writer worker 15] fooSharded.barSharded: clearing plan cache - collection info cache reset m31101| 2014-11-26T14:34:08.245-0500 D STORAGE [repl writer worker 15] looking up metadata for: fooSharded.barSharded @ 0:7 m31100| 2014-11-26T14:34:08.245-0500 I SHARDING [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| 2014-11-26T14:34:08.245-0500 D SHARDING [migrateThread] end removal of { : MinKey } to { : 0.0 } in fooSharded.barSharded (took 0ms) m31100| 2014-11-26T14:34:08.245-0500 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for fooSharded.barSharded from { _id: MinKey } -> { _id: 0.0 } m31100| 2014-11-26T14:34:08.245-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b19285250b145f645f2'), optime: Timestamp 1417030448000|1, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31100| 2014-11-26T14:34:08.247-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:18566677883 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:192 10ms m31100| 2014-11-26T14:34:08.247-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { 
replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b19285250b145f645f2'), optime: Timestamp 1417030448000|2, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31100| 2014-11-26T14:34:08.247-0500 D SHARDING [migrateThread] rangeDeleter took 0 seconds waiting for deletes to be replicated to majority nodes m31200| 2014-11-26T14:34:08.247-0500 I QUERY [conn11] command admin.$cmd command: _migrateClone { _migrateClone: 1 } ntoreturn:1 keyUpdates:0 reslen:51 0ms m31200| 2014-11-26T14:34:08.248-0500 I QUERY [conn11] command admin.$cmd command: _transferMods { _transferMods: 1 } ntoreturn:1 keyUpdates:0 reslen:51 0ms m31100| 2014-11-26T14:34:08.248-0500 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section m31100| 2014-11-26T14:34:08.248-0500 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'fooSharded.barSharded' { _id: MinKey } -> { _id: 0.0 } m31200| 2014-11-26T14:34:08.248-0500 I QUERY [conn11] command admin.$cmd command: _transferMods { _transferMods: 1 } ntoreturn:1 keyUpdates:0 reslen:51 0ms m31200| 2014-11-26T14:34:08.258-0500 I QUERY [conn11] command admin.$cmd command: _transferMods { _transferMods: 1 } ntoreturn:1 keyUpdates:0 reslen:51 0ms m31100| 2014-11-26T14:34:08.260-0500 I QUERY [conn14] command admin.$cmd command: _recvChunkStatus { _recvChunkStatus: 1 } ntoreturn:1 keyUpdates:0 reslen:315 0ms m31200| 2014-11-26T14:34:08.260-0500 I SHARDING [conn7] moveChunk data transfer progress: { active: true, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| 2014-11-26T14:34:08.260-0500 I 
SHARDING [conn7] About to check if it is safe to enter critical section m31200| 2014-11-26T14:34:08.260-0500 I SHARDING [conn7] About to enter migrate critical section m31200| 2014-11-26T14:34:08.260-0500 I SHARDING [conn7] moveChunk setting version to: 2|0||54762b2fba042ce88d252a60 m31200| 2014-11-26T14:34:08.261-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:31100 m31200| 2014-11-26T14:34:08.261-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31100| 2014-11-26T14:34:08.261-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38117 #15 (11 connections now open) m31200| 2014-11-26T14:34:08.261-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:31100 (10.33.141.202) m31200| 2014-11-26T14:34:08.261-0500 D NETWORK [conn7] connected connection! m31100| 2014-11-26T14:34:08.263-0500 I QUERY [conn15] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D756456564F526B5779417777692F34626139367A3538537068786855734E3437) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31200| 2014-11-26T14:34:08.268-0500 I QUERY [conn11] command admin.$cmd command: _transferMods { _transferMods: 1 } ntoreturn:1 keyUpdates:0 reslen:51 0ms m31100| 2014-11-26T14:34:08.276-0500 I QUERY [conn15] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D756456564F526B5779417777692F34626139367A3538537068786855734E3437653247565845544C48447446437A546530765353656F5870414A303149...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:34:08.276-0500 I ACCESS [conn15] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:34:08.276-0500 I QUERY [conn15] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31200| 2014-11-26T14:34:08.279-0500 I QUERY [conn11] command admin.$cmd command: 
_transferMods { _transferMods: 1 } ntoreturn:1 keyUpdates:0 reslen:51 0ms m31100| 2014-11-26T14:34:08.279-0500 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'fooSharded.barSharded' { _id: MinKey } -> { _id: 0.0 } m31100| 2014-11-26T14:34:08.279-0500 I SHARDING [migrateThread] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:08-54762b30331da6b15b6573ac", server: "ip-10-33-141-202", clientAddr: ":27017", time: new Date(1417030448279), what: "moveChunk.to", ns: "fooSharded.barSharded", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step 1 of 5: 49, step 2 of 5: 2, step 3 of 5: 0, step 4 of 5: 0, step 5 of 5: 31, note: "success" } } m31100| 2014-11-26T14:34:08.279-0500 D NETWORK [migrateThread] creating new connection to:ip-10-33-141-202:29000 m31100| 2014-11-26T14:34:08.279-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m29000| 2014-11-26T14:34:08.280-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41494 #14 (14 connections now open) m31100| 2014-11-26T14:34:08.280-0500 D NETWORK [migrateThread] connected to server ip-10-33-141-202:29000 (10.33.141.202) m31100| 2014-11-26T14:34:08.280-0500 D NETWORK [migrateThread] connected connection! 
m29000| 2014-11-26T14:34:08.294-0500 I ACCESS [conn14] Successfully authenticated as principal __system on local m29000| 2014-11-26T14:34:08.294-0500 I STORAGE [conn14] CMD fsync: sync:1 lock:0 m31100| 2014-11-26T14:34:08.348-0500 I QUERY [conn15] command admin.$cmd command: _recvChunkCommit { _recvChunkCommit: 1 } ntoreturn:1 keyUpdates:0 reslen:313 71ms m31200| 2014-11-26T14:34:08.348-0500 I SHARDING [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m31200| 2014-11-26T14:34:08.348-0500 I SHARDING [conn7] moveChunk updating self version to: 2|1||54762b2fba042ce88d252a60 through { _id: 0.0 } -> { _id: MaxKey } for collection 'fooSharded.barSharded' m31200| 2014-11-26T14:34:08.348-0500 D NETWORK [conn7] creating new connection to:ip-10-33-141-202:29000 m31200| 2014-11-26T14:34:08.348-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m29000| 2014-11-26T14:34:08.348-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41495 #15 (15 connections now open) m31200| 2014-11-26T14:34:08.349-0500 D NETWORK [conn7] connected to server ip-10-33-141-202:29000 (10.33.141.202) m31200| 2014-11-26T14:34:08.349-0500 D NETWORK [conn7] connected connection! 
m29000| 2014-11-26T14:34:08.363-0500 I ACCESS [conn15] Successfully authenticated as principal __system on local m31200| 2014-11-26T14:34:08.364-0500 I SHARDING [conn7] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:08-54762b302c08972cefc9db6d", server: "ip-10-33-141-202", clientAddr: "10.33.141.202:40553", time: new Date(1417030448364), what: "moveChunk.commit", ns: "fooSharded.barSharded", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "test-rs1", to: "test-rs0", cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 } } m29000| 2014-11-26T14:34:08.364-0500 I STORAGE [conn12] CMD fsync: sync:1 lock:0 m31200| 2014-11-26T14:34:08.441-0500 I SHARDING [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m31200| 2014-11-26T14:34:08.441-0500 I SHARDING [conn7] MigrateFromStatus::done coll lock for fooSharded.barSharded acquired m31200| 2014-11-26T14:34:08.441-0500 I SHARDING [conn7] forking for cleanup of chunk data m31200| 2014-11-26T14:34:08.442-0500 I SHARDING [RangeDeleter] Deleter starting delete for: fooSharded.barSharded from { _id: MinKey } -> { _id: 0.0 }, with opId: 5 m31200| 2014-11-26T14:34:08.442-0500 D SHARDING [RangeDeleter] begin removal of { : MinKey } to { : 0.0 } in fooSharded.barSharded with write concern: { w: 2, wtimeout: 60000 } m31200| 2014-11-26T14:34:08.442-0500 I SHARDING [RangeDeleter] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31200| 2014-11-26T14:34:08.442-0500 D SHARDING [RangeDeleter] end removal of { : MinKey } to { : 0.0 } in fooSharded.barSharded (took 0ms) m31200| 2014-11-26T14:34:08.442-0500 I SHARDING [RangeDeleter] rangeDeleter deleted 0 documents for fooSharded.barSharded from { _id: MinKey } -> { _id: 0.0 } m31200| 2014-11-26T14:34:08.442-0500 D SHARDING [RangeDeleter] rangeDeleter took 0 seconds waiting for deletes to be replicated to majority nodes m31200| 2014-11-26T14:34:08.442-0500 I SHARDING [conn7] MigrateFromStatus::done About 
to acquire global write lock to exit critical section m31200| 2014-11-26T14:34:08.442-0500 I SHARDING [conn7] MigrateFromStatus::done coll lock for fooSharded.barSharded acquired m31200| 2014-11-26T14:34:08.442-0500 I SHARDING [conn7] distributed lock 'fooSharded.barSharded/ip-10-33-141-202:31200:1417030447:1473176912' unlocked. m31200| 2014-11-26T14:34:08.442-0500 I SHARDING [conn7] about to log metadata event: { _id: "ip-10-33-141-202-2014-11-26T19:34:08-54762b302c08972cefc9db6e", server: "ip-10-33-141-202", clientAddr: "10.33.141.202:40553", time: new Date(1417030448442), what: "moveChunk.from", ns: "fooSharded.barSharded", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step 1 of 6: 0, step 2 of 6: 70, step 3 of 6: 31, step 4 of 6: 65, step 5 of 6: 181, step 6 of 6: 0, to: "test-rs0", from: "test-rs1", note: "success" } } m29000| 2014-11-26T14:34:08.443-0500 I STORAGE [conn12] CMD fsync: sync:1 lock:0 m31200| 2014-11-26T14:34:08.502-0500 I QUERY [conn7] command admin.$cmd command: moveChunk { moveChunk: "fooSharded.barSharded", from: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", to: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", fromShard: "test-rs1", toShard: "test-rs0", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "fooSharded.barSharded-_id_MinKey", configdb: "ip-10-33-141-202:29000", secondaryThrottle: true, waitForDelete: false, maxTimeMS: 0, epoch: ObjectId('54762b2fba042ce88d252a60') } ntoreturn:1 keyUpdates:0 reslen:37 409ms m30999| 2014-11-26T14:34:08.503-0500 I SHARDING [conn1] ChunkManager: time to load chunks for fooSharded.barSharded: 0ms sequenceNumber: 4 version: 2|1||54762b2fba042ce88d252a60 based on: 1|2||54762b2fba042ce88d252a60 --- Sharding Status --- sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("54762b2dba042ce88d252a53") } shards: { "_id" : "test-rs0", "host" : "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101" } { 
"_id" : "test-rs1", "host" : "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201" } { "_id" : "test-rs2", "host" : "test-rs2/ip-10-33-141-202:31300,ip-10-33-141-202:31301" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "fooUnsharded", "partitioned" : false, "primary" : "test-rs0" } { "_id" : "fooSharded", "partitioned" : true, "primary" : "test-rs1" } fooSharded.barSharded shard key: { "_id" : 1 } chunks: test-rs0 1 test-rs1 1 { "_id" : { "$minKey" : 1 } } -->> { "_id" : 0 } on : test-rs0 Timestamp(2, 0) { "_id" : 0 } -->> { "_id" : { "$maxKey" : 1 } } on : test-rs1 Timestamp(2, 1) ---- Setting up database users... ---- m30999| 2014-11-26T14:34:08.524-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030444:1804289383' acquired, ts : 54762b30ba042ce88d252a62 m29000| 2014-11-26T14:34:08.524-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:34:08.595-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0 m30999| 2014-11-26T14:34:08.627-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030444:1804289383' unlocked. Successfully added user: { "user" : "shardedDBUser", "roles" : [ "readWrite" ] } m30999| 2014-11-26T14:34:08.642-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030444:1804289383' acquired, ts : 54762b30ba042ce88d252a63 m29000| 2014-11-26T14:34:08.642-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0 m29000| 2014-11-26T14:34:08.728-0500 I STORAGE [conn8] CMD fsync: sync:1 lock:0 m30999| 2014-11-26T14:34:08.762-0500 I SHARDING [conn1] distributed lock 'authorizationData/ip-10-33-141-202:30999:1417030444:1804289383' unlocked. Successfully added user: { "user" : "unshardedDBUser", "roles" : [ "readWrite" ] } ---- Inserting initial data... 
---- m30999| 2014-11-26T14:34:08.763-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49017 #2 (2 connections now open) m30999| 2014-11-26T14:34:08.778-0500 I ACCESS [conn2] Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:34:08.794-0500 I ACCESS [conn2] Successfully authenticated as principal unshardedDBUser on fooUnsharded m30999| 2014-11-26T14:34:08.808-0500 I ACCESS [conn2] Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:34:08.823-0500 I ACCESS [conn2] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31100| 2014-11-26T14:34:08.824-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: -1.0 } ninserted:0 keyUpdates:0 exception: stale shard version detected before write, received 2|0||54762b2fba042ce88d252a60 but local version is 0|0||54762b2fba042ce88d252a60 code:63 0ms m31100| 2014-11-26T14:34:08.824-0500 D SHARDING [conn8] metadata version update requested for fooSharded.barSharded, from shard version 0|0||54762b2fba042ce88d252a60 to 2|0||54762b2fba042ce88d252a60, need to verify with config server m31100| 2014-11-26T14:34:08.824-0500 I SHARDING [conn8] remotely refreshing metadata for fooSharded.barSharded with requested shard version 2|0||54762b2fba042ce88d252a60 based on current shard version 0|0||54762b2fba042ce88d252a60, current metadata version is 1|2||54762b2fba042ce88d252a60 m31100| 2014-11-26T14:34:08.824-0500 I SHARDING [conn8] updating metadata for fooSharded.barSharded from shard version 0|0||54762b2fba042ce88d252a60 to shard version 2|0||54762b2fba042ce88d252a60 m31100| 2014-11-26T14:34:08.824-0500 I SHARDING [conn8] collection version was loaded at version 2|1||54762b2fba042ce88d252a60, took 0ms m31100| 2014-11-26T14:34:08.824-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: -1.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 
2000|0, ObjectId('54762b2fba042ce88d252a60') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:329 0ms m30999| 2014-11-26T14:34:08.826-0500 I SHARDING [conn2] ChunkManager: time to load chunks for fooSharded.barSharded: 0ms sequenceNumber: 5 version: 2|1||54762b2fba042ce88d252a60 based on: (empty) m30999| 2014-11-26T14:34:08.826-0500 I SHARDING [conn2] ChunkManager: time to load chunks for fooSharded.barSharded: 0ms sequenceNumber: 6 version: 2|1||54762b2fba042ce88d252a60 based on: 2|1||54762b2fba042ce88d252a60 m30999| 2014-11-26T14:34:08.826-0500 W SHARDING [conn2] chunk manager reload forced for collection 'fooSharded.barSharded', config version is 2|1||54762b2fba042ce88d252a60 m31100| 2014-11-26T14:34:08.827-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: -1.0 } ninserted:1 keyUpdates:0 0ms m31100| 2014-11-26T14:34:08.827-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:18566677883 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:116 577ms m30999| 2014-11-26T14:34:08.827-0500 I NETWORK [conn2] scoped connection to ip-10-33-141-202:29000 not being returned to the pool m31100| 2014-11-26T14:34:08.827-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: -1.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 2000|0, ObjectId('54762b2fba042ce88d252a60') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m29000| 2014-11-26T14:34:08.827-0500 I NETWORK [conn3] end connection 10.33.141.202:41457 (14 connections now open) m31100| 2014-11-26T14:34:08.827-0500 I QUERY [conn7] command admin.$cmd command: splitVector { splitVector: "fooSharded.barSharded", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 885921, maxSplitPoints: 0, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 reslen:53 0ms m31100| 2014-11-26T14:34:08.827-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { 
_id: ObjectId('54762b19285250b145f645f2'), optime: Timestamp 1417030448000|3, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31200| 2014-11-26T14:34:08.828-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: 1.0 } ninserted:1 keyUpdates:0 0ms m31200| 2014-11-26T14:34:08.829-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: 1.0 } ], ordered: true, metadata: { shardName: "test-rs1", shardVersion: [ Timestamp 2000|1, ObjectId('54762b2fba042ce88d252a60') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m30999| 2014-11-26T14:34:08.829-0500 I NETWORK [conn2] scoped connection to ip-10-33-141-202:29000 not being returned to the pool m29000| 2014-11-26T14:34:08.829-0500 I NETWORK [conn4] end connection 10.33.141.202:41458 (13 connections now open) m31200| 2014-11-26T14:34:08.829-0500 I QUERY [conn7] command admin.$cmd command: splitVector { splitVector: "fooSharded.barSharded", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 885917, maxSplitPoints: 0, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 reslen:53 0ms m31100| 2014-11-26T14:34:08.831-0500 I WRITE [conn8] insert fooUnsharded.barUnsharded query: { _id: 1.0 } ninserted:1 keyUpdates:0 0ms m31100| 2014-11-26T14:34:08.831-0500 I QUERY [conn8] command fooUnsharded.$cmd command: insert { insert: "barUnsharded", documents: [ { _id: 1.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m31100| 2014-11-26T14:34:08.831-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:18566677883 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:120 2ms ---- Stopping primary of third shard... 
---- m31100| 2014-11-26T14:34:08.832-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b19285250b145f645f2'), optime: Timestamp 1417030448000|4, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m30999| 2014-11-26T14:34:08.832-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49018 #3 (3 connections now open) m30999| 2014-11-26T14:34:08.846-0500 I ACCESS [conn3] Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:34:08.861-0500 I ACCESS [conn3] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31300| 2014-11-26T14:34:08.861-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31301| 2014-11-26T14:34:08.862-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms ReplSetTest n: 0 ports: [ 31300, 31301 ] 31300 number ReplSetTest stop *** Shutting down mongod in port 31300 *** m31300| 2014-11-26T14:34:08.862-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends m31300| 2014-11-26T14:34:08.863-0500 I REPL [signalProcessingThread] Stopping replication applier threads m31300| 2014-11-26T14:34:09.209-0500 I STORAGE [conn2] got request after shutdown() m31301| 2014-11-26T14:34:09.209-0500 D NETWORK [ReplExecNetThread-6] SocketException: remote: 10.33.141.202:31300 error: 9001 socket exception [CLOSED] server [10.33.141.202:31300] m31301| 2014-11-26T14:34:09.209-0500 I NETWORK [ReplExecNetThread-6] DBClientCursor::init call() failed m31301| 2014-11-26T14:34:09.209-0500 D - [ReplExecNetThread-6] User Assertion: 10276:DBClientBase::findN: transport error: ip-10-33-141-202:31300 ns: admin.$cmd 
query: { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } m31301| 2014-11-26T14:34:09.209-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location10276 DBClientBase::findN: transport error: ip-10-33-141-202:31300 ns: admin.$cmd query: { replSetHeartbeat: "test-rs2", pv: 1, v: 1, from: "ip-10-33-141-202:31301", fromId: 1, checkEmpty: false } m31301| 2014-11-26T14:34:09.209-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 1; 0ms have already elapsed m31301| 2014-11-26T14:34:09.210-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:34:09.210-0500 D NETWORK [ReplExecNetThread-1] connected to server ip-10-33-141-202:31300 (10.33.141.202) m31300| 2014-11-26T14:34:09.280-0500 I COMMAND [signalProcessingThread] now exiting m31300| 2014-11-26T14:34:09.280-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... 
m31300| 2014-11-26T14:34:09.280-0500 I NETWORK [signalProcessingThread] closing listening socket: 19 m31301| 2014-11-26T14:34:09.280-0500 I NETWORK [ReplExecNetThread-1] Socket recv() errno:104 Connection reset by peer 10.33.141.202:31300 m31301| 2014-11-26T14:34:09.281-0500 I NETWORK [ReplExecNetThread-1] SocketException: remote: 10.33.141.202:31300 error: 9001 socket exception [RECV_ERROR] server [10.33.141.202:31300] m31301| 2014-11-26T14:34:09.281-0500 I NETWORK [ReplExecNetThread-1] DBClientCursor::init call() failed m31301| 2014-11-26T14:34:09.281-0500 D - [ReplExecNetThread-1] User Assertion: 10276:DBClientBase::findN: transport error: ip-10-33-141-202:31300 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4230567A68426B72695376444272434B594C2B6948486B79544A4F5664306B2B) } m31301| 2014-11-26T14:34:09.281-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location10276 DBClientBase::findN: transport error: ip-10-33-141-202:31300 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D4230567A68426B72695376444272434B594C2B6948486B79544A4F5664306B2B) } m31301| 2014-11-26T14:34:09.281-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 0; 72ms have already elapsed m31301| 2014-11-26T14:34:09.281-0500 I NETWORK [conn3] end connection 10.33.141.202:40981 (1 connection now open) m31300| 2014-11-26T14:34:09.280-0500 I NETWORK [signalProcessingThread] closing listening socket: 20 m31300| 2014-11-26T14:34:09.280-0500 I NETWORK [signalProcessingThread] closing listening socket: 26 m31300| 2014-11-26T14:34:09.280-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31300.sock m31300| 2014-11-26T14:34:09.281-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... 
m31300| 2014-11-26T14:34:09.281-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets... m31300| 2014-11-26T14:34:09.281-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me m31300| 2014-11-26T14:34:09.281-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs m31300| 2014-11-26T14:34:09.281-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log m31300| 2014-11-26T14:34:09.281-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset m31300| 2014-11-26T14:34:09.281-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog m31300| 2014-11-26T14:34:09.281-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down m31300| 2014-11-26T14:34:09.281-0500 I NETWORK [conn1] end connection 127.0.0.1:50926 (3 connections now open) m31300| 2014-11-26T14:34:09.281-0500 I NETWORK [conn6] end connection 10.33.141.202:60635 (3 connections now open) m31300| 2014-11-26T14:34:09.281-0500 I NETWORK [conn7] end connection 10.33.141.202:60636 (3 connections now open) m31300| 2014-11-26T14:34:09.281-0500 I NETWORK [conn5] end connection 10.33.141.202:60614 (3 connections now open) m31301| 2014-11-26T14:34:09.281-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:34:09.281-0500 W NETWORK [ReplExecNetThread-0] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:34:09.282-0500 D - [ReplExecNetThread-0] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:34:09.282-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed 
m31300| 2014-11-26T14:34:09.309-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0 m31201| 2014-11-26T14:34:09.383-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31200", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:120 0ms m31200| 2014-11-26T14:34:09.413-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m31101| 2014-11-26T14:34:09.576-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:158 0ms m31100| 2014-11-26T14:34:09.619-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms 2014-11-26T14:34:09.862-0500 I - shell: stopped mongo program on port 31300 ReplSetTest stop *** Mongod in port 31300 shutdown with code (0) *** ---- Testing active connection with third primary down... 
---- m31100| 2014-11-26T14:34:09.864-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:38124 #16 (12 connections now open) m31100| 2014-11-26T14:34:09.865-0500 I QUERY [conn16] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D76362F33556333454938566D79536F5942755374713374564D4B64696175337A) } ntoreturn:1 keyUpdates:0 reslen:179 0ms m31100| 2014-11-26T14:34:09.878-0500 I QUERY [conn16] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D76362F33556333454938566D79536F5942755374713374564D4B64696175337A2F754F47646E502B66583552477A6A656C4B4D4A47786F7654636D6A37...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms m31100| 2014-11-26T14:34:09.878-0500 I ACCESS [conn16] Successfully authenticated as principal __system on local m31100| 2014-11-26T14:34:09.879-0500 I QUERY [conn16] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms m31100| 2014-11-26T14:34:09.879-0500 D SHARDING [conn16] entering shard mode for connection m31100| 2014-11-26T14:34:09.879-0500 I QUERY [conn16] command admin.$cmd command: setShardVersion { setShardVersion: "fooSharded.barSharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs0", shardHost: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", version: Timestamp 2000|0, versionEpoch: ObjectId('54762b2fba042ce88d252a60') } ntoreturn:1 keyUpdates:0 reslen:251 0ms m31100| 2014-11-26T14:34:09.879-0500 I QUERY [conn16] command admin.$cmd command: setShardVersion { setShardVersion: "fooSharded.barSharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs0", shardHost: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", version: Timestamp 2000|0, versionEpoch: ObjectId('54762b2fba042ce88d252a60'), authoritative: true } ntoreturn:1 keyUpdates:0 reslen:146 0ms m31100| 
2014-11-26T14:34:09.879-0500 I QUERY [conn16] query fooSharded.barSharded query: { _id: -1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms m31200| 2014-11-26T14:34:09.880-0500 I QUERY [conn9] command admin.$cmd command: setShardVersion { setShardVersion: "fooSharded.barSharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs1", shardHost: "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", version: Timestamp 2000|1, versionEpoch: ObjectId('54762b2fba042ce88d252a60') } ntoreturn:1 keyUpdates:0 reslen:146 0ms m31200| 2014-11-26T14:34:09.880-0500 I QUERY [conn9] query fooSharded.barSharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms m31100| 2014-11-26T14:34:09.881-0500 I QUERY [conn16] command admin.$cmd command: setShardVersion { setShardVersion: "fooUnsharded.barUnsharded", configdb: "ip-10-33-141-202:29000", shard: "test-rs0", shardHost: "test-rs0/ip-10-33-141-202:31100,ip-10-33-141-202:31101", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000') } ntoreturn:1 keyUpdates:0 reslen:146 0ms m31100| 2014-11-26T14:34:09.881-0500 I QUERY [conn16] query fooUnsharded.barUnsharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms m31100| 2014-11-26T14:34:09.882-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: -2.0 } ninserted:1 keyUpdates:0 0ms m31100| 2014-11-26T14:34:09.882-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: -2.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 2000|0, ObjectId('54762b2fba042ce88d252a60') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m31200| 2014-11-26T14:34:09.883-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: 2.0 } ninserted:1 keyUpdates:0 0ms m31200| 
2014-11-26T14:34:09.883-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: 2.0 } ], ordered: true, metadata: { shardName: "test-rs1", shardVersion: [ Timestamp 2000|1, ObjectId('54762b2fba042ce88d252a60') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m31100| 2014-11-26T14:34:09.884-0500 I WRITE [conn8] insert fooUnsharded.barUnsharded query: { _id: 2.0 } ninserted:1 keyUpdates:0 0ms m31100| 2014-11-26T14:34:09.884-0500 I QUERY [conn8] command fooUnsharded.$cmd command: insert { insert: "barUnsharded", documents: [ { _id: 2.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m31100| 2014-11-26T14:34:09.884-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:18566677883 ntoreturn:0 keyUpdates:0 nreturned:2 reslen:216 1050ms ---- Testing idle connection with third primary down... ---- m31100| 2014-11-26T14:34:09.885-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b19285250b145f645f2'), optime: Timestamp 1417030449000|2, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31100| 2014-11-26T14:34:09.885-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: -3.0 } ninserted:1 keyUpdates:0 0ms m31100| 2014-11-26T14:34:09.885-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: -3.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 2000|0, ObjectId('54762b2fba042ce88d252a60') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m31200| 2014-11-26T14:34:09.886-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: 3.0 } 
ninserted:1 keyUpdates:0 0ms m31200| 2014-11-26T14:34:09.886-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: 3.0 } ], ordered: true, metadata: { shardName: "test-rs1", shardVersion: [ Timestamp 2000|1, ObjectId('54762b2fba042ce88d252a60') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m31100| 2014-11-26T14:34:09.886-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:18566677883 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:116 0ms m31100| 2014-11-26T14:34:09.887-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b19285250b145f645f2'), optime: Timestamp 1417030449000|3, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31100| 2014-11-26T14:34:09.887-0500 I WRITE [conn8] insert fooUnsharded.barUnsharded query: { _id: 3.0 } ninserted:1 keyUpdates:0 0ms m31100| 2014-11-26T14:34:09.887-0500 I QUERY [conn8] command fooUnsharded.$cmd command: insert { insert: "barUnsharded", documents: [ { _id: 3.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m31100| 2014-11-26T14:34:09.888-0500 I QUERY [conn16] query fooSharded.barSharded query: { _id: -1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms m31200| 2014-11-26T14:34:09.889-0500 I QUERY [conn9] query fooSharded.barSharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms m31100| 2014-11-26T14:34:09.889-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:18566677883 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:120 0ms m31100| 
2014-11-26T14:34:09.889-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b19285250b145f645f2'), optime: Timestamp 1417030449000|4, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m31100| 2014-11-26T14:34:09.889-0500 I QUERY [conn16] query fooUnsharded.barUnsharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms ---- Testing new connections with third primary down... ---- m30999| 2014-11-26T14:34:09.890-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49022 #4 (4 connections now open) m30999| 2014-11-26T14:34:09.905-0500 I ACCESS [conn4] Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:34:09.920-0500 I ACCESS [conn4] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31100| 2014-11-26T14:34:09.920-0500 I QUERY [conn16] query fooSharded.barSharded query: { _id: -1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms m30999| 2014-11-26T14:34:09.921-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49023 #5 (5 connections now open) m30999| 2014-11-26T14:34:09.936-0500 I ACCESS [conn5] Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:34:09.950-0500 I ACCESS [conn5] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31200| 2014-11-26T14:34:09.951-0500 I QUERY [conn9] query fooSharded.barSharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms m30999| 2014-11-26T14:34:09.952-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49024 
#6 (6 connections now open) m30999| 2014-11-26T14:34:09.967-0500 I ACCESS [conn6] Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:34:09.981-0500 I ACCESS [conn6] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31100| 2014-11-26T14:34:09.982-0500 I QUERY [conn16] query fooUnsharded.barUnsharded query: { _id: 1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms m30999| 2014-11-26T14:34:09.983-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49025 #7 (7 connections now open) m30999| 2014-11-26T14:34:09.997-0500 I ACCESS [conn7] Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:34:10.012-0500 I ACCESS [conn7] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31100| 2014-11-26T14:34:10.013-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: -4.0 } ninserted:1 keyUpdates:0 0ms m31100| 2014-11-26T14:34:10.013-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:18566677883 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:116 122ms m31100| 2014-11-26T14:34:10.013-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: -4.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 2000|0, ObjectId('54762b2fba042ce88d252a60') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m31100| 2014-11-26T14:34:10.014-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b19285250b145f645f2'), optime: Timestamp 1417030450000|1, memberID: 1, cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m30999| 2014-11-26T14:34:10.014-0500 I NETWORK [mongosMain] connection 
accepted from 10.33.141.202:49026 #8 (8 connections now open) m30999| 2014-11-26T14:34:10.029-0500 I ACCESS [conn8] Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:34:10.044-0500 I ACCESS [conn8] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31200| 2014-11-26T14:34:10.045-0500 I WRITE [conn8] insert fooSharded.barSharded query: { _id: 4.0 } ninserted:1 keyUpdates:0 0ms m31200| 2014-11-26T14:34:10.045-0500 I QUERY [conn8] command fooSharded.$cmd command: insert { insert: "barSharded", documents: [ { _id: 4.0 } ], ordered: true, metadata: { shardName: "test-rs1", shardVersion: [ Timestamp 2000|1, ObjectId('54762b2fba042ce88d252a60') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m30999| 2014-11-26T14:34:10.046-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49027 #9 (9 connections now open) m30999| 2014-11-26T14:34:10.060-0500 I ACCESS [conn9] Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:34:10.075-0500 I ACCESS [conn9] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31100| 2014-11-26T14:34:10.076-0500 I WRITE [conn8] insert fooUnsharded.barUnsharded query: { _id: 4.0 } ninserted:1 keyUpdates:0 0ms m31100| 2014-11-26T14:34:10.076-0500 I QUERY [conn12] getmore local.oplog.rs cursorid:18566677883 ntoreturn:0 keyUpdates:0 nreturned:1 reslen:120 60ms m31100| 2014-11-26T14:34:10.076-0500 I QUERY [conn8] command fooUnsharded.$cmd command: insert { insert: "barUnsharded", documents: [ { _id: 4.0 } ], ordered: true, metadata: { shardName: "test-rs0", shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ], session: 0 } } ntoreturn:1 keyUpdates:0 reslen:80 0ms m31100| 2014-11-26T14:34:10.076-0500 I QUERY [conn13] command admin.$cmd command: replSetUpdatePosition { replSetUpdatePosition: 1, optimes: [ { _id: ObjectId('54762b19285250b145f645f2'), optime: Timestamp 1417030450000|2, memberID: 1, 
cfgver: 1, config: { _id: 1, host: "ip-10-33-141-202:31101", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } } ] } ntoreturn:1 keyUpdates:0 reslen:37 0ms m30999| 2014-11-26T14:34:10.081-0500 W - [conn4] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:49022 m30999| 2014-11-26T14:34:10.081-0500 W - [conn5] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:49023 m30999| 2014-11-26T14:34:10.081-0500 W - [conn6] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:49024 m30999| 2014-11-26T14:34:10.081-0500 W - [conn7] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:49025 m30999| 2014-11-26T14:34:10.081-0500 W - [conn8] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:49026 m30999| 2014-11-26T14:34:10.085-0500 I - [conn4] m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 0xbc29c9 0xbc2a15 0xbb7a39 0xbbabaf 0x7f58d3368c6b 0x7f58d23fe5ed m30999| ----- BEGIN BACKTRACE ----- m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"7BABAF"},{"b":"7F58D3361000","o":"7C6B"},{"b":"7F58D231C000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFF04AFF000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F58D3361000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F58D30FB000", "path" : 
"/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F58D2D37000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F58D2B2F000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F58D292B000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F58D26A8000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F58D231C000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F58D357D000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F58D20D9000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" : "7F58D1DF3000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F58D1BF0000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F58D19C5000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F58D17AE000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F58D15A3000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F58D139F000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F58D1184000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F58D0F63000", "path" : "/usr/lib64/libselinux.so.1", 
"elfType" : 3, "buildId" : "803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }} m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9] m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc] m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77] m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba] m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9] m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15] m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39] m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x3EF) [0xbbabaf] m30999| libpthread.so.0(+0x7C6B) [0x7f58d3368c6b] m30999| libc.so.6(clone+0x6D) [0x7f58d23fe5ed] m30999| ----- END BACKTRACE ----- m30999| 2014-11-26T14:34:10.085-0500 I NETWORK [conn4] end connection 10.33.141.202:49022 (8 connections now open) m30999| 2014-11-26T14:34:10.089-0500 I - [conn7] m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 0xbc29c9 0xbc2a15 0xbb7a39 0xbbabaf 0x7f58d3368c6b 0x7f58d23fe5ed m30999| ----- BEGIN BACKTRACE ----- m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"7BABAF"},{"b":"7F58D3361000","o":"7C6B"},{"b":"7F58D231C000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFF04AFF000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F58D3361000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F58D30FB000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, 
"buildId" : "AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F58D2D37000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F58D2B2F000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F58D292B000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F58D26A8000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F58D231C000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F58D357D000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F58D20D9000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" : "7F58D1DF3000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F58D1BF0000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F58D19C5000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F58D17AE000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F58D15A3000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F58D139F000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F58D1184000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F58D0F63000", "path" : "/usr/lib64/libselinux.so.1", "elfType" : 3, "buildId" : 
"803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }} m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9] m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc] m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77] m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba] m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9] m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15] m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39] m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x3EF) [0xbbabaf] m30999| libpthread.so.0(+0x7C6B) [0x7f58d3368c6b] m30999| libc.so.6(clone+0x6D) [0x7f58d23fe5ed] m30999| ----- END BACKTRACE ----- m30999| 2014-11-26T14:34:10.089-0500 I NETWORK [conn7] end connection 10.33.141.202:49025 (7 connections now open) m30999| 2014-11-26T14:34:10.093-0500 I - [conn5] m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 0xbc29c9 0xbc2a15 0xbb7a39 0xbbabaf 0x7f58d3368c6b 0x7f58d23fe5ed m30999| ----- BEGIN BACKTRACE ----- m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"7BABAF"},{"b":"7F58D3361000","o":"7C6B"},{"b":"7F58D231C000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFF04AFF000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F58D3361000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F58D30FB000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : 
"AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F58D2D37000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F58D2B2F000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F58D292B000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F58D26A8000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F58D231C000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F58D357D000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F58D20D9000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" : "7F58D1DF3000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F58D1BF0000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F58D19C5000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F58D17AE000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F58D15A3000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F58D139F000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F58D1184000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F58D0F63000", "path" : "/usr/lib64/libselinux.so.1", "elfType" : 3, "buildId" : 
"803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }} m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9] m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc] m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77] m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba] m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9] m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15] m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39] m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x3EF) [0xbbabaf] m30999| libpthread.so.0(+0x7C6B) [0x7f58d3368c6b] m30999| libc.so.6(clone+0x6D) [0x7f58d23fe5ed] m30999| ----- END BACKTRACE ----- m30999| 2014-11-26T14:34:10.093-0500 I NETWORK [conn5] end connection 10.33.141.202:49023 (6 connections now open) ---- Stopping primary of second shard... ---- m30999| 2014-11-26T14:34:10.096-0500 I NETWORK [mongosMain] connection accepted from 10.33.141.202:49028 #10 (7 connections now open) m30999| 2014-11-26T14:34:10.097-0500 I - [conn8] m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 0xbc29c9 0xbc2a15 0xbb7a39 0xbbabaf 0x7f58d3368c6b 0x7f58d23fe5ed m30999| ----- BEGIN BACKTRACE ----- m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"7BABAF"},{"b":"7F58D3361000","o":"7C6B"},{"b":"7F58D231C000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFF04AFF000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F58D3361000", "path" : "/lib64/libpthread.so.0", 
"elfType" : 3, "buildId" : "CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F58D30FB000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F58D2D37000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F58D2B2F000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F58D292B000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F58D26A8000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F58D231C000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F58D357D000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F58D20D9000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" : "7F58D1DF3000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F58D1BF0000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F58D19C5000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F58D17AE000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F58D15A3000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F58D139F000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F58D1184000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F58D0F63000", "path" : "/usr/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }} m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9] m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc] m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77] m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba] m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9] m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15] m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39] m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x3EF) [0xbbabaf] m30999| libpthread.so.0(+0x7C6B) [0x7f58d3368c6b] m30999| libc.so.6(clone+0x6D) [0x7f58d23fe5ed] m30999| ----- END BACKTRACE ----- m30999| 2014-11-26T14:34:10.097-0500 I NETWORK [conn8] end connection 10.33.141.202:49026 (6 connections now open) m30999| 2014-11-26T14:34:10.100-0500 I - [conn6] m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 0xbc29c9 0xbc2a15 0xbb7a39 0xbbabaf 0x7f58d3368c6b 0x7f58d23fe5ed m30999| ----- BEGIN BACKTRACE ----- m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"7BABAF"},{"b":"7F58D3361000","o":"7C6B"},{"b":"7F58D231C000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFF04AFF000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F58D3361000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : 
"CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F58D30FB000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F58D2D37000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F58D2B2F000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F58D292B000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F58D26A8000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F58D231C000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F58D357D000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F58D20D9000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" : "7F58D1DF3000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F58D1BF0000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F58D19C5000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F58D17AE000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F58D15A3000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F58D139F000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F58D1184000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : 
"0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F58D0F63000", "path" : "/usr/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }} m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9] m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc] m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77] m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba] m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9] m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15] m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39] m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x3EF) [0xbbabaf] m30999| libpthread.so.0(+0x7C6B) [0x7f58d3368c6b] m30999| libc.so.6(clone+0x6D) [0x7f58d23fe5ed] m30999| ----- END BACKTRACE ----- m30999| 2014-11-26T14:34:10.100-0500 I NETWORK [conn6] end connection 10.33.141.202:49024 (5 connections now open) m30999| 2014-11-26T14:34:10.112-0500 I ACCESS [conn10] Successfully authenticated as principal shardedDBUser on fooSharded m30999| 2014-11-26T14:34:10.127-0500 I ACCESS [conn10] Successfully authenticated as principal unshardedDBUser on fooUnsharded m31200| 2014-11-26T14:34:10.128-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:34:10.128-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms m31200| 2014-11-26T14:34:10.129-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:401 0ms m31201| 2014-11-26T14:34:10.129-0500 I QUERY [conn1] command admin.$cmd command: isMaster { ismaster: 1.0 } keyUpdates:0 reslen:377 0ms ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number ReplSetTest stop *** Shutting down mongod in port 31200 *** m31200| 2014-11-26T14:34:10.130-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will 
terminate after current cmd ends m31200| 2014-11-26T14:34:10.130-0500 I REPL [signalProcessingThread] Stopping replication applier threads m31201| 2014-11-26T14:34:10.411-0500 I REPL [ReplicationExecutor] syncing from: ip-10-33-141-202:31200 m31201| 2014-11-26T14:34:10.412-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31201| 2014-11-26T14:34:10.412-0500 D NETWORK [rsBackgroundSync] connected to server ip-10-33-141-202:31200 (10.33.141.202) m31200| 2014-11-26T14:34:10.573-0500 I COMMAND [signalProcessingThread] now exiting m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [signalProcessingThread] closing listening socket: 13 m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [signalProcessingThread] closing listening socket: 14 m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [signalProcessingThread] closing listening socket: 20 m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31200.sock m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets... 
m31200| 2014-11-26T14:34:10.573-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: fooSharded.barSharded m31200| 2014-11-26T14:34:10.573-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me m31200| 2014-11-26T14:34:10.573-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs m31200| 2014-11-26T14:34:10.573-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log m31200| 2014-11-26T14:34:10.573-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset m31200| 2014-11-26T14:34:10.573-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog m31200| 2014-11-26T14:34:10.573-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [conn1] end connection 127.0.0.1:50666 (8 connections now open) m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [conn2] end connection 10.33.141.202:40519 (8 connections now open) m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [conn6] end connection 10.33.141.202:40552 (8 connections now open) m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [conn7] end connection 10.33.141.202:40553 (8 connections now open) m31200| 2014-11-26T14:34:10.573-0500 I NETWORK [conn8] end connection 10.33.141.202:40562 (8 connections now open) m31200| 2014-11-26T14:34:10.574-0500 I NETWORK [conn10] end connection 10.33.141.202:40572 (8 connections now open) m31200| 2014-11-26T14:34:10.574-0500 I NETWORK [conn11] end connection 10.33.141.202:40573 (8 connections now open) m31200| 2014-11-26T14:34:10.574-0500 I NETWORK [conn5] end connection 10.33.141.202:40532 (8 connections now open) m31200| 2014-11-26T14:34:10.574-0500 I NETWORK [conn9] end connection 10.33.141.202:40565 (8 connections now open) m31201| 2014-11-26T14:34:10.574-0500 I NETWORK [conn3] end connection 10.33.141.202:53734 (1 connection now open) m31101| 2014-11-26T14:34:10.574-0500 I NETWORK 
[conn5] end connection 10.33.141.202:54038 (2 connections now open) m29000| 2014-11-26T14:34:10.574-0500 I NETWORK [conn12] end connection 10.33.141.202:41488 (12 connections now open) m31201| 2014-11-26T14:34:10.574-0500 I NETWORK [rsBackgroundSync] Socket recv() errno:104 Connection reset by peer 10.33.141.202:31200 m31201| 2014-11-26T14:34:10.574-0500 I NETWORK [rsBackgroundSync] SocketException: remote: 10.33.141.202:31200 error: 9001 socket exception [RECV_ERROR] server [10.33.141.202:31200] m31201| 2014-11-26T14:34:10.574-0500 I NETWORK [rsBackgroundSync] DBClientCursor::init call() failed m31201| 2014-11-26T14:34:10.574-0500 D - [rsBackgroundSync] User Assertion: 10276:DBClientBase::findN: transport error: ip-10-33-141-202:31200 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6552556B7259795434737856764C655143657A4643364A5939706E424E6A6D44) } m31201| 2014-11-26T14:34:10.574-0500 I ACCESS [rsBackgroundSync] can't authenticate to ip-10-33-141-202:31200 (10.33.141.202) failed as internal user, error: DBClientBase::findN: transport error: ip-10-33-141-202:31200 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6552556B7259795434737856764C655143657A4643364A5939706E424E6A6D44) } m31201| 2014-11-26T14:34:10.574-0500 I REPL [rsBackgroundSync] repl: m29000| 2014-11-26T14:34:10.574-0500 I NETWORK [conn10] end connection 10.33.141.202:41486 (12 connections now open) m31201| 2014-11-26T14:34:10.574-0500 I REPL [ReplicationExecutor] could not find member to sync from m31100| 2014-11-26T14:34:10.574-0500 I NETWORK [conn10] end connection 10.33.141.202:38103 (11 connections now open) m29000| 2014-11-26T14:34:10.574-0500 I NETWORK [conn11] end connection 10.33.141.202:41487 (10 connections now open) m31100| 2014-11-26T14:34:10.574-0500 I NETWORK [conn14] end connection 10.33.141.202:38113 (10 connections now open) m29000| 
2014-11-26T14:34:10.574-0500 I NETWORK [conn9] end connection 10.33.141.202:41485 (9 connections now open) m29000| 2014-11-26T14:34:10.575-0500 I NETWORK [conn15] end connection 10.33.141.202:41495 (8 connections now open) m31100| 2014-11-26T14:34:10.575-0500 I NETWORK [conn15] end connection 10.33.141.202:38117 (9 connections now open) m31200| 2014-11-26T14:34:10.615-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0 2014-11-26T14:34:11.130-0500 I - shell: stopped mongo program on port 31200 ReplSetTest stop *** Mongod in port 31200 shutdown with code (0) *** ---- Testing active connection with second primary down... ---- m31100| 2014-11-26T14:34:11.131-0500 I QUERY [conn16] query fooSharded.barSharded query: { _id: -1.0 } planSummary: IDHACK ntoskip:0 nscanned:1 nscannedObjects:1 idhack:1 keyUpdates:0 nreturned:1 reslen:38 0ms m30999| 2014-11-26T14:34:11.133-0500 W - [conn2] DBException thrown :: caused by :: 9001 socket exception [CLOSED] for 10.33.141.202:31200 m30999| 2014-11-26T14:34:11.140-0500 I - [conn2] m30999| 0xc071b9 0xb943cc 0xbc1f77 0xbc29ba 0xbc29c9 0xbc2a15 0xbb7a39 0x7b1c15 0x7c6152 0x7d0cfd 0x7de6c7 0xb08613 0xa67093 0xa66489 0xaf5edd 0xa81c8f 0xb07c2d 0xaf5461 0x7695a8 0xbbabd1 0x7f58d3368c6b 0x7f58d23fe5ed m30999| ----- BEGIN BACKTRACE ----- m30999| {"backtrace":[{"b":"400000","o":"8071B9"},{"b":"400000","o":"7943CC"},{"b":"400000","o":"7C1F77"},{"b":"400000","o":"7C29BA"},{"b":"400000","o":"7C29C9"},{"b":"400000","o":"7C2A15"},{"b":"400000","o":"7B7A39"},{"b":"400000","o":"3B1C15"},{"b":"400000","o":"3C6152"},{"b":"400000","o":"3D0CFD"},{"b":"400000","o":"3DE6C7"},{"b":"400000","o":"708613"},{"b":"400000","o":"667093"},{"b":"400000","o":"666489"},{"b":"400000","o":"6F5EDD"},{"b":"400000","o":"681C8F"},{"b":"400000","o":"707C2D"},{"b":"400000","o":"6F5461"},{"b":"400000","o":"3695A8"},{"b":"400000","o":"7BABD1"},{"b":"7F58D3361000","o":"7C6B"},{"b":"7F58D231C000","o":"E25ED"}],"processInfo":{ "mongodbVersion" : "2.8.0-rc2-pre-", "gitVersion" 
: "45790039049d7375beafe122622363d35ce990c2", "uname" : { "sysname" : "Linux", "release" : "3.4.43-43.43.amzn1.x86_64", "version" : "#1 SMP Mon May 6 18:04:41 UTC 2013", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000" }, { "b" : "7FFF04AFF000", "elfType" : 3, "buildId" : "29B1BE128D1CD74EF11FFB8546C70D9BD5691168" }, { "b" : "7F58D3361000", "path" : "/lib64/libpthread.so.0", "elfType" : 3, "buildId" : "CD5AAC30FD9161B40651639583A8600AFEDC9C4C" }, { "b" : "7F58D30FB000", "path" : "/usr/lib64/libssl.so.10", "elfType" : 3, "buildId" : "AB341F36095E832872A333DD8418D88879D3CE3A" }, { "b" : "7F58D2D37000", "path" : "/lib64/libcrypto.so.10", "elfType" : 3, "buildId" : "2E24651788AF4247D2358B7AE73FD0E42EF4123C" }, { "b" : "7F58D2B2F000", "path" : "/lib64/librt.so.1", "elfType" : 3, "buildId" : "71D3B1475C8376D90DB02C1BC9D44C662B588B44" }, { "b" : "7F58D292B000", "path" : "/lib64/libdl.so.2", "elfType" : 3, "buildId" : "1F0D8E5A3A05C51AB017DD3B25DCA5A84691EA29" }, { "b" : "7F58D26A8000", "path" : "/lib64/libm.so.6", "elfType" : 3, "buildId" : "A7844DD3B5847BF8480B549FD96EF34C7AA10CB6" }, { "b" : "7F58D231C000", "path" : "/lib64/libc.so.6", "elfType" : 3, "buildId" : "93179477188BD673E8EECF305C7D14B3824DBDE5" }, { "b" : "7F58D357D000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "1690D895D998DA3903D3327815C41143B8131168" }, { "b" : "7F58D20D9000", "path" : "/lib64/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "9DF61878D8918F25CC74AD01F417FDB051DFE3DA" }, { "b" : "7F58D1DF3000", "path" : "/lib64/libkrb5.so.3", "elfType" : 3, "buildId" : "6F1DB0F811D1B210520443442D4437BC43BF9A80" }, { "b" : "7F58D1BF0000", "path" : "/lib64/libcom_err.so.2", "elfType" : 3, "buildId" : "1A6E97644CC9149C2E1871C6AE1DB51975E78A41" }, { "b" : "7F58D19C5000", "path" : "/lib64/libk5crypto.so.3", "elfType" : 3, "buildId" : "F7DF34078FD7BFD684FE46D5F677EEDA1D9B9DC9" }, { "b" : "7F58D17AE000", "path" : "/lib64/libz.so.1", "elfType" : 3, "buildId" : "E492542502DF88A2F752AD77D1905D13FF1AC6FF" }, { "b" : "7F58D15A3000", "path" : "/lib64/libkrb5support.so.0", "elfType" : 3, "buildId" : "381960ACAB9C39461D58BDE7B272C4F61BB3582F" }, { "b" : "7F58D139F000", "path" : "/lib64/libkeyutils.so.1", "elfType" : 3, "buildId" : "BF48CD5658DE95CE058C4B828E81C97E2AE19643" }, { "b" : "7F58D1184000", "path" : "/lib64/libresolv.so.2", "elfType" : 3, "buildId" : "0B8C3A6D8A1FF1E638C0EC551635FD4F5393B258" }, { "b" : "7F58D0F63000", "path" : "/usr/lib64/libselinux.so.1", "elfType" : 3, "buildId" : "803D7EF21A989677D056E52BAEB9AB5B154FB9D9" } ] }}
m30999| mongos(_ZN5mongo15printStackTraceERSo+0x29) [0xc071b9]
m30999| mongos(_ZN5mongo11DBException13traceIfNeededERKS0_+0x12C) [0xb943cc]
m30999| mongos(_ZN5mongo6Socket15handleRecvErrorEii+0x917) [0xbc1f77]
m30999| mongos(_ZN5mongo6Socket5_recvEPci+0x6A) [0xbc29ba]
m30999| mongos(_ZN5mongo6Socket11unsafe_recvEPci+0x9) [0xbc29c9]
m30999| mongos(_ZN5mongo6Socket4recvEPci+0x35) [0xbc2a15]
m30999| mongos(_ZN5mongo13MessagingPort4recvERNS_7MessageE+0xA9) [0xbb7a39]
m30999| mongos(_ZN5mongo18DBClientConnection4recvERNS_7MessageE+0x15) [0x7b1c15]
m30999| mongos(_ZN5mongo18DBClientReplicaSet4recvERNS_7MessageE+0x22) [0x7c6152]
m30999| mongos(_ZN5mongo14DBClientCursor14initLazyFinishERb+0x2D) [0x7d0cfd]
m30999| mongos(_ZN5mongo27ParallelSortClusteredCursor10finishInitEv+0x277) [0x7de6c7]
m30999| mongos(_ZN5mongo8Strategy9commandOpERKSsRKNS_7BSONObjEiS2_S5_PSt6vectorINS0_13CommandResultESaIS7_EE+0x113) [0xb08613]
m30999| mongos(_ZNK5mongo14ClusterFindCmd7explainEPNS_16OperationContextERKSsRKNS_7BSONObjENS_13ExplainCommon9VerbosityEPNS_14BSONObjBuilderE+0x253) [0xa67093]
m30999| mongos(_ZN5mongo17ClusterExplainCmd3runEPNS_16OperationContextERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x169) [0xa66489]
m30999| mongos(_ZN5mongo7Command22execCommandClientBasicEPNS_16OperationContextEPS0_RNS_11ClientBasicEiPKcRNS_7BSONObjERNS_14BSONObjBuilderEb+0x3FD) [0xaf5edd]
m30999| mongos(_ZN5mongo7Command20runAgainstRegisteredEPKcRNS_7BSONObjERNS_14BSONObjBuilderEi+0x22F) [0xa81c8f]
m30999| mongos(_ZN5mongo8Strategy15clientCommandOpERNS_7RequestE+0x1BD) [0xb07c2d]
m30999| mongos(_ZN5mongo7Request7processEi+0x591) [0xaf5461]
m30999| mongos(_ZN5mongo21ShardedMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x58) [0x7695a8]
m30999| mongos(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x411) [0xbbabd1]
m30999| libpthread.so.0(+0x7C6B) [0x7f58d3368c6b]
m30999| libc.so.6(clone+0x6D) [0x7f58d23fe5ed]
m30999| ----- END BACKTRACE -----
m30999| 2014-11-26T14:34:11.140-0500 I NETWORK [conn2] DBClientCursor::init lazy say() failed
m30999| 2014-11-26T14:34:11.140-0500 I NETWORK [conn2] DBClientCursor::init message from say() was empty
m30999| 2014-11-26T14:34:11.140-0500 I NETWORK [conn2] slave no longer has secondary status: ip-10-33-141-202:31200
m31201| 2014-11-26T14:34:11.141-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:53806 #4 (2 connections now open)
m31201| 2014-11-26T14:34:11.143-0500 I QUERY [conn4] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D477533574F61624B5A583578536B4A7977446748707665546C757166695A5576) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31201| 2014-11-26T14:34:11.156-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D477533574F61624B5A583578536B4A7977446748707665546C757166695A5576623136647756324F714A4873366B6F383678484579324D56755A775538...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31201| 2014-11-26T14:34:11.156-0500 I ACCESS [conn4] Successfully authenticated as principal __system on local
m31201| 2014-11-26T14:34:11.156-0500 I QUERY [conn4] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31201| 2014-11-26T14:34:11.157-0500 I QUERY [conn4] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:377 0ms
m31201| 2014-11-26T14:34:11.157-0500 I QUERY [conn4] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:377 0ms
m31201| 2014-11-26T14:34:11.157-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:53807 #5 (3 connections now open)
m31201| 2014-11-26T14:34:11.159-0500 I QUERY [conn5] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D35453731565233415A4F576650514176585844662F6439337A375554564C4F42) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31201| 2014-11-26T14:34:11.172-0500 I QUERY [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D35453731565233415A4F576650514176585844662F6439337A375554564C4F424958436E2F34774376634578496746684C4F465768477178526E457044...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31201| 2014-11-26T14:34:11.172-0500 I ACCESS [conn5] Successfully authenticated as principal __system on local
m31201| 2014-11-26T14:34:11.172-0500 I QUERY [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31201| 2014-11-26T14:34:11.172-0500 I QUERY [conn5] command admin.$cmd command: isMaster { isMaster: 1 } ntoreturn:1 keyUpdates:0 reslen:377 0ms
m31201| 2014-11-26T14:34:11.174-0500 I QUERY [conn5] command local.$cmd command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D736B734F66652F793442635A58554A506436436F504A586E71474D39332F7566) } ntoreturn:1 keyUpdates:0 reslen:179 0ms
m31201| 2014-11-26T14:34:11.186-0500 I QUERY [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, 633D626977732C723D736B734F66652F793442635A58554A506436436F504A586E71474D39332F7566305A626A4E74637570414534424D7A6678664761437067616B71333357...), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:108 0ms
m31201| 2014-11-26T14:34:11.187-0500 I ACCESS [conn5] Successfully authenticated as principal __system on local
m31201| 2014-11-26T14:34:11.187-0500 I QUERY [conn5] command local.$cmd command: saslContinue { saslContinue: 1, payload: BinData(0, ), conversationId: 1 } ntoreturn:1 keyUpdates:0 reslen:78 0ms
m31201| 2014-11-26T14:34:11.187-0500 I QUERY [conn5] command fooSharded.$cmd command: explain { explain: { find: "barSharded", filter: { _id: 1.0 }, options: { slaveOk: true } }, verbosity: "allPlansExecution" } ntoreturn:1 keyUpdates:0 reslen:659 0ms
REN: exp: { "queryPlanner" : { "mongosPlannerVersion" : 1, "winningPlan" : { "stage" : "SINGLE_SHARD", "shards" : [ { "shardName" : "test-rs1", "connectionString" : "test-rs1/ip-10-33-141-202:31200,ip-10-33-141-202:31201", "serverInfo" : { "host" : "ip-10-33-141-202", "port" : 31201, "version" : "2.8.0-rc2-pre-", "gitVersion" : "45790039049d7375beafe122622363d35ce990c2" }, "plannerVersion" : 1, "parsedQuery" : { "_id" : { "$eq" : 1 } }, "winningPlan" : { "stage" : "EOF" }, "rejectedPlans" : [ ] } ] } }, "executionStats" : { "nReturned" : 0, "executionTimeMillis" : 54, "totalKeysExamined" : 0, "totalDocsExamined" : 0, "executionStages" : { "stage" : "SINGLE_SHARD", "nReturned" : 0, "executionTimeMillis" : 54, "totalKeysExamined" : 0, "totalDocsExamined" : 0, "totalChildMillis" : NumberLong(0), "shards" : [ { "shardName" : "test-rs1", "executionSuccess" : true, "executionStages" : { "stage" : "EOF", "nReturned" : 0, "executionTimeMillisEstimate" : 0, "works" : 1, "advanced" : 0, "needTime" : 0, "needFetch" : 0, "saveState" : 0, "restoreState" : 0, "isEOF" : 1, "invalidates" : 0 } } ] }, "allPlansExecution" : [ { "shardName" : "test-rs1", "allPlans" : [ ] } ] }, "ok" : 1 }
assert: [1] != [0] are not equal : undefined
Error: [1] != [0] are not equal : undefined
    at Error ()
    at doassert (src/mongo/shell/assert.js:11:14)
    at Function.assert.eq (src/mongo/shell/assert.js:38:5)
    at /data/mongo/jstests/sharding/mongos_rs_auth_shard_failure_tolerance.js:155:12
2014-11-26T14:34:11.190-0500 I QUERY Error: [1] != [0] are not equal : undefined
    at Error ()
    at doassert (src/mongo/shell/assert.js:11:14)
    at Function.assert.eq (src/mongo/shell/assert.js:38:5)
    at /data/mongo/jstests/sharding/mongos_rs_auth_shard_failure_tolerance.js:155:12
    at src/mongo/shell/assert.js:13
failed to load: /data/mongo/jstests/sharding/mongos_rs_auth_shard_failure_tolerance.js
m29000| 2014-11-26T14:34:11.190-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
m29000| 2014-11-26T14:34:11.190-0500 I COMMAND [signalProcessingThread] now exiting
m29000| 2014-11-26T14:34:11.190-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
m29000| 2014-11-26T14:34:11.190-0500 I NETWORK [signalProcessingThread] closing listening socket: 28
m29000| 2014-11-26T14:34:11.190-0500 I NETWORK [signalProcessingThread] closing listening socket: 29
m29000| 2014-11-26T14:34:11.190-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-29000.sock
m29000| 2014-11-26T14:34:11.190-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
m29000| 2014-11-26T14:34:11.190-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
m29000| 2014-11-26T14:34:11.190-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down m29000| 2014-11-26T14:34:11.191-0500 I NETWORK [conn13] end connection 10.33.141.202:41490 (7 connections now open) m29000| 2014-11-26T14:34:11.191-0500 I NETWORK [conn8] end connection 10.33.141.202:41468 (7 connections now open) m29000| 2014-11-26T14:34:11.191-0500 I NETWORK [conn14] end connection 10.33.141.202:41494 (7 connections now open) m29000| 2014-11-26T14:34:11.191-0500 I NETWORK [conn6] end connection 10.33.141.202:41460 (7 connections now open) m29000| 2014-11-26T14:34:11.191-0500 I NETWORK [conn5] end connection 10.33.141.202:41459 (7 connections now open) m29000| 2014-11-26T14:34:11.191-0500 I NETWORK [conn7] end connection 10.33.141.202:41467 (7 connections now open) m29000| 2014-11-26T14:34:11.191-0500 I NETWORK [conn2] end connection 10.33.141.202:41455 (6 connections now open) m29000| 2014-11-26T14:34:11.191-0500 I NETWORK [conn1] end connection 127.0.0.1:59874 (6 connections now open) m29000| 2014-11-26T14:34:11.243-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0 m31301| 2014-11-26T14:34:11.283-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:34:11.283-0500 W NETWORK [ReplExecNetThread-7] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:34:11.283-0500 D - [ReplExecNetThread-7] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:34:11.283-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:34:11.283-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; 
Retries left: 1; 1ms have already elapsed m31301| 2014-11-26T14:34:11.283-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:34:11.283-0500 W NETWORK [ReplExecNetThread-2] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:34:11.284-0500 D - [ReplExecNetThread-2] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:34:11.284-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:34:11.284-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 0; 2ms have already elapsed m31301| 2014-11-26T14:34:11.284-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:34:11.284-0500 W NETWORK [ReplExecNetThread-3] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:34:11.284-0500 D - [ReplExecNetThread-3] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:34:11.284-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:34:11.413-0500 D NETWORK [ReplExecNetThread-3] SocketException: remote: 10.33.141.202:31200 error: 9001 socket exception [CLOSED] server [10.33.141.202:31200] m31201| 2014-11-26T14:34:11.413-0500 I NETWORK [ReplExecNetThread-3] DBClientCursor::init call() 
failed m31201| 2014-11-26T14:34:11.413-0500 D - [ReplExecNetThread-3] User Assertion: 10276:DBClientBase::findN: transport error: ip-10-33-141-202:31200 ns: admin.$cmd query: { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } m31201| 2014-11-26T14:34:11.413-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location10276 DBClientBase::findN: transport error: ip-10-33-141-202:31200 ns: admin.$cmd query: { replSetHeartbeat: "test-rs1", pv: 1, v: 1, from: "ip-10-33-141-202:31201", fromId: 1, checkEmpty: false } m31201| 2014-11-26T14:34:11.413-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31200; trying again; Retries left: 1; 0ms have already elapsed m31201| 2014-11-26T14:34:11.414-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31201| 2014-11-26T14:34:11.414-0500 W NETWORK [ReplExecNetThread-7] Failed to connect to 10.33.141.202:31200, reason: errno:111 Connection refused m31201| 2014-11-26T14:34:11.414-0500 D - [ReplExecNetThread-7] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:34:11.414-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location18915 Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:34:11.414-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31200; trying again; Retries left: 0; 1ms have already elapsed m31201| 2014-11-26T14:34:11.415-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31201| 2014-11-26T14:34:11.415-0500 W NETWORK [ReplExecNetThread-4] Failed to connect to 10.33.141.202:31200, reason: errno:111 Connection refused m31201| 2014-11-26T14:34:11.415-0500 D - 
[ReplExecNetThread-4] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:34:11.415-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location18915 Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31101| 2014-11-26T14:34:11.577-0500 I QUERY [conn3] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31100", fromId: 0, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:158 0ms m31100| 2014-11-26T14:34:11.619-0500 I QUERY [conn2] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } ntoreturn:1 keyUpdates:0 reslen:142 0ms m30999| 2014-11-26T14:34:12.191-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends m30999| 2014-11-26T14:34:12.191-0500 I SHARDING [signalProcessingThread] dbexit: rc:0 m31100| 2014-11-26T14:34:12.192-0500 I NETWORK [conn16] end connection 10.33.141.202:38124 (8 connections now open) m31100| 2014-11-26T14:34:12.192-0500 I NETWORK [conn7] end connection 10.33.141.202:38094 (8 connections now open) m31100| 2014-11-26T14:34:12.192-0500 I NETWORK [conn8] end connection 10.33.141.202:38099 (6 connections now open) m31100| 2014-11-26T14:34:12.192-0500 I NETWORK [conn6] end connection 10.33.141.202:38093 (5 connections now open) m31201| 2014-11-26T14:34:12.192-0500 I NETWORK [conn4] end connection 10.33.141.202:53806 (2 connections now open) m31201| 2014-11-26T14:34:12.192-0500 I NETWORK [conn5] end connection 10.33.141.202:53807 (2 connections now open) m31100| 2014-11-26T14:34:13.191-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after 
current cmd ends m31100| 2014-11-26T14:34:13.192-0500 I REPL [signalProcessingThread] Stopping replication applier threads m31301| 2014-11-26T14:34:13.285-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:34:13.285-0500 W NETWORK [ReplExecNetThread-4] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:34:13.285-0500 D - [ReplExecNetThread-4] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:34:13.285-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:34:13.285-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 1; 1ms have already elapsed m31301| 2014-11-26T14:34:13.286-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:34:13.286-0500 W NETWORK [ReplExecNetThread-5] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:34:13.286-0500 D - [ReplExecNetThread-5] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:34:13.286-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:34:13.286-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 0; 2ms have already elapsed m31301| 
2014-11-26T14:34:13.286-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 2014-11-26T14:34:13.287-0500 W NETWORK [ReplExecNetThread-6] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused m31301| 2014-11-26T14:34:13.287-0500 D - [ReplExecNetThread-6] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31301| 2014-11-26T14:34:13.287-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:34:13.415-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31201| 2014-11-26T14:34:13.415-0500 W NETWORK [ReplExecNetThread-5] Failed to connect to 10.33.141.202:31200, reason: errno:111 Connection refused m31201| 2014-11-26T14:34:13.415-0500 D - [ReplExecNetThread-5] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:34:13.415-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location18915 Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:34:13.415-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31200; trying again; Retries left: 1; 0ms have already elapsed m31201| 2014-11-26T14:34:13.416-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31201| 2014-11-26T14:34:13.416-0500 W NETWORK [ReplExecNetThread-6] Failed to connect to 10.33.141.202:31200, reason: errno:111 Connection refused m31201| 2014-11-26T14:34:13.416-0500 D - [ReplExecNetThread-6] User Assertion: 
18915:Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:34:13.416-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location18915 Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:34:13.416-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31200; trying again; Retries left: 0; 1ms have already elapsed m31201| 2014-11-26T14:34:13.416-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31201| 2014-11-26T14:34:13.416-0500 W NETWORK [ReplExecNetThread-0] Failed to connect to 10.33.141.202:31200, reason: errno:111 Connection refused m31201| 2014-11-26T14:34:13.416-0500 D - [ReplExecNetThread-0] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31201| 2014-11-26T14:34:13.416-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31200; Location18915 Failed attempt to connect to ip-10-33-141-202:31200; couldn't connect to server ip-10-33-141-202:31200 (10.33.141.202), connection attempt failed m31100| 2014-11-26T14:34:13.619-0500 I STORAGE [conn2] got request after shutdown() m31101| 2014-11-26T14:34:13.619-0500 D NETWORK [ReplExecNetThread-6] SocketException: remote: 10.33.141.202:31100 error: 9001 socket exception [CLOSED] server [10.33.141.202:31100] m31101| 2014-11-26T14:34:13.619-0500 I NETWORK [ReplExecNetThread-6] DBClientCursor::init call() failed m31101| 2014-11-26T14:34:13.619-0500 D - [ReplExecNetThread-6] User Assertion: 10276:DBClientBase::findN: transport error: ip-10-33-141-202:31100 ns: admin.$cmd query: { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: 
false } m31101| 2014-11-26T14:34:13.620-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31100; Location10276 DBClientBase::findN: transport error: ip-10-33-141-202:31100 ns: admin.$cmd query: { replSetHeartbeat: "test-rs0", pv: 1, v: 1, from: "ip-10-33-141-202:31101", fromId: 1, checkEmpty: false } m31101| 2014-11-26T14:34:13.620-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31100; trying again; Retries left: 1; 1ms have already elapsed m31101| 2014-11-26T14:34:13.620-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31101| 2014-11-26T14:34:13.620-0500 D NETWORK [ReplExecNetThread-0] connected to server ip-10-33-141-202:31100 (10.33.141.202) m31100| 2014-11-26T14:34:13.691-0500 I COMMAND [signalProcessingThread] now exiting m31100| 2014-11-26T14:34:13.691-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... m31100| 2014-11-26T14:34:13.691-0500 I NETWORK [signalProcessingThread] closing listening socket: 7 m31100| 2014-11-26T14:34:13.691-0500 I NETWORK [signalProcessingThread] closing listening socket: 8 m31100| 2014-11-26T14:34:13.691-0500 I NETWORK [signalProcessingThread] closing listening socket: 14 m31100| 2014-11-26T14:34:13.691-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31100.sock m31100| 2014-11-26T14:34:13.691-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... m31100| 2014-11-26T14:34:13.691-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets... 
m31100| 2014-11-26T14:34:13.691-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: fooSharded.barSharded m31100| 2014-11-26T14:34:13.691-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: fooUnsharded.barUnsharded m31100| 2014-11-26T14:34:13.691-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me m31100| 2014-11-26T14:34:13.691-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs m31100| 2014-11-26T14:34:13.691-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log m31100| 2014-11-26T14:34:13.691-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset m31100| 2014-11-26T14:34:13.691-0500 I NETWORK [conn1] end connection 127.0.0.1:47375 (3 connections now open) m31100| 2014-11-26T14:34:13.691-0500 I NETWORK [conn5] end connection 10.33.141.202:38074 (3 connections now open) m31100| 2014-11-26T14:34:13.691-0500 I NETWORK [conn13] end connection 10.33.141.202:38107 (3 connections now open) m31101| 2014-11-26T14:34:13.691-0500 I NETWORK [ReplExecNetThread-0] Socket recv() errno:104 Connection reset by peer 10.33.141.202:31100 m31101| 2014-11-26T14:34:13.691-0500 I NETWORK [ReplExecNetThread-0] SocketException: remote: 10.33.141.202:31100 error: 9001 socket exception [RECV_ERROR] server [10.33.141.202:31100] m31101| 2014-11-26T14:34:13.691-0500 I NETWORK [ReplExecNetThread-0] DBClientCursor::init call() failed m31101| 2014-11-26T14:34:13.691-0500 D - [ReplExecNetThread-0] User Assertion: 10276:DBClientBase::findN: transport error: ip-10-33-141-202:31100 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6E49794261457464486B6354717A4934466F5A4A4539336A47676B6D764D4C4C) } m31101| 2014-11-26T14:34:13.691-0500 D NETWORK [rsBackgroundSync] SocketException: remote: 10.33.141.202:31100 error: 9001 socket exception [CLOSED] server [10.33.141.202:31100] m31101| 
2014-11-26T14:34:13.691-0500 I NETWORK [conn3] end connection 10.33.141.202:53988 (1 connection now open) m31101| 2014-11-26T14:34:13.691-0500 D - [rsBackgroundSync] User Assertion: 10278:dbclient error communicating with server: ip-10-33-141-202:31100 m31101| 2014-11-26T14:34:13.691-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31100; Location10276 DBClientBase::findN: transport error: ip-10-33-141-202:31100 ns: local.$cmd query: { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: BinData(0, 6E2C2C6E3D5F5F73797374656D2C723D6E49794261457464486B6354717A4934466F5A4A4539336A47676B6D764D4C4C) } m31101| 2014-11-26T14:34:13.691-0500 E REPL [rsBackgroundSync] sync producer problem: 10278 dbclient error communicating with server: ip-10-33-141-202:31100 m31101| 2014-11-26T14:34:13.691-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31100; trying again; Retries left: 0; 72ms have already elapsed m31101| 2014-11-26T14:34:13.692-0500 I REPL [ReplicationExecutor] could not find member to sync from m31100| 2014-11-26T14:34:13.691-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog m31100| 2014-11-26T14:34:13.691-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down m31101| 2014-11-26T14:34:13.692-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31101| 2014-11-26T14:34:13.692-0500 W NETWORK [ReplExecNetThread-7] Failed to connect to 10.33.141.202:31100, reason: errno:111 Connection refused m31101| 2014-11-26T14:34:13.692-0500 D - [ReplExecNetThread-7] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31100; couldn't connect to server ip-10-33-141-202:31100 (10.33.141.202), connection attempt failed m31101| 2014-11-26T14:34:13.692-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31100; Location18915 Failed attempt to connect to ip-10-33-141-202:31100; couldn't connect to server ip-10-33-141-202:31100 
(10.33.141.202), connection attempt failed m31100| 2014-11-26T14:34:13.748-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0 m31101| 2014-11-26T14:34:14.191-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends m31101| 2014-11-26T14:34:14.192-0500 I REPL [signalProcessingThread] Stopping replication applier threads m31301| 2014-11-26T14:34:14.561-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:41059 #4 (2 connections now open) m31301| 2014-11-26T14:34:14.562-0500 I QUERY [conn4] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:34:15.079-0500 I COMMAND [signalProcessingThread] now exiting m31101| 2014-11-26T14:34:15.079-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... m31101| 2014-11-26T14:34:15.079-0500 I NETWORK [signalProcessingThread] closing listening socket: 10 m31101| 2014-11-26T14:34:15.079-0500 I NETWORK [signalProcessingThread] closing listening socket: 11 m31101| 2014-11-26T14:34:15.079-0500 I NETWORK [signalProcessingThread] closing listening socket: 17 m31101| 2014-11-26T14:34:15.079-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31101.sock 2014-11-26T14:34:15.080-0500 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() errno:104 Connection reset by peer 10.33.141.202:31101 2014-11-26T14:34:15.080-0500 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 10.33.141.202:31101 error: 9001 socket exception [RECV_ERROR] server [10.33.141.202:31101] 2014-11-26T14:34:15.080-0500 I NETWORK [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed 2014-11-26T14:34:15.080-0500 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1417030454562799 microSec, clearing pool for ip-10-33-141-202:31101 of 0 connections m31101| 2014-11-26T14:34:15.080-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... 
m31101| 2014-11-26T14:34:15.080-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets... m31101| 2014-11-26T14:34:15.080-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: fooSharded.barSharded m31101| 2014-11-26T14:34:15.080-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: fooUnsharded.barUnsharded m31101| 2014-11-26T14:34:15.080-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me m31101| 2014-11-26T14:34:15.080-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs m31101| 2014-11-26T14:34:15.080-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.replset.minvalid m31101| 2014-11-26T14:34:15.080-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log m31101| 2014-11-26T14:34:15.080-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset m31101| 2014-11-26T14:34:15.080-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog m31101| 2014-11-26T14:34:15.080-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down m31101| 2014-11-26T14:34:15.080-0500 I NETWORK [conn1] end connection 127.0.0.1:36342 (0 connections now open) m31201| 2014-11-26T14:34:15.080-0500 I NETWORK [initandlisten] connection accepted from 10.33.141.202:53823 #6 (2 connections now open) m31201| 2014-11-26T14:34:15.081-0500 I QUERY [conn6] command admin.$cmd command: isMaster { ismaster: 1 } ntoreturn:1 keyUpdates:0 reslen:341 0ms m31101| 2014-11-26T14:34:15.137-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0 m31201| 2014-11-26T14:34:15.191-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends m31201| 2014-11-26T14:34:15.192-0500 I REPL [signalProcessingThread] Stopping replication applier threads m31301| 2014-11-26T14:34:15.287-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG m31301| 
2014-11-26T14:34:15.288-0500 W NETWORK [ReplExecNetThread-1] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused
m31301| 2014-11-26T14:34:15.288-0500 D - [ReplExecNetThread-1] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed
m31301| 2014-11-26T14:34:15.288-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed
m31301| 2014-11-26T14:34:15.288-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 1; 1ms have already elapsed
m31301| 2014-11-26T14:34:15.288-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31301| 2014-11-26T14:34:15.288-0500 W NETWORK [ReplExecNetThread-0] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused
m31301| 2014-11-26T14:34:15.288-0500 D - [ReplExecNetThread-0] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed
m31301| 2014-11-26T14:34:15.288-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed
m31301| 2014-11-26T14:34:15.288-0500 D REPL [ReplicationExecutor] Bad heartbeat response from ip-10-33-141-202:31300; trying again; Retries left: 0; 1ms have already elapsed
m31301| 2014-11-26T14:34:15.289-0500 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
m31301| 2014-11-26T14:34:15.289-0500 W NETWORK [ReplExecNetThread-7] Failed to connect to 10.33.141.202:31300, reason: errno:111 Connection refused
m31301| 2014-11-26T14:34:15.289-0500 D - [ReplExecNetThread-7] User Assertion: 18915:Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed
m31301| 2014-11-26T14:34:15.289-0500 D REPL [ReplicationExecutor] Error in heartbeat request to ip-10-33-141-202:31300; Location18915 Failed attempt to connect to ip-10-33-141-202:31300; couldn't connect to server ip-10-33-141-202:31300 (10.33.141.202), connection attempt failed
m31201| 2014-11-26T14:34:15.575-0500 I COMMAND [signalProcessingThread] now exiting
m31201| 2014-11-26T14:34:15.575-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
m31201| 2014-11-26T14:34:15.575-0500 I NETWORK [signalProcessingThread] closing listening socket: 16
m31201| 2014-11-26T14:34:15.575-0500 I NETWORK [signalProcessingThread] closing listening socket: 17
m31201| 2014-11-26T14:34:15.575-0500 I NETWORK [signalProcessingThread] closing listening socket: 23
m31201| 2014-11-26T14:34:15.575-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31201.sock
m31201| 2014-11-26T14:34:15.575-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
m31201| 2014-11-26T14:34:15.575-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
m31201| 2014-11-26T14:34:15.575-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me
m31201| 2014-11-26T14:34:15.575-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs
m31201| 2014-11-26T14:34:15.575-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.replset.minvalid
m31201| 2014-11-26T14:34:15.575-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log
m31201| 2014-11-26T14:34:15.575-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset
m31201| 2014-11-26T14:34:15.575-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog
m31201| 2014-11-26T14:34:15.575-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
m31201| 2014-11-26T14:34:15.575-0500 I NETWORK [conn1] end connection 127.0.0.1:44003 (1 connection now open)
m31201| 2014-11-26T14:34:15.575-0500 I NETWORK [conn6] end connection 10.33.141.202:53823 (1 connection now open)
m31201| 2014-11-26T14:34:15.617-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0
m31301| 2014-11-26T14:34:16.192-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
m31301| 2014-11-26T14:34:16.192-0500 I REPL [signalProcessingThread] Stopping replication applier threads
m31301| 2014-11-26T14:34:16.357-0500 I COMMAND [signalProcessingThread] now exiting
m31301| 2014-11-26T14:34:16.357-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
m31301| 2014-11-26T14:34:16.357-0500 I NETWORK [signalProcessingThread] closing listening socket: 22
m31301| 2014-11-26T14:34:16.357-0500 I NETWORK [signalProcessingThread] closing listening socket: 23
m31301| 2014-11-26T14:34:16.357-0500 I NETWORK [signalProcessingThread] closing listening socket: 29
m31301| 2014-11-26T14:34:16.357-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-31301.sock
m31301| 2014-11-26T14:34:16.357-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
m31301| 2014-11-26T14:34:16.357-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
m31301| 2014-11-26T14:34:16.357-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.me
m31301| 2014-11-26T14:34:16.357-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.oplog.rs
m31301| 2014-11-26T14:34:16.357-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.replset.minvalid
m31301| 2014-11-26T14:34:16.357-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.startup_log
m31301| 2014-11-26T14:34:16.357-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: local.system.replset
m31301| 2014-11-26T14:34:16.357-0500 D STORAGE [signalProcessingThread] ~WiredTigerRecordStore for: _mdb_catalog
m31301| 2014-11-26T14:34:16.357-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
m31301| 2014-11-26T14:34:16.357-0500 I NETWORK [conn1] end connection 127.0.0.1:49882 (1 connection now open)
m31301| 2014-11-26T14:34:16.357-0500 I NETWORK [conn4] end connection 10.33.141.202:41059 (1 connection now open)
m31301| 2014-11-26T14:34:16.401-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0
32.1219 seconds
2014-11-26T14:34:17.194-0500 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35501 #3 (1 connection now open)
2014-11-26T14:34:17.194-0500 I NETWORK [conn3] end connection 127.0.0.1:35501 (0 connections now open)
2014-11-26T14:34:17.194-0500 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2014-11-26T14:34:17.194-0500 I COMMAND [signalProcessingThread] now exiting
2014-11-26T14:34:17.194-0500 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2014-11-26T14:34:17.194-0500 I NETWORK [signalProcessingThread] closing listening socket: 4
2014-11-26T14:34:17.194-0500 I NETWORK [signalProcessingThread] closing listening socket: 5
2014-11-26T14:34:17.194-0500 I NETWORK [signalProcessingThread] closing listening socket: 11
2014-11-26T14:34:17.194-0500 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27999.sock
2014-11-26T14:34:17.194-0500 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
2014-11-26T14:34:17.194-0500 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
2014-11-26T14:34:17.194-0500 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2014-11-26T14:34:17.229-0500 I COMMAND [signalProcessingThread] dbexit: rc: 0
test /data/mongo/jstests/sharding/mongos_rs_auth_shard_failure_tolerance.js exited with status 253
0 tests succeeded
The following tests failed (with exit code):
/data/mongo/jstests/sharding/mongos_rs_auth_shard_failure_tolerance.js 253