[js_test:auth] 2015-10-13T18:46:51.226-0400 Starting JSTest jstests/sharding/auth.js...
[js_test:auth] 2015-10-13T18:46:51.235-0400 JSTest jstests/sharding/auth.js started with pid 11365.
[js_test:auth] 2015-10-13T18:46:51.248-0400 MongoDB shell version: 3.1.10-pre-
[js_test:auth] 2015-10-13T18:46:51.292-0400 /data/db/job1/mongorunner/
[js_test:auth] 2015-10-13T18:46:51.301-0400 ReplSetTest Starting Set
[js_test:auth] 2015-10-13T18:46:51.301-0400 ReplSetTest n is : 0
[js_test:auth] 2015-10-13T18:46:51.301-0400 ReplSetTest n: 0 ports: [ 20260, 20261, 20262 ] 20260 number
[js_test:auth] 2015-10-13T18:46:51.302-0400 {
[js_test:auth] 2015-10-13T18:46:51.302-0400 "useHostName" : true,
[js_test:auth] 2015-10-13T18:46:51.302-0400 "oplogSize" : 40,
[js_test:auth] 2015-10-13T18:46:51.302-0400 "keyFile" : "jstests/libs/key1",
[js_test:auth] 2015-10-13T18:46:51.302-0400 "port" : 20260,
[js_test:auth] 2015-10-13T18:46:51.302-0400 "noprealloc" : "",
[js_test:auth] 2015-10-13T18:46:51.302-0400 "smallfiles" : "",
[js_test:auth] 2015-10-13T18:46:51.302-0400 "replSet" : "auth-configRS",
[js_test:auth] 2015-10-13T18:46:51.302-0400 "dbpath" : "$set-$node",
[js_test:auth] 2015-10-13T18:46:51.302-0400 "pathOpts" : {
[js_test:auth] 2015-10-13T18:46:51.302-0400 "testName" : "auth",
[js_test:auth] 2015-10-13T18:46:51.302-0400 "node" : 0,
[js_test:auth] 2015-10-13T18:46:51.303-0400 "set" : "auth-configRS"
[js_test:auth] 2015-10-13T18:46:51.303-0400 },
[js_test:auth] 2015-10-13T18:46:51.303-0400 "journal" : "",
[js_test:auth] 2015-10-13T18:46:51.303-0400 "configsvr" : "",
[js_test:auth] 2015-10-13T18:46:51.303-0400 "noJournalPrealloc" : undefined,
[js_test:auth] 2015-10-13T18:46:51.303-0400 "storageEngine" : "wiredTiger",
[js_test:auth] 2015-10-13T18:46:51.303-0400 "restart" : undefined
[js_test:auth] 2015-10-13T18:46:51.303-0400 }
[js_test:auth] 2015-10-13T18:46:51.303-0400 ReplSetTest Starting....
[js_test:auth] 2015-10-13T18:46:51.303-0400 Resetting db path '/data/db/job1/mongorunner/auth-configRS-0'
[js_test:auth] 2015-10-13T18:46:51.304-0400 2015-10-13T18:46:51.304-0400 I -        [thread1] shell: started program (sh11398): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 20260 --noprealloc --smallfiles --replSet auth-configRS --dbpath /data/db/job1/mongorunner/auth-configRS-0 --journal --configsvr --storageEngine wiredTiger --nopreallocj --setParameter enableTestCommands=1
[js_test:auth] 2015-10-13T18:46:51.305-0400 2015-10-13T18:46:51.305-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20260, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:51.321-0400 c20260| note: noprealloc may hurt performance in many applications
[js_test:auth] 2015-10-13T18:46:51.369-0400 c20260| 2015-10-13T18:46:51.369-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
[js_test:auth] 2015-10-13T18:46:51.506-0400 2015-10-13T18:46:51.505-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20260, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:51.694-0400 c20260| 2015-10-13T18:46:51.694-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[js_test:auth] 2015-10-13T18:46:51.694-0400 c20260| 2015-10-13T18:46:51.694-0400 I CONTROL  [initandlisten] MongoDB starting : pid=11398 port=20260 dbpath=/data/db/job1/mongorunner/auth-configRS-0 64-bit host=ubuntu
[js_test:auth] 2015-10-13T18:46:51.694-0400 c20260| 2015-10-13T18:46:51.694-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:51.694-0400 c20260| 2015-10-13T18:46:51.694-0400 I CONTROL  [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB.
[js_test:auth] 2015-10-13T18:46:51.695-0400 c20260| 2015-10-13T18:46:51.694-0400 I CONTROL  [initandlisten] **       Not recommended for production.
[js_test:auth] 2015-10-13T18:46:51.695-0400 c20260| 2015-10-13T18:46:51.694-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:51.696-0400 c20260| 2015-10-13T18:46:51.695-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:51.696-0400 c20260| 2015-10-13T18:46:51.695-0400 I CONTROL  [initandlisten] ** WARNING: You are running on a NUMA machine.
[js_test:auth] 2015-10-13T18:46:51.696-0400 c20260| 2015-10-13T18:46:51.695-0400 I CONTROL  [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
[js_test:auth] 2015-10-13T18:46:51.696-0400 c20260| 2015-10-13T18:46:51.695-0400 I CONTROL  [initandlisten] **              numactl --interleave=all mongod [other options]
[js_test:auth] 2015-10-13T18:46:51.696-0400 c20260| 2015-10-13T18:46:51.695-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:51.697-0400 c20260| 2015-10-13T18:46:51.695-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[js_test:auth] 2015-10-13T18:46:51.697-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
[js_test:auth] 2015-10-13T18:46:51.697-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:51.697-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[js_test:auth] 2015-10-13T18:46:51.697-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
[js_test:auth] 2015-10-13T18:46:51.697-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:51.697-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten] db version v3.1.10-pre-
[js_test:auth] 2015-10-13T18:46:51.698-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1
[js_test:auth] 2015-10-13T18:46:51.698-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
[js_test:auth] 2015-10-13T18:46:51.698-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten] allocator: tcmalloc
[js_test:auth] 2015-10-13T18:46:51.698-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten] modules: subscription
[js_test:auth] 2015-10-13T18:46:51.698-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten] build environment:
[js_test:auth] 2015-10-13T18:46:51.698-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten]     distarch: x86_64
[js_test:auth] 2015-10-13T18:46:51.698-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[js_test:auth] 2015-10-13T18:46:51.698-0400 c20260| 2015-10-13T18:46:51.696-0400 I CONTROL  [initandlisten] options: { net: { port: 20260 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "auth-configRS" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db/job1/mongorunner/auth-configRS-0", engine: "wiredTiger", journal: { enabled: true }, mmapv1: { preallocDataFiles: false, smallFiles: true } } }
[js_test:auth] 2015-10-13T18:46:51.706-0400 2015-10-13T18:46:51.706-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20260, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:51.822-0400 c20260| 2015-10-13T18:46:51.822-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[js_test:auth] 2015-10-13T18:46:51.822-0400 c20260| 2015-10-13T18:46:51.822-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[js_test:auth] 2015-10-13T18:46:51.822-0400 c20260| 2015-10-13T18:46:51.822-0400 I FTDC     [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/auth-configRS-0/diagnostic.data'
[js_test:auth] 2015-10-13T18:46:51.907-0400 2015-10-13T18:46:51.906-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20260, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:51.951-0400 c20260| 2015-10-13T18:46:51.951-0400 I NETWORK  [initandlisten] waiting for connections on port 20260
[js_test:auth] 2015-10-13T18:46:52.107-0400 c20260| 2015-10-13T18:46:52.107-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:55071 #1 (1 connection now open)
[js_test:auth] 2015-10-13T18:46:52.107-0400 c20260| 2015-10-13T18:46:52.107-0400 I ACCESS   [conn1] note: no users configured in admin.system.users, allowing localhost access
[js_test:auth] 2015-10-13T18:46:52.108-0400 [ connection to ubuntu:20260 ]
[js_test:auth] 2015-10-13T18:46:52.108-0400 ReplSetTest n is : 1
[js_test:auth] 2015-10-13T18:46:52.109-0400 ReplSetTest n: 1 ports: [ 20260, 20261, 20262 ] 20261 number
[js_test:auth] 2015-10-13T18:46:52.109-0400 {
[js_test:auth] 2015-10-13T18:46:52.109-0400 "useHostName" : true,
[js_test:auth] 2015-10-13T18:46:52.109-0400 "oplogSize" : 40,
[js_test:auth] 2015-10-13T18:46:52.109-0400 "keyFile" : "jstests/libs/key1",
[js_test:auth] 2015-10-13T18:46:52.109-0400 "port" : 20261,
[js_test:auth] 2015-10-13T18:46:52.109-0400 "noprealloc" : "",
[js_test:auth] 2015-10-13T18:46:52.110-0400 "smallfiles" : "",
[js_test:auth] 2015-10-13T18:46:52.110-0400 "replSet" : "auth-configRS",
[js_test:auth] 2015-10-13T18:46:52.110-0400 "dbpath" : "$set-$node",
[js_test:auth] 2015-10-13T18:46:52.110-0400 "pathOpts" : {
[js_test:auth] 2015-10-13T18:46:52.110-0400 "testName" : "auth",
[js_test:auth] 2015-10-13T18:46:52.110-0400 "node" : 1,
[js_test:auth] 2015-10-13T18:46:52.110-0400 "set" : "auth-configRS"
[js_test:auth] 2015-10-13T18:46:52.110-0400 },
[js_test:auth] 2015-10-13T18:46:52.110-0400 "journal" : "",
[js_test:auth] 2015-10-13T18:46:52.110-0400 "configsvr" : "",
[js_test:auth] 2015-10-13T18:46:52.110-0400 "noJournalPrealloc" : undefined,
[js_test:auth] 2015-10-13T18:46:52.110-0400 "storageEngine" : "wiredTiger",
[js_test:auth] 2015-10-13T18:46:52.110-0400 "restart" : undefined
[js_test:auth] 2015-10-13T18:46:52.110-0400 }
[js_test:auth] 2015-10-13T18:46:52.110-0400 ReplSetTest Starting....
[js_test:auth] 2015-10-13T18:46:52.110-0400 Resetting db path '/data/db/job1/mongorunner/auth-configRS-1'
[js_test:auth] 2015-10-13T18:46:52.113-0400 2015-10-13T18:46:52.113-0400 I -        [thread1] shell: started program (sh11592): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 20261 --noprealloc --smallfiles --replSet auth-configRS --dbpath /data/db/job1/mongorunner/auth-configRS-1 --journal --configsvr --storageEngine wiredTiger --nopreallocj --setParameter enableTestCommands=1
[js_test:auth] 2015-10-13T18:46:52.114-0400 2015-10-13T18:46:52.114-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20261, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:52.129-0400 c20261| note: noprealloc may hurt performance in many applications
[js_test:auth] 2015-10-13T18:46:52.177-0400 c20261| 2015-10-13T18:46:52.177-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
[js_test:auth] 2015-10-13T18:46:52.314-0400 2015-10-13T18:46:52.314-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20261, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:52.515-0400 2015-10-13T18:46:52.515-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20261, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:52.588-0400 c20261| 2015-10-13T18:46:52.588-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[js_test:auth] 2015-10-13T18:46:52.588-0400 c20261| 2015-10-13T18:46:52.588-0400 I CONTROL  [initandlisten] MongoDB starting : pid=11592 port=20261 dbpath=/data/db/job1/mongorunner/auth-configRS-1 64-bit host=ubuntu
[js_test:auth] 2015-10-13T18:46:52.589-0400 c20261| 2015-10-13T18:46:52.588-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:52.589-0400 c20261| 2015-10-13T18:46:52.588-0400 I CONTROL  [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB.
[js_test:auth] 2015-10-13T18:46:52.589-0400 c20261| 2015-10-13T18:46:52.588-0400 I CONTROL  [initandlisten] **       Not recommended for production.
[js_test:auth] 2015-10-13T18:46:52.589-0400 c20261| 2015-10-13T18:46:52.588-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:52.590-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:52.590-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] ** WARNING: You are running on a NUMA machine.
[js_test:auth] 2015-10-13T18:46:52.590-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
[js_test:auth] 2015-10-13T18:46:52.590-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] **              numactl --interleave=all mongod [other options]
[js_test:auth] 2015-10-13T18:46:52.590-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:52.590-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[js_test:auth] 2015-10-13T18:46:52.590-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
[js_test:auth] 2015-10-13T18:46:52.590-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:52.590-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[js_test:auth] 2015-10-13T18:46:52.590-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
[js_test:auth] 2015-10-13T18:46:52.590-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:52.591-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] db version v3.1.10-pre-
[js_test:auth] 2015-10-13T18:46:52.591-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1
[js_test:auth] 2015-10-13T18:46:52.591-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
[js_test:auth] 2015-10-13T18:46:52.591-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] allocator: tcmalloc
[js_test:auth] 2015-10-13T18:46:52.591-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] modules: subscription
[js_test:auth] 2015-10-13T18:46:52.591-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] build environment:
[js_test:auth] 2015-10-13T18:46:52.591-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten]     distarch: x86_64
[js_test:auth] 2015-10-13T18:46:52.591-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[js_test:auth] 2015-10-13T18:46:52.592-0400 c20261| 2015-10-13T18:46:52.590-0400 I CONTROL  [initandlisten] options: { net: { port: 20261 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "auth-configRS" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db/job1/mongorunner/auth-configRS-1", engine: "wiredTiger", journal: { enabled: true }, mmapv1: { preallocDataFiles: false, smallFiles: true } } }
[js_test:auth] 2015-10-13T18:46:52.715-0400 2015-10-13T18:46:52.715-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20261, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:52.739-0400 c20261| 2015-10-13T18:46:52.739-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[js_test:auth] 2015-10-13T18:46:52.739-0400 c20261| 2015-10-13T18:46:52.739-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[js_test:auth] 2015-10-13T18:46:52.739-0400 c20261| 2015-10-13T18:46:52.739-0400 I FTDC     [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/auth-configRS-1/diagnostic.data'
[js_test:auth] 2015-10-13T18:46:52.854-0400 c20261| 2015-10-13T18:46:52.854-0400 I NETWORK  [initandlisten] waiting for connections on port 20261
[js_test:auth] 2015-10-13T18:46:52.916-0400 c20261| 2015-10-13T18:46:52.916-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:41341 #1 (1 connection now open)
[js_test:auth] 2015-10-13T18:46:52.916-0400 c20261| 2015-10-13T18:46:52.916-0400 I ACCESS   [conn1] note: no users configured in admin.system.users, allowing localhost access
[js_test:auth] 2015-10-13T18:46:52.917-0400 [ connection to ubuntu:20260, connection to ubuntu:20261 ]
[js_test:auth] 2015-10-13T18:46:52.917-0400 ReplSetTest n is : 2
[js_test:auth] 2015-10-13T18:46:52.917-0400 ReplSetTest n: 2 ports: [ 20260, 20261, 20262 ] 20262 number
[js_test:auth] 2015-10-13T18:46:52.918-0400 {
[js_test:auth] 2015-10-13T18:46:52.918-0400 "useHostName" : true,
[js_test:auth] 2015-10-13T18:46:52.918-0400 "oplogSize" : 40,
[js_test:auth] 2015-10-13T18:46:52.918-0400 "keyFile" : "jstests/libs/key1",
[js_test:auth] 2015-10-13T18:46:52.918-0400 "port" : 20262,
[js_test:auth] 2015-10-13T18:46:52.918-0400 "noprealloc" : "",
[js_test:auth] 2015-10-13T18:46:52.918-0400 "smallfiles" : "",
[js_test:auth] 2015-10-13T18:46:52.918-0400 "replSet" : "auth-configRS",
[js_test:auth] 2015-10-13T18:46:52.918-0400 "dbpath" : "$set-$node",
[js_test:auth] 2015-10-13T18:46:52.918-0400 "pathOpts" : {
[js_test:auth] 2015-10-13T18:46:52.919-0400 "testName" : "auth",
[js_test:auth] 2015-10-13T18:46:52.919-0400 "node" : 2,
[js_test:auth] 2015-10-13T18:46:52.919-0400 "set" : "auth-configRS"
[js_test:auth] 2015-10-13T18:46:52.919-0400 },
[js_test:auth] 2015-10-13T18:46:52.919-0400 "journal" : "",
[js_test:auth] 2015-10-13T18:46:52.919-0400 "configsvr" : "",
[js_test:auth] 2015-10-13T18:46:52.919-0400 "noJournalPrealloc" : undefined,
[js_test:auth] 2015-10-13T18:46:52.919-0400 "storageEngine" : "wiredTiger",
[js_test:auth] 2015-10-13T18:46:52.920-0400 "restart" : undefined
[js_test:auth] 2015-10-13T18:46:52.920-0400 }
[js_test:auth] 2015-10-13T18:46:52.920-0400 ReplSetTest Starting....
[js_test:auth] 2015-10-13T18:46:52.920-0400 Resetting db path '/data/db/job1/mongorunner/auth-configRS-2'
[js_test:auth] 2015-10-13T18:46:52.925-0400 2015-10-13T18:46:52.925-0400 I -        [thread1] shell: started program (sh11853): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 20262 --noprealloc --smallfiles --replSet auth-configRS --dbpath /data/db/job1/mongorunner/auth-configRS-2 --journal --configsvr --storageEngine wiredTiger --nopreallocj --setParameter enableTestCommands=1
[js_test:auth] 2015-10-13T18:46:52.925-0400 2015-10-13T18:46:52.925-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20262, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:52.940-0400 c20262| note: noprealloc may hurt performance in many applications
[js_test:auth] 2015-10-13T18:46:52.989-0400 c20262| 2015-10-13T18:46:52.989-0400 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
[js_test:auth] 2015-10-13T18:46:53.126-0400 2015-10-13T18:46:53.126-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20262, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:53.326-0400 2015-10-13T18:46:53.326-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20262, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:53.473-0400 c20262| 2015-10-13T18:46:53.473-0400 W STORAGE  [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[js_test:auth] 2015-10-13T18:46:53.473-0400 c20262| 2015-10-13T18:46:53.473-0400 I CONTROL  [initandlisten] MongoDB starting : pid=11853 port=20262 dbpath=/data/db/job1/mongorunner/auth-configRS-2 64-bit host=ubuntu
[js_test:auth] 2015-10-13T18:46:53.474-0400 c20262| 2015-10-13T18:46:53.473-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:53.474-0400 c20262| 2015-10-13T18:46:53.473-0400 I CONTROL  [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB.
[js_test:auth] 2015-10-13T18:46:53.474-0400 c20262| 2015-10-13T18:46:53.473-0400 I CONTROL  [initandlisten] **       Not recommended for production.
[js_test:auth] 2015-10-13T18:46:53.475-0400 c20262| 2015-10-13T18:46:53.473-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:53.475-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:53.475-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] ** WARNING: You are running on a NUMA machine.
[js_test:auth] 2015-10-13T18:46:53.475-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] **          We suggest launching mongod like this to avoid performance problems:
[js_test:auth] 2015-10-13T18:46:53.475-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] **              numactl --interleave=all mongod [other options]
[js_test:auth] 2015-10-13T18:46:53.476-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:53.476-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[js_test:auth] 2015-10-13T18:46:53.476-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
[js_test:auth] 2015-10-13T18:46:53.476-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:53.476-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[js_test:auth] 2015-10-13T18:46:53.477-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
[js_test:auth] 2015-10-13T18:46:53.477-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten]
[js_test:auth] 2015-10-13T18:46:53.477-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] db version v3.1.10-pre-
[js_test:auth] 2015-10-13T18:46:53.477-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1
[js_test:auth] 2015-10-13T18:46:53.478-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
[js_test:auth] 2015-10-13T18:46:53.478-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] allocator: tcmalloc
[js_test:auth] 2015-10-13T18:46:53.478-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] modules: subscription
[js_test:auth] 2015-10-13T18:46:53.478-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] build environment:
[js_test:auth] 2015-10-13T18:46:53.479-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten]     distarch: x86_64
[js_test:auth] 2015-10-13T18:46:53.479-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten]     target_arch: x86_64
[js_test:auth] 2015-10-13T18:46:53.479-0400 c20262| 2015-10-13T18:46:53.475-0400 I CONTROL  [initandlisten] options: { net: { port: 20262 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "auth-configRS" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db/job1/mongorunner/auth-configRS-2", engine: "wiredTiger", journal: { enabled: true }, mmapv1: { preallocDataFiles: false, smallFiles: true } } }
[js_test:auth] 2015-10-13T18:46:53.527-0400 2015-10-13T18:46:53.527-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20262, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:46:53.601-0400 c20262| 2015-10-13T18:46:53.601-0400 I REPL     [initandlisten] Did not find local voted for document at startup;  NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[js_test:auth] 2015-10-13T18:46:53.601-0400 c20262| 2015-10-13T18:46:53.601-0400 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument Did not find replica set configuration document in local.system.replset
[js_test:auth] 2015-10-13T18:46:53.602-0400 c20262| 2015-10-13T18:46:53.602-0400 I FTDC     [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/auth-configRS-2/diagnostic.data'
[js_test:auth] 2015-10-13T18:46:53.712-0400 c20262| 2015-10-13T18:46:53.712-0400 I NETWORK  [initandlisten] waiting for connections on port 20262
[js_test:auth] 2015-10-13T18:46:53.727-0400 c20262| 2015-10-13T18:46:53.727-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:55904 #1 (1 connection now open)
[js_test:auth] 2015-10-13T18:46:53.728-0400 c20262| 2015-10-13T18:46:53.728-0400 I ACCESS   [conn1] note: no users configured in admin.system.users, allowing localhost access
[js_test:auth] 2015-10-13T18:46:53.728-0400 [
[js_test:auth] 2015-10-13T18:46:53.728-0400 connection to ubuntu:20260,
[js_test:auth] 2015-10-13T18:46:53.728-0400 connection to ubuntu:20261,
[js_test:auth] 2015-10-13T18:46:53.728-0400 connection to ubuntu:20262
[js_test:auth] 2015-10-13T18:46:53.728-0400 ]
[js_test:auth] 2015-10-13T18:46:53.729-0400 {
[js_test:auth] 2015-10-13T18:46:53.729-0400 "replSetInitiate" : {
[js_test:auth] 2015-10-13T18:46:53.729-0400 "_id" : "auth-configRS",
[js_test:auth] 2015-10-13T18:46:53.729-0400 "members" : [
[js_test:auth] 2015-10-13T18:46:53.729-0400 {
[js_test:auth] 2015-10-13T18:46:53.729-0400 "_id" : 0,
[js_test:auth] 2015-10-13T18:46:53.729-0400 "host" : "ubuntu:20260"
[js_test:auth] 2015-10-13T18:46:53.729-0400 },
[js_test:auth] 2015-10-13T18:46:53.729-0400 {
[js_test:auth] 2015-10-13T18:46:53.729-0400 "_id" : 1,
[js_test:auth] 2015-10-13T18:46:53.729-0400 "host" : "ubuntu:20261"
[js_test:auth] 2015-10-13T18:46:53.729-0400 },
[js_test:auth] 2015-10-13T18:46:53.729-0400 {
[js_test:auth] 2015-10-13T18:46:53.729-0400 "_id" : 2,
[js_test:auth] 2015-10-13T18:46:53.729-0400 "host" : "ubuntu:20262"
[js_test:auth] 2015-10-13T18:46:53.729-0400 }
[js_test:auth] 2015-10-13T18:46:53.729-0400 ],
[js_test:auth] 2015-10-13T18:46:53.730-0400 "configsvr" : true,
[js_test:auth] 2015-10-13T18:46:53.730-0400 "settings" : {
[js_test:auth] 2015-10-13T18:46:53.730-0400
[js_test:auth] 2015-10-13T18:46:53.730-0400 }
[js_test:auth] 2015-10-13T18:46:53.730-0400 }
[js_test:auth] 2015-10-13T18:46:53.730-0400 }
[js_test:auth] 2015-10-13T18:46:53.730-0400 c20260| 2015-10-13T18:46:53.729-0400 I REPL     [conn1] replSetInitiate admin command received from client
[js_test:auth] 2015-10-13T18:46:53.730-0400 c20260| 2015-10-13T18:46:53.730-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:48793 #2 (2 connections now open)
[js_test:auth] 2015-10-13T18:46:53.746-0400 c20260| 2015-10-13T18:46:53.746-0400 I ACCESS   [conn2] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:53.746-0400 c20260| 2015-10-13T18:46:53.746-0400 I NETWORK  [conn2] end connection 127.0.0.1:48793 (1 connection now open)
[js_test:auth] 2015-10-13T18:46:53.746-0400 c20261| 2015-10-13T18:46:53.746-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:45921 #2 (2 connections now open)
[js_test:auth] 2015-10-13T18:46:53.763-0400 c20261| 2015-10-13T18:46:53.762-0400 I ACCESS   [conn2] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:53.763-0400 c20261| 2015-10-13T18:46:53.763-0400 I NETWORK  [conn2] end connection 127.0.0.1:45921 (1 connection now open)
[js_test:auth] 2015-10-13T18:46:53.763-0400 c20262| 2015-10-13T18:46:53.763-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:45915 #2 (2 connections now open)
[js_test:auth] 2015-10-13T18:46:53.780-0400 c20262| 2015-10-13T18:46:53.780-0400 I ACCESS   [conn2] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:53.781-0400 c20260| 2015-10-13T18:46:53.780-0400 I REPL     [conn1] replSetInitiate config object with 3 members parses ok
[js_test:auth] 2015-10-13T18:46:53.781-0400 c20262| 2015-10-13T18:46:53.781-0400 I NETWORK  [conn2] end connection 127.0.0.1:45915 (1 connection now open)
[js_test:auth] 2015-10-13T18:46:53.782-0400 c20261| 2015-10-13T18:46:53.781-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:45928 #3 (2 connections now open)
[js_test:auth] 2015-10-13T18:46:53.782-0400 c20262| 2015-10-13T18:46:53.781-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:45921 #3 (2 connections now open)
[js_test:auth] 2015-10-13T18:46:53.810-0400 c20262| 2015-10-13T18:46:53.810-0400 I ACCESS   [conn3] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:53.810-0400 c20261| 2015-10-13T18:46:53.810-0400 I ACCESS   [conn3] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:53.811-0400 c20260| 2015-10-13T18:46:53.810-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20262
[js_test:auth] 2015-10-13T18:46:53.811-0400 c20260| 2015-10-13T18:46:53.810-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20261
[js_test:auth] 2015-10-13T18:46:53.811-0400 c20260| 2015-10-13T18:46:53.811-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:48804 #3 (2 connections now open)
[js_test:auth] 2015-10-13T18:46:53.811-0400 c20260| 2015-10-13T18:46:53.811-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:48805 #4 (3 connections now open)
[js_test:auth] 2015-10-13T18:46:53.831-0400 c20260| 2015-10-13T18:46:53.831-0400 I ACCESS   [conn3] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:53.831-0400 c20262| 2015-10-13T18:46:53.831-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20260
[js_test:auth] 2015-10-13T18:46:53.831-0400 c20260| 2015-10-13T18:46:53.831-0400 I ACCESS   [conn4] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:53.832-0400 c20261| 2015-10-13T18:46:53.831-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20260
[js_test:auth] 2015-10-13T18:46:53.901-0400 c20260| 2015-10-13T18:46:53.900-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "auth-configRS", version: 1, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20260", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20261", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20262", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[js_test:auth] 2015-10-13T18:46:53.901-0400 c20260| 2015-10-13T18:46:53.900-0400 I REPL     [ReplicationExecutor] This node is ubuntu:20260 in the config
[js_test:auth] 2015-10-13T18:46:53.901-0400 c20260| 2015-10-13T18:46:53.900-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[js_test:auth] 2015-10-13T18:46:53.901-0400 c20260| 2015-10-13T18:46:53.900-0400 I REPL     [conn1] ******
[js_test:auth] 2015-10-13T18:46:53.901-0400 c20260| 2015-10-13T18:46:53.900-0400 I REPL     [conn1] creating replication oplog of size: 40MB...
[js_test:auth] 2015-10-13T18:46:53.902-0400 c20260| 2015-10-13T18:46:53.900-0400 I REPL     [ReplicationExecutor] Member ubuntu:20261 is now in state STARTUP
[js_test:auth] 2015-10-13T18:46:53.902-0400 c20260| 2015-10-13T18:46:53.901-0400 I REPL     [ReplicationExecutor] Member ubuntu:20262 is now in state STARTUP
[js_test:auth] 2015-10-13T18:46:53.964-0400 c20260| 2015-10-13T18:46:53.964-0400 I STORAGE  [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs
[js_test:auth] 2015-10-13T18:46:53.964-0400 c20260| 2015-10-13T18:46:53.964-0400 I STORAGE  [conn1] Scanning the oplog to determine where to place markers for when to truncate
[js_test:auth] 2015-10-13T18:46:54.301-0400 c20260| 2015-10-13T18:46:54.301-0400 I REPL     [conn1] ******
[js_test:auth] 2015-10-13T18:46:54.303-0400 c20260| 2015-10-13T18:46:54.303-0400 I REPL     [conn1] Starting replication applier threads
[js_test:auth] 2015-10-13T18:46:54.303-0400 c20260| 2015-10-13T18:46:54.303-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[js_test:auth] 2015-10-13T18:46:54.303-0400 c20260| 2015-10-13T18:46:54.303-0400 I COMMAND  [conn1] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "auth-configRS", members: [ { _id: 0.0, host: "ubuntu:20260" }, { _id: 1.0, host: "ubuntu:20261" }, { _id: 2.0, host: "ubuntu:20262" } ], configsvr: true, settings: {} } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:102 locks:{ Global: { acquireCount: { r: 8, w: 4, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 1042 } }, Database: { acquireCount: { r: 1, w: 2, W: 2 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 574ms
[js_test:auth] 2015-10-13T18:46:54.304-0400 c20260| 2015-10-13T18:46:54.303-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[js_test:auth] 2015-10-13T18:46:55.832-0400 c20260| 2015-10-13T18:46:55.832-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:48916 #5 (4 connections now open)
[js_test:auth] 2015-10-13T18:46:55.833-0400 c20260| 2015-10-13T18:46:55.833-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:48917 #6 (5 connections now open)
[js_test:auth] 2015-10-13T18:46:55.850-0400 c20260| 2015-10-13T18:46:55.850-0400 I ACCESS   [conn5] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:55.850-0400 c20260| 2015-10-13T18:46:55.850-0400 I NETWORK  [conn5] end connection 127.0.0.1:48916 (4 connections now open)
[js_test:auth] 2015-10-13T18:46:55.850-0400 c20261| 2015-10-13T18:46:55.850-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46053 #4 (3 connections now open)
[js_test:auth] 2015-10-13T18:46:55.851-0400 c20260| 2015-10-13T18:46:55.851-0400 I ACCESS   [conn6] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:55.852-0400 c20260| 2015-10-13T18:46:55.852-0400 I NETWORK  [conn6] end connection 127.0.0.1:48917 (3 connections now open)
[js_test:auth] 2015-10-13T18:46:55.852-0400 c20261| 2015-10-13T18:46:55.852-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46054 #5 (4 connections now open)
[js_test:auth] 2015-10-13T18:46:55.866-0400 c20261| 2015-10-13T18:46:55.866-0400 I ACCESS   [conn4] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:55.866-0400 c20261| 2015-10-13T18:46:55.866-0400 I NETWORK  [conn4] end connection 127.0.0.1:46053 (3 connections now open)
[js_test:auth] 2015-10-13T18:46:55.866-0400 c20262| 2015-10-13T18:46:55.866-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46049 #4 (3 connections now open)
[js_test:auth] 2015-10-13T18:46:55.867-0400 c20261| 2015-10-13T18:46:55.867-0400 I ACCESS   [conn5] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:55.868-0400 c20261| 2015-10-13T18:46:55.868-0400 I NETWORK  [conn5] end connection 127.0.0.1:46054 (2 connections now open)
[js_test:auth] 2015-10-13T18:46:55.868-0400 c20262| 2015-10-13T18:46:55.868-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46050 #5 (4 connections now open)
[js_test:auth] 2015-10-13T18:46:55.883-0400 c20262| 2015-10-13T18:46:55.883-0400 I ACCESS   [conn4] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:55.883-0400 c20262| 2015-10-13T18:46:55.883-0400 I NETWORK  [conn4] end connection 127.0.0.1:46049 (3 connections now open)
[js_test:auth] 2015-10-13T18:46:55.884-0400 c20262| 2015-10-13T18:46:55.884-0400 I ACCESS   [conn5] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:55.885-0400 c20262| 2015-10-13T18:46:55.884-0400 I NETWORK  [conn5] end connection 127.0.0.1:46050 (2 connections now open)
[js_test:auth] 2015-10-13T18:46:56.002-0400 c20262| 2015-10-13T18:46:56.001-0400 I REPL     [replExecDBWorker-2] Starting replication applier threads
[js_test:auth] 2015-10-13T18:46:56.004-0400 c20262| 2015-10-13T18:46:56.002-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "auth-configRS", version: 1, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20260", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20261", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20262", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[js_test:auth] 2015-10-13T18:46:56.005-0400 c20262| 2015-10-13T18:46:56.002-0400 I REPL     [ReplicationExecutor] This node is ubuntu:20262 in the config
[js_test:auth] 2015-10-13T18:46:56.005-0400 c20262| 2015-10-13T18:46:56.002-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[js_test:auth] 2015-10-13T18:46:56.005-0400 c20261| 2015-10-13T18:46:56.003-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46063 #6 (3 connections now open)
[js_test:auth] 2015-10-13T18:46:56.006-0400 c20262| 2015-10-13T18:46:56.005-0400 I REPL     [rsSync] ******
[js_test:auth] 2015-10-13T18:46:56.006-0400 c20262| 2015-10-13T18:46:56.005-0400 I REPL     [rsSync] creating replication oplog of size: 40MB...
[js_test:auth] 2015-10-13T18:46:56.006-0400 c20262| 2015-10-13T18:46:56.005-0400 I REPL     [ReplicationExecutor] Member ubuntu:20260 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:46:56.009-0400 c20261| 2015-10-13T18:46:56.008-0400 I REPL     [replExecDBWorker-0] Starting replication applier threads
[js_test:auth] 2015-10-13T18:46:56.009-0400 c20261| 2015-10-13T18:46:56.009-0400 W REPL     [rsSync] did not receive a valid config yet
[js_test:auth] 2015-10-13T18:46:56.010-0400 c20261| 2015-10-13T18:46:56.009-0400 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "auth-configRS", version: 1, configsvr: true, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20260", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20261", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20262", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[js_test:auth] 2015-10-13T18:46:56.010-0400 c20261| 2015-10-13T18:46:56.009-0400 I REPL     [ReplicationExecutor] This node is ubuntu:20261 in the config
[js_test:auth] 2015-10-13T18:46:56.010-0400 c20261| 2015-10-13T18:46:56.009-0400 I REPL     [ReplicationExecutor] transition to STARTUP2
[js_test:auth] 2015-10-13T18:46:56.011-0400 c20261| 2015-10-13T18:46:56.010-0400 I REPL     [ReplicationExecutor] Member ubuntu:20260 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:46:56.011-0400 c20262| 2015-10-13T18:46:56.010-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46056 #6 (3 connections now open)
[js_test:auth] 2015-10-13T18:46:56.021-0400 c20261| 2015-10-13T18:46:56.021-0400 I ACCESS   [conn6] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:56.021-0400 c20262| 2015-10-13T18:46:56.021-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20261
[js_test:auth] 2015-10-13T18:46:56.021-0400 c20262| 2015-10-13T18:46:56.021-0400 I REPL     [ReplicationExecutor] Member ubuntu:20261 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:46:56.053-0400 c20262| 2015-10-13T18:46:56.053-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[js_test:auth] 2015-10-13T18:46:56.053-0400 c20262| 2015-10-13T18:46:56.053-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for when to truncate
[js_test:auth] 2015-10-13T18:46:56.304-0400 c20260| 2015-10-13T18:46:56.304-0400 I REPL     [ReplicationExecutor] Member ubuntu:20261 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:46:56.411-0400 c20262| 2015-10-13T18:46:56.411-0400 I REPL     [rsSync] ******
[js_test:auth] 2015-10-13T18:46:56.412-0400 c20262| 2015-10-13T18:46:56.411-0400 I REPL     [rsSync] initial sync pending
[js_test:auth] 2015-10-13T18:46:56.413-0400 c20260| 2015-10-13T18:46:56.412-0400 I REPL     [ReplicationExecutor] Member ubuntu:20262 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:46:56.438-0400 c20262| 2015-10-13T18:46:56.437-0400 I ACCESS   [conn6] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:56.438-0400 c20261| 2015-10-13T18:46:56.437-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20262
[js_test:auth] 2015-10-13T18:46:56.438-0400 c20261| 2015-10-13T18:46:56.438-0400 I REPL     [ReplicationExecutor] Member ubuntu:20262 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:46:56.541-0400 c20262| 2015-10-13T18:46:56.540-0400 I REPL     [ReplicationExecutor] syncing from: ubuntu:20260
[js_test:auth] 2015-10-13T18:46:56.541-0400 c20260| 2015-10-13T18:46:56.541-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:48974 #7 (4 connections now open)
[js_test:auth] 2015-10-13T18:46:56.557-0400 c20260| 2015-10-13T18:46:56.557-0400 I ACCESS   [conn7] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:56.574-0400 c20262| 2015-10-13T18:46:56.574-0400 I REPL     [rsSync] initial sync drop all databases
[js_test:auth] 2015-10-13T18:46:56.574-0400 c20262| 2015-10-13T18:46:56.574-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
[js_test:auth] 2015-10-13T18:46:56.575-0400 c20262| 2015-10-13T18:46:56.574-0400 I REPL     [rsSync] initial sync clone all databases
[js_test:auth] 2015-10-13T18:46:56.575-0400 c20262| 2015-10-13T18:46:56.575-0400 I REPL     [rsSync] initial sync data copy, starting syncup
[js_test:auth] 2015-10-13T18:46:56.575-0400 c20262| 2015-10-13T18:46:56.575-0400 I REPL     [rsSync] oplog sync 1 of 3
[js_test:auth] 2015-10-13T18:46:56.575-0400 c20262| 2015-10-13T18:46:56.575-0400 I REPL     [rsSync] oplog sync 2 of 3
[js_test:auth] 2015-10-13T18:46:56.575-0400 c20262| 2015-10-13T18:46:56.575-0400 I REPL     [rsSync] initial sync building indexes
[js_test:auth] 2015-10-13T18:46:56.575-0400 c20262| 2015-10-13T18:46:56.575-0400 I REPL     [rsSync] oplog sync 3 of 3
[js_test:auth] 2015-10-13T18:46:56.576-0400 c20262| 2015-10-13T18:46:56.576-0400 I REPL     [rsSync] initial sync finishing up
[js_test:auth] 2015-10-13T18:46:56.576-0400 c20262| 2015-10-13T18:46:56.576-0400 I REPL     [rsSync] set minValid=(term: 0, timestamp: Oct 13 18:46:54:1)
[js_test:auth] 2015-10-13T18:46:56.592-0400 c20262| 2015-10-13T18:46:56.592-0400 I REPL     [rsSync] initial sync done
[js_test:auth] 2015-10-13T18:46:56.594-0400 c20260| 2015-10-13T18:46:56.594-0400 I NETWORK  [conn7] end connection 127.0.0.1:48974 (3 connections now open)
[js_test:auth] 2015-10-13T18:46:56.594-0400 c20262| 2015-10-13T18:46:56.594-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[js_test:auth] 2015-10-13T18:46:56.595-0400 c20262| 2015-10-13T18:46:56.595-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[js_test:auth] 2015-10-13T18:46:57.005-0400 c20262| 2015-10-13T18:46:57.005-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[js_test:auth] 2015-10-13T18:46:57.009-0400 c20261| 2015-10-13T18:46:57.009-0400 I REPL     [rsSync] ******
[js_test:auth] 2015-10-13T18:46:57.009-0400 c20261| 2015-10-13T18:46:57.009-0400 I REPL     [rsSync] creating replication oplog of size: 40MB...
[js_test:auth] 2015-10-13T18:46:57.072-0400 c20261| 2015-10-13T18:46:57.072-0400 I STORAGE  [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[js_test:auth] 2015-10-13T18:46:57.072-0400 c20261| 2015-10-13T18:46:57.072-0400 I STORAGE  [rsSync] Scanning the oplog to determine where to place markers for when to truncate
[js_test:auth] 2015-10-13T18:46:57.386-0400 c20261| 2015-10-13T18:46:57.386-0400 I REPL     [rsSync] ******
[js_test:auth] 2015-10-13T18:46:57.386-0400 c20261| 2015-10-13T18:46:57.386-0400 I REPL     [rsSync] initial sync pending
[js_test:auth] 2015-10-13T18:46:57.481-0400 c20261| 2015-10-13T18:46:57.481-0400 I REPL     [ReplicationExecutor] syncing from: ubuntu:20260
[js_test:auth] 2015-10-13T18:46:57.481-0400 c20260| 2015-10-13T18:46:57.481-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49024 #8 (4 connections now open)
[js_test:auth] 2015-10-13T18:46:57.497-0400 c20260| 2015-10-13T18:46:57.497-0400 I ACCESS   [conn8] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:46:57.509-0400 c20261| 2015-10-13T18:46:57.508-0400 I REPL     [rsSync] initial sync drop all databases
[js_test:auth] 2015-10-13T18:46:57.509-0400 c20261| 2015-10-13T18:46:57.508-0400 I STORAGE  [rsSync] dropAllDatabasesExceptLocal 1
[js_test:auth] 2015-10-13T18:46:57.510-0400 c20261| 2015-10-13T18:46:57.508-0400 I REPL     [rsSync] initial sync clone all databases
[js_test:auth] 2015-10-13T18:46:57.510-0400 c20261| 2015-10-13T18:46:57.509-0400 I REPL     [rsSync] initial sync data copy, starting syncup
[js_test:auth] 2015-10-13T18:46:57.510-0400 c20261| 2015-10-13T18:46:57.509-0400 I REPL     [rsSync] oplog sync 1 of 3
[js_test:auth] 2015-10-13T18:46:57.510-0400 c20261| 2015-10-13T18:46:57.509-0400 I REPL     [rsSync] oplog sync 2 of 3
[js_test:auth] 2015-10-13T18:46:57.510-0400 c20261| 2015-10-13T18:46:57.509-0400 I REPL     [rsSync] initial sync building indexes
[js_test:auth] 2015-10-13T18:46:57.510-0400 c20261| 2015-10-13T18:46:57.509-0400 I REPL     [rsSync] oplog sync 3 of 3
[js_test:auth] 2015-10-13T18:46:57.510-0400 c20261| 2015-10-13T18:46:57.510-0400 I REPL     [rsSync] initial sync finishing up
[js_test:auth] 2015-10-13T18:46:57.510-0400 c20261| 2015-10-13T18:46:57.510-0400 I REPL     [rsSync] set minValid=(term: 0, timestamp: Oct 13 18:46:54:1)
[js_test:auth] 2015-10-13T18:46:57.526-0400 c20261| 2015-10-13T18:46:57.526-0400 I REPL     [rsSync] initial sync done
[js_test:auth] 2015-10-13T18:46:57.528-0400 c20260| 2015-10-13T18:46:57.528-0400 I NETWORK  [conn8] end connection 127.0.0.1:49024 (3 connections now open)
[js_test:auth] 2015-10-13T18:46:57.528-0400 c20261| 2015-10-13T18:46:57.528-0400 I REPL     [ReplicationExecutor] transition to RECOVERING
[js_test:auth] 2015-10-13T18:46:57.530-0400 c20261| 2015-10-13T18:46:57.530-0400 I REPL     [ReplicationExecutor] transition to SECONDARY
[js_test:auth] 2015-10-13T18:46:58.010-0400 c20261| 2015-10-13T18:46:58.010-0400 I REPL     [ReplicationExecutor] could not find member to sync from
[js_test:auth] 2015-10-13T18:46:58.011-0400 c20261| 2015-10-13T18:46:58.010-0400 I REPL     [ReplicationExecutor] Member ubuntu:20262 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:46:58.304-0400 c20260| 2015-10-13T18:46:58.304-0400 I REPL     [ReplicationExecutor] Member ubuntu:20261 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:46:58.304-0400 c20260| 2015-10-13T18:46:58.304-0400 I REPL     [ReplicationExecutor] Member ubuntu:20262 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:46:59.007-0400 c20262| 2015-10-13T18:46:59.007-0400 I REPL     [ReplicationExecutor] Member ubuntu:20261 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:46:59.550-0400 c20260| 2015-10-13T18:46:59.550-0400 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
[js_test:auth] 2015-10-13T18:46:59.671-0400 c20262| 2015-10-13T18:46:59.671-0400 I COMMAND  [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "auth-configRS", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1444776414000|1, t: 0 } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, W: 2 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 121ms
[js_test:auth] 2015-10-13T18:46:59.672-0400 c20261| 2015-10-13T18:46:59.671-0400 I COMMAND  [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "auth-configRS", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1444776414000|1, t: 0 } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:143 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, W: 2 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 121ms
[js_test:auth] 2015-10-13T18:46:59.672-0400 c20260| 2015-10-13T18:46:59.671-0400 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
[js_test:auth] 2015-10-13T18:46:59.825-0400 c20260| 2015-10-13T18:46:59.825-0400 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
[js_test:auth] 2015-10-13T18:46:59.825-0400 c20260| 2015-10-13T18:46:59.825-0400 I REPL     [ReplicationExecutor] transition to PRIMARY
[js_test:auth] 2015-10-13T18:47:00.011-0400 c20261| 2015-10-13T18:47:00.011-0400 I REPL     [ReplicationExecutor] Member ubuntu:20260 is now in state PRIMARY
[js_test:auth] 2015-10-13T18:47:00.304-0400 c20260| 2015-10-13T18:47:00.304-0400 I REPL     [rsSync] transition to primary complete; database writes are now permitted
[js_test:auth] 2015-10-13T18:47:00.411-0400 "config servers: auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262"
[js_test:auth] 2015-10-13T18:47:00.411-0400 2015-10-13T18:47:00.411-0400 I NETWORK  [thread1] Starting new replica set monitor for auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262
[js_test:auth] 2015-10-13T18:47:00.411-0400 2015-10-13T18:47:00.411-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
[js_test:auth] 2015-10-13T18:47:00.412-0400 c20262| 2015-10-13T18:47:00.412-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46280 #7 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:00.413-0400 c20261| 2015-10-13T18:47:00.412-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46289 #7 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:00.413-0400 c20260| 2015-10-13T18:47:00.413-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49163 #9 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:00.414-0400 ShardingTest auth :
[js_test:auth] 2015-10-13T18:47:00.414-0400 {
[js_test:auth] 2015-10-13T18:47:00.414-0400 	"config" : "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262",
[js_test:auth] 2015-10-13T18:47:00.414-0400 	"shards" : [ ]
[js_test:auth] 2015-10-13T18:47:00.414-0400 }
[js_test:auth] 2015-10-13T18:47:00.417-0400 2015-10-13T18:47:00.416-0400 I -        [thread1] shell: started program (sh13958): /media/ssd/mongo1/mongos --configdb auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262 --keyFile jstests/libs/key1 --port 20263 --setParameter enableTestCommands=1
[js_test:auth] 2015-10-13T18:47:00.417-0400 2015-10-13T18:47:00.417-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:00.434-0400 s20263| 2015-10-13T18:47:00.434-0400 I CONTROL  [main]
[js_test:auth] 2015-10-13T18:47:00.434-0400 s20263| 2015-10-13T18:47:00.434-0400 I CONTROL  [main] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB.
[js_test:auth] 2015-10-13T18:47:00.434-0400 s20263| 2015-10-13T18:47:00.434-0400 I CONTROL  [main] ** Not recommended for production.
[js_test:auth] 2015-10-13T18:47:00.434-0400 s20263| 2015-10-13T18:47:00.434-0400 I CONTROL  [main]
[js_test:auth] 2015-10-13T18:47:00.449-0400 s20263| 2015-10-13T18:47:00.449-0400 I SHARDING [mongosMain] MongoS version 3.1.10-pre- starting: pid=13958 port=20263 64-bit host=ubuntu (--help for usage)
[js_test:auth] 2015-10-13T18:47:00.449-0400 s20263| 2015-10-13T18:47:00.449-0400 I CONTROL  [mongosMain] db version v3.1.10-pre-
[js_test:auth] 2015-10-13T18:47:00.449-0400 s20263| 2015-10-13T18:47:00.449-0400 I CONTROL  [mongosMain] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1
[js_test:auth] 2015-10-13T18:47:00.449-0400 s20263| 2015-10-13T18:47:00.449-0400 I CONTROL  [mongosMain] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
[js_test:auth] 2015-10-13T18:47:00.449-0400 s20263| 2015-10-13T18:47:00.449-0400 I CONTROL  [mongosMain] allocator: tcmalloc
[js_test:auth] 2015-10-13T18:47:00.449-0400 s20263| 2015-10-13T18:47:00.449-0400 I CONTROL  [mongosMain] modules: subscription
[js_test:auth] 2015-10-13T18:47:00.449-0400 s20263| 2015-10-13T18:47:00.449-0400 I CONTROL  [mongosMain] build environment:
[js_test:auth] 2015-10-13T18:47:00.450-0400 s20263| 2015-10-13T18:47:00.449-0400 I CONTROL  [mongosMain]     distarch: x86_64
[js_test:auth] 2015-10-13T18:47:00.450-0400 s20263| 2015-10-13T18:47:00.449-0400 I CONTROL  [mongosMain]     target_arch: x86_64
[js_test:auth] 2015-10-13T18:47:00.450-0400 s20263| 2015-10-13T18:47:00.449-0400 I CONTROL  [mongosMain] options: { net: { port: 20263 }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, sharding: { configDB: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262" } }
[js_test:auth] 2015-10-13T18:47:00.450-0400 s20263| 2015-10-13T18:47:00.449-0400 I SHARDING [mongosMain] Updating config server connection string to: auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262
[js_test:auth] 2015-10-13T18:47:00.450-0400 s20263| 2015-10-13T18:47:00.449-0400 I NETWORK  [mongosMain] Starting new replica set monitor for auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262
[js_test:auth] 2015-10-13T18:47:00.450-0400 s20263| 2015-10-13T18:47:00.449-0400 I NETWORK  [ReplicaSetMonitorWatcher] starting
[js_test:auth] 2015-10-13T18:47:00.451-0400 s20263| 2015-10-13T18:47:00.451-0400 I SHARDING [thread1] creating distributed lock ping thread for process ubuntu:20263:1444776420:516127640 (sleeping for 30000ms)
[js_test:auth] 2015-10-13T18:47:00.452-0400 c20261| 2015-10-13T18:47:00.451-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46303 #8 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:00.452-0400 c20262| 2015-10-13T18:47:00.451-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46294 #8 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:00.470-0400 c20262| 2015-10-13T18:47:00.470-0400 I ACCESS   [conn8] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:00.470-0400 c20261| 2015-10-13T18:47:00.470-0400 I ACCESS   [conn8] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:00.471-0400 c20260| 2015-10-13T18:47:00.471-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49178 #10 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:00.487-0400 c20260| 2015-10-13T18:47:00.487-0400 I ACCESS   [conn10] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:00.487-0400 c20260| 2015-10-13T18:47:00.487-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49179 #11 (6 connections now open)
[js_test:auth] 2015-10-13T18:47:00.487-0400 c20261| 2015-10-13T18:47:00.487-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46307 #9 (6 connections now open)
[js_test:auth] 2015-10-13T18:47:00.517-0400 c20260| 2015-10-13T18:47:00.517-0400 I ACCESS   [conn11] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:00.517-0400 s20263| 2015-10-13T18:47:00.517-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20260
[js_test:auth] 2015-10-13T18:47:00.517-0400 c20261| 2015-10-13T18:47:00.517-0400 I ACCESS   [conn9] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:00.517-0400 s20263| 2015-10-13T18:47:00.517-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20261
[js_test:auth] 2015-10-13T18:47:00.617-0400 2015-10-13T18:47:00.617-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:00.818-0400 2015-10-13T18:47:00.818-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:01.007-0400 c20262| 2015-10-13T18:47:01.007-0400 I REPL     [ReplicationExecutor] Member ubuntu:20260 is now in state PRIMARY
[js_test:auth] 2015-10-13T18:47:01.018-0400 2015-10-13T18:47:01.018-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:01.219-0400 2015-10-13T18:47:01.219-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:01.419-0400 2015-10-13T18:47:01.419-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:01.620-0400 2015-10-13T18:47:01.620-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:01.820-0400 2015-10-13T18:47:01.820-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:02.007-0400 c20260| 2015-10-13T18:47:02.007-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49273 #12 (7 connections now open)
[js_test:auth] 2015-10-13T18:47:02.008-0400 c20262| 2015-10-13T18:47:02.007-0400 I REPL     [ReplicationExecutor] syncing from: ubuntu:20260
[js_test:auth] 2015-10-13T18:47:02.021-0400 2015-10-13T18:47:02.021-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:02.024-0400 c20260| 2015-10-13T18:47:02.024-0400 I ACCESS   [conn12] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:02.024-0400 c20262| 2015-10-13T18:47:02.024-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to ubuntu:20260
[js_test:auth] 2015-10-13T18:47:02.024-0400 c20260| 2015-10-13T18:47:02.024-0400 I NETWORK  [conn12] end connection 127.0.0.1:49273 (6 connections now open)
[js_test:auth] 2015-10-13T18:47:02.024-0400 c20260| 2015-10-13T18:47:02.024-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49279 #13 (7 connections now open)
[js_test:auth] 2015-10-13T18:47:02.025-0400 c20260| 2015-10-13T18:47:02.024-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49280 #14 (8 connections now open)
[js_test:auth] 2015-10-13T18:47:02.042-0400 c20260| 2015-10-13T18:47:02.042-0400 I ACCESS   [conn14] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:02.042-0400 c20262| 2015-10-13T18:47:02.042-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20260
[js_test:auth] 2015-10-13T18:47:02.043-0400 c20260| 2015-10-13T18:47:02.043-0400 I ACCESS   [conn13] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:02.221-0400 2015-10-13T18:47:02.221-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:02.223-0400 c20260| 2015-10-13T18:47:02.222-0400 I COMMAND  [conn11] command config.$cmd command: findAndModify { findAndModify: "lockpings", query: { _id: "ubuntu:20263:1444776420:516127640" }, update: { $set: { ping: new Date(1444776420451) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } update: { $set: { ping: new Date(1444776420451) } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:363 locks:{ Global: { acquireCount: { r: 5, w: 3 } }, Database: { acquireCount: { r: 1, w: 3, W: 2 } }, Collection: { acquireCount: { r: 1, w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 1705ms
[js_test:auth] 2015-10-13T18:47:02.223-0400 s20263| 2015-10-13T18:47:02.222-0400 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: findAndModify query predicate didn't match any lock document
[js_test:auth] 2015-10-13T18:47:02.422-0400 2015-10-13T18:47:02.422-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:02.622-0400 2015-10-13T18:47:02.622-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:02.823-0400 2015-10-13T18:47:02.823-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:03.011-0400 c20261| 2015-10-13T18:47:03.010-0400 I REPL     [ReplicationExecutor] syncing from: ubuntu:20260
[js_test:auth] 2015-10-13T18:47:03.011-0400 c20260| 2015-10-13T18:47:03.011-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49321 #15 (9 connections now open)
[js_test:auth] 2015-10-13T18:47:03.023-0400 2015-10-13T18:47:03.023-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:03.029-0400 c20260| 2015-10-13T18:47:03.029-0400 I ACCESS   [conn15] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:03.029-0400 c20260| 2015-10-13T18:47:03.029-0400 I NETWORK  [conn15] end connection 127.0.0.1:49321 (8 connections now open)
[js_test:auth] 2015-10-13T18:47:03.029-0400 c20261| 2015-10-13T18:47:03.029-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to ubuntu:20260
[js_test:auth] 2015-10-13T18:47:03.030-0400 c20260| 2015-10-13T18:47:03.029-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49324 #16 (9 connections now open)
[js_test:auth] 2015-10-13T18:47:03.030-0400 c20260| 2015-10-13T18:47:03.029-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:49325 #17 (10 connections now open)
[js_test:auth] 2015-10-13T18:47:03.048-0400 c20260| 2015-10-13T18:47:03.048-0400 I ACCESS   [conn17] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:03.049-0400 c20261| 2015-10-13T18:47:03.048-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20260
[js_test:auth] 2015-10-13T18:47:03.049-0400 c20260| 2015-10-13T18:47:03.048-0400 I ACCESS   [conn16] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:03.049-0400 c20261| 2015-10-13T18:47:03.049-0400 I COMMAND  [conn9] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 } planSummary: EOF keysExamined:0 docsExamined:0 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:0 reslen:323 locks:{ Global: { acquireCount: { r: 4 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } } } protocol:op_command 2531ms
[js_test:auth] 2015-10-13T18:47:03.175-0400 c20260| 2015-10-13T18:47:03.175-0400 I WRITE    [conn11] update config.version query: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('561d89e7069f0881537f24fd') } update: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('561d89e7069f0881537f24fd') } keysExamined:0 docsExamined:0 nMatched:1 nModified:1 fastmodinsert:1 upsert:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 7, w: 5 } }, Database: { acquireCount: { r: 1, w: 4, W: 1 } }, Collection: { acquireCount: { r: 1, w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 125ms
[js_test:auth] 2015-10-13T18:47:03.224-0400 2015-10-13T18:47:03.224-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:03.331-0400 c20260| 2015-10-13T18:47:03.331-0400 I COMMAND  [conn11] command config.$cmd command: update { update: "version", updates: [ { q: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('561d89e7069f0881537f24fd') }, u: { _id: 1, minCompatibleVersion: 5, currentVersion: 6, clusterId: ObjectId('561d89e7069f0881537f24fd') }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:382 locks:{ Global: { acquireCount: { r: 7, w: 5 } }, Database: { acquireCount: { r: 1, w: 4, W: 1 } }, Collection: { acquireCount: { r: 1, w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 281ms
[js_test:auth] 2015-10-13T18:47:03.424-0400 2015-10-13T18:47:03.424-0400 W NETWORK  [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:03.458-0400 c20260| 2015-10-13T18:47:03.457-0400 I WRITE [conn11] insert config.settings query: { _id: "chunksize", value: 64 } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 6, w: 4 } }, Database: { acquireCount: { r: 1, w: 3, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1, W: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 125ms [js_test:auth] 2015-10-13T18:47:03.603-0400 c20260| 2015-10-13T18:47:03.603-0400 I COMMAND [conn11] command config.$cmd command: insert { insert: "settings", documents: [ { _id: "chunksize", value: 64 } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 6, w: 4 } }, Database: { acquireCount: { r: 1, w: 3, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1, W: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 271ms [js_test:auth] 2015-10-13T18:47:03.625-0400 2015-10-13T18:47:03.625-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:03.784-0400 c20260| 2015-10-13T18:47:03.784-0400 I INDEX [conn11] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } [js_test:auth] 2015-10-13T18:47:03.784-0400 c20260| 2015-10-13T18:47:03.784-0400 I INDEX [conn11] building index using bulk method [js_test:auth] 2015-10-13T18:47:03.794-0400 c20260| 2015-10-13T18:47:03.794-0400 I INDEX [conn11] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:03.794-0400 c20260| 2015-10-13T18:47:03.794-0400 I WRITE [conn11] insert config.system.indexes query: { ns: "config.chunks", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 5, w: 3 } }, Database: { acquireCount: { r: 1, w: 2, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 190ms [js_test:auth] 2015-10-13T18:47:03.825-0400 2015-10-13T18:47:03.825-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:03.978-0400 c20261| 2015-10-13T18:47:03.977-0400 I INDEX [repl writer worker 7] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } [js_test:auth] 2015-10-13T18:47:03.978-0400 c20261| 2015-10-13T18:47:03.978-0400 I INDEX [repl writer worker 7] building index using bulk method [js_test:auth] 2015-10-13T18:47:03.979-0400 c20262| 2015-10-13T18:47:03.978-0400 I INDEX [repl writer worker 8] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } [js_test:auth] 2015-10-13T18:47:03.979-0400 c20262| 2015-10-13T18:47:03.978-0400 I INDEX [repl writer worker 8] building index using bulk method [js_test:auth] 2015-10-13T18:47:03.988-0400 c20262| 2015-10-13T18:47:03.988-0400 I INDEX [repl writer worker 8] build index done. scanned 0 total records. 0 secs [js_test:auth] 2015-10-13T18:47:03.988-0400 c20261| 2015-10-13T18:47:03.988-0400 I INDEX [repl writer worker 7] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:03.989-0400 c20260| 2015-10-13T18:47:03.989-0400 I COMMAND [conn11] command config.$cmd command: insert { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 5, w: 3 } }, Database: { acquireCount: { r: 1, w: 2, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 385ms [js_test:auth] 2015-10-13T18:47:04.026-0400 2015-10-13T18:47:04.026-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:04.060-0400 c20260| 2015-10-13T18:47:04.060-0400 I INDEX [conn11] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } [js_test:auth] 2015-10-13T18:47:04.061-0400 c20260| 2015-10-13T18:47:04.060-0400 I INDEX [conn11] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.066-0400 c20260| 2015-10-13T18:47:04.066-0400 I INDEX [conn11] build index done. scanned 0 total records. 0 secs [js_test:auth] 2015-10-13T18:47:04.119-0400 c20262| 2015-10-13T18:47:04.119-0400 I INDEX [repl writer worker 9] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } [js_test:auth] 2015-10-13T18:47:04.119-0400 c20262| 2015-10-13T18:47:04.119-0400 I INDEX [repl writer worker 9] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.125-0400 c20262| 2015-10-13T18:47:04.124-0400 I INDEX [repl writer worker 9] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:04.125-0400 c20260| 2015-10-13T18:47:04.125-0400 I COMMAND [conn11] command config.$cmd command: insert { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 135ms [js_test:auth] 2015-10-13T18:47:04.129-0400 c20261| 2015-10-13T18:47:04.129-0400 I INDEX [repl writer worker 8] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } [js_test:auth] 2015-10-13T18:47:04.129-0400 c20261| 2015-10-13T18:47:04.129-0400 I INDEX [repl writer worker 8] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.141-0400 c20261| 2015-10-13T18:47:04.141-0400 I INDEX [repl writer worker 8] build index done. scanned 0 total records. 0 secs [js_test:auth] 2015-10-13T18:47:04.185-0400 c20260| 2015-10-13T18:47:04.185-0400 I INDEX [conn11] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:auth] 2015-10-13T18:47:04.185-0400 c20260| 2015-10-13T18:47:04.185-0400 I INDEX [conn11] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.195-0400 c20260| 2015-10-13T18:47:04.195-0400 I INDEX [conn11] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:04.227-0400 2015-10-13T18:47:04.227-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:04.262-0400 c20261| 2015-10-13T18:47:04.261-0400 I INDEX [repl writer worker 9] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:auth] 2015-10-13T18:47:04.262-0400 c20261| 2015-10-13T18:47:04.261-0400 I INDEX [repl writer worker 9] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.263-0400 c20262| 2015-10-13T18:47:04.262-0400 I INDEX [repl writer worker 10] build index on: config.chunks properties: { v: 1, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } [js_test:auth] 2015-10-13T18:47:04.263-0400 c20262| 2015-10-13T18:47:04.262-0400 I INDEX [repl writer worker 10] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.267-0400 c20262| 2015-10-13T18:47:04.267-0400 I INDEX [repl writer worker 10] build index done. scanned 0 total records. 0 secs [js_test:auth] 2015-10-13T18:47:04.267-0400 c20261| 2015-10-13T18:47:04.267-0400 I INDEX [repl writer worker 9] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:04.268-0400 c20260| 2015-10-13T18:47:04.267-0400 I COMMAND [conn11] command config.$cmd command: insert { insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 141ms [js_test:auth] 2015-10-13T18:47:04.427-0400 2015-10-13T18:47:04.427-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:04.460-0400 c20260| 2015-10-13T18:47:04.460-0400 I INDEX [conn11] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } [js_test:auth] 2015-10-13T18:47:04.461-0400 c20260| 2015-10-13T18:47:04.460-0400 I INDEX [conn11] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.471-0400 c20260| 2015-10-13T18:47:04.471-0400 I INDEX [conn11] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:04.471-0400 c20260| 2015-10-13T18:47:04.471-0400 I WRITE [conn11] insert config.system.indexes query: { ns: "config.shards", key: { host: 1 }, name: "host_1", unique: true } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 5, w: 3 } }, Database: { acquireCount: { r: 1, w: 2, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 203ms [js_test:auth] 2015-10-13T18:47:04.605-0400 c20261| 2015-10-13T18:47:04.605-0400 I INDEX [repl writer worker 11] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } [js_test:auth] 2015-10-13T18:47:04.605-0400 c20262| 2015-10-13T18:47:04.605-0400 I INDEX [repl writer worker 12] build index on: config.shards properties: { v: 1, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } [js_test:auth] 2015-10-13T18:47:04.605-0400 c20261| 2015-10-13T18:47:04.605-0400 I INDEX [repl writer worker 11] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.605-0400 c20262| 2015-10-13T18:47:04.605-0400 I INDEX [repl writer worker 12] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.614-0400 c20261| 2015-10-13T18:47:04.614-0400 I INDEX [repl writer worker 11] build index done. scanned 0 total records. 0 secs [js_test:auth] 2015-10-13T18:47:04.614-0400 c20262| 2015-10-13T18:47:04.614-0400 I INDEX [repl writer worker 12] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:04.615-0400 c20260| 2015-10-13T18:47:04.614-0400 I COMMAND [conn11] command config.$cmd command: insert { insert: "system.indexes", documents: [ { ns: "config.shards", key: { host: 1 }, name: "host_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 5, w: 3 } }, Database: { acquireCount: { r: 1, w: 2, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 346ms [js_test:auth] 2015-10-13T18:47:04.628-0400 2015-10-13T18:47:04.628-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:04.801-0400 c20260| 2015-10-13T18:47:04.800-0400 I INDEX [conn11] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:auth] 2015-10-13T18:47:04.801-0400 c20260| 2015-10-13T18:47:04.800-0400 I INDEX [conn11] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.813-0400 c20260| 2015-10-13T18:47:04.813-0400 I INDEX [conn11] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:04.814-0400 c20260| 2015-10-13T18:47:04.813-0400 I WRITE [conn11] insert config.system.indexes query: { ns: "config.locks", key: { ts: 1 }, name: "ts_1" } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 5, w: 3 } }, Database: { acquireCount: { r: 1, w: 2, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 198ms [js_test:auth] 2015-10-13T18:47:04.828-0400 2015-10-13T18:47:04.828-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:04.986-0400 c20262| 2015-10-13T18:47:04.986-0400 I INDEX [repl writer worker 14] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:auth] 2015-10-13T18:47:04.986-0400 c20262| 2015-10-13T18:47:04.986-0400 I INDEX [repl writer worker 14] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.987-0400 c20261| 2015-10-13T18:47:04.986-0400 I INDEX [repl writer worker 13] build index on: config.locks properties: { v: 1, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } [js_test:auth] 2015-10-13T18:47:04.987-0400 c20261| 2015-10-13T18:47:04.986-0400 I INDEX [repl writer worker 13] building index using bulk method [js_test:auth] 2015-10-13T18:47:04.995-0400 c20261| 2015-10-13T18:47:04.995-0400 I INDEX [repl writer worker 13] build index done. scanned 0 total records. 0 secs [js_test:auth] 2015-10-13T18:47:04.995-0400 c20262| 2015-10-13T18:47:04.995-0400 I INDEX [repl writer worker 14] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:04.996-0400 c20260| 2015-10-13T18:47:04.996-0400 I COMMAND [conn11] command config.$cmd command: insert { insert: "system.indexes", documents: [ { ns: "config.locks", key: { ts: 1 }, name: "ts_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 5, w: 3 } }, Database: { acquireCount: { r: 1, w: 2, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 380ms [js_test:auth] 2015-10-13T18:47:05.029-0400 2015-10-13T18:47:05.029-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:05.064-0400 c20260| 2015-10-13T18:47:05.064-0400 I INDEX [conn11] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:auth] 2015-10-13T18:47:05.064-0400 c20260| 2015-10-13T18:47:05.064-0400 I INDEX [conn11] building index using bulk method [js_test:auth] 2015-10-13T18:47:05.068-0400 c20260| 2015-10-13T18:47:05.068-0400 I INDEX [conn11] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:05.133-0400 c20262| 2015-10-13T18:47:05.133-0400 I INDEX [repl writer worker 1] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:auth] 2015-10-13T18:47:05.133-0400 c20262| 2015-10-13T18:47:05.133-0400 I INDEX [repl writer worker 1] building index using bulk method [js_test:auth] 2015-10-13T18:47:05.134-0400 c20261| 2015-10-13T18:47:05.133-0400 I INDEX [repl writer worker 14] build index on: config.locks properties: { v: 1, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } [js_test:auth] 2015-10-13T18:47:05.134-0400 c20261| 2015-10-13T18:47:05.133-0400 I INDEX [repl writer worker 14] building index using bulk method [js_test:auth] 2015-10-13T18:47:05.149-0400 c20262| 2015-10-13T18:47:05.149-0400 I INDEX [repl writer worker 1] build index done. scanned 0 total records. 0 secs [js_test:auth] 2015-10-13T18:47:05.150-0400 c20261| 2015-10-13T18:47:05.149-0400 I INDEX [repl writer worker 14] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:05.150-0400 c20260| 2015-10-13T18:47:05.150-0400 I COMMAND [conn11] command config.$cmd command: insert { insert: "system.indexes", documents: [ { ns: "config.locks", key: { state: 1, process: 1 }, name: "state_1_process_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 153ms [js_test:auth] 2015-10-13T18:47:05.203-0400 c20260| 2015-10-13T18:47:05.203-0400 I INDEX [conn11] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" } [js_test:auth] 2015-10-13T18:47:05.203-0400 c20260| 2015-10-13T18:47:05.203-0400 I INDEX [conn11] building index using bulk method [js_test:auth] 2015-10-13T18:47:05.225-0400 c20260| 2015-10-13T18:47:05.225-0400 I INDEX [conn11] build index done. scanned 1 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:05.229-0400 2015-10-13T18:47:05.229-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:05.296-0400 c20262| 2015-10-13T18:47:05.295-0400 I INDEX [repl writer worker 0] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" } [js_test:auth] 2015-10-13T18:47:05.296-0400 c20262| 2015-10-13T18:47:05.295-0400 I INDEX [repl writer worker 0] building index using bulk method [js_test:auth] 2015-10-13T18:47:05.297-0400 c20261| 2015-10-13T18:47:05.295-0400 I INDEX [repl writer worker 15] build index on: config.lockpings properties: { v: 1, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" } [js_test:auth] 2015-10-13T18:47:05.297-0400 c20261| 2015-10-13T18:47:05.295-0400 I INDEX [repl writer worker 15] building index using bulk method [js_test:auth] 2015-10-13T18:47:05.316-0400 c20261| 2015-10-13T18:47:05.316-0400 I INDEX [repl writer worker 15] build index done. scanned 1 total records. 0 secs [js_test:auth] 2015-10-13T18:47:05.317-0400 c20262| 2015-10-13T18:47:05.316-0400 I INDEX [repl writer worker 0] build index done. scanned 1 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:05.317-0400 c20260| 2015-10-13T18:47:05.317-0400 I COMMAND [conn11] command config.$cmd command: insert { insert: "system.indexes", documents: [ { ns: "config.lockpings", key: { ping: 1 }, name: "ping_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, w: 1, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 166ms [js_test:auth] 2015-10-13T18:47:05.430-0400 2015-10-13T18:47:05.430-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:05.485-0400 c20260| 2015-10-13T18:47:05.485-0400 I INDEX [conn11] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" } [js_test:auth] 2015-10-13T18:47:05.486-0400 c20260| 2015-10-13T18:47:05.485-0400 I INDEX [conn11] building index using bulk method [js_test:auth] 2015-10-13T18:47:05.490-0400 c20260| 2015-10-13T18:47:05.490-0400 I INDEX [conn11] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:05.491-0400 c20260| 2015-10-13T18:47:05.491-0400 I WRITE [conn11] insert config.system.indexes query: { ns: "config.tags", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 5, w: 3 } }, Database: { acquireCount: { r: 1, w: 2, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 173ms [js_test:auth] 2015-10-13T18:47:05.631-0400 2015-10-13T18:47:05.630-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20263, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:05.634-0400 c20261| 2015-10-13T18:47:05.634-0400 I INDEX [repl writer worker 1] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" } [js_test:auth] 2015-10-13T18:47:05.634-0400 c20262| 2015-10-13T18:47:05.634-0400 I INDEX [repl writer worker 2] build index on: config.tags properties: { v: 1, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" } [js_test:auth] 2015-10-13T18:47:05.634-0400 c20262| 2015-10-13T18:47:05.634-0400 I INDEX [repl writer worker 2] building index using bulk method [js_test:auth] 2015-10-13T18:47:05.634-0400 c20261| 2015-10-13T18:47:05.634-0400 I INDEX [repl writer worker 1] building index using bulk method [js_test:auth] 2015-10-13T18:47:05.643-0400 c20261| 2015-10-13T18:47:05.643-0400 I INDEX [repl writer worker 1] build index done. scanned 0 total records. 0 secs [js_test:auth] 2015-10-13T18:47:05.643-0400 c20262| 2015-10-13T18:47:05.643-0400 I INDEX [repl writer worker 2] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:05.644-0400 c20260| 2015-10-13T18:47:05.643-0400 I COMMAND [conn11] command config.$cmd command: insert { insert: "system.indexes", documents: [ { ns: "config.tags", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 5, w: 3 } }, Database: { acquireCount: { r: 1, w: 2, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 325ms [js_test:auth] 2015-10-13T18:47:05.644-0400 s20263| 2015-10-13T18:47:05.643-0400 I SHARDING [Balancer] about to contact config servers and shards [js_test:auth] 2015-10-13T18:47:05.644-0400 s20263| 2015-10-13T18:47:05.644-0400 I SHARDING [Balancer] config servers and shards contacted successfully [js_test:auth] 2015-10-13T18:47:05.644-0400 s20263| 2015-10-13T18:47:05.644-0400 I SHARDING [Balancer] balancer id: ubuntu:20263 started [js_test:auth] 2015-10-13T18:47:05.673-0400 s20263| 2015-10-13T18:47:05.673-0400 I NETWORK [mongosMain] waiting for connections on port 20263 [js_test:auth] 2015-10-13T18:47:05.753-0400 c20260| 2015-10-13T18:47:05.753-0400 I WRITE [conn11] update config.mongos query: { _id: "ubuntu:20263" } update: { $set: { _id: "ubuntu:20263", ping: new Date(1444776425644), up: 0, waiting: false, mongoVersion: "3.1.10-pre-" } } keysExamined:0 docsExamined:0 nMatched:1 nModified:1 upsert:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 7, w: 5 } }, Database: { acquireCount: { r: 1, w: 4, W: 1 } }, Collection: { acquireCount: { r: 1, w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 108ms [js_test:auth] 2015-10-13T18:47:05.831-0400 s20263| 2015-10-13T18:47:05.831-0400 I NETWORK [mongosMain] connection accepted from 127.0.0.1:40434 #1 (1 connection 
now open) [js_test:auth] 2015-10-13T18:47:05.832-0400 c20260| 2015-10-13T18:47:05.831-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:49502 #18 (11 connections now open) [js_test:auth] 2015-10-13T18:47:05.852-0400 c20260| 2015-10-13T18:47:05.852-0400 I ACCESS [conn18] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:05.852-0400 s20263| 2015-10-13T18:47:05.852-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20260 [js_test:auth] 2015-10-13T18:47:05.853-0400 s20263| 2015-10-13T18:47:05.853-0400 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access [js_test:auth] 2015-10-13T18:47:05.856-0400 s20263| 2015-10-13T18:47:05.856-0400 I ACCESS [conn1] authenticate db: admin { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" } [js_test:auth] 2015-10-13T18:47:05.894-0400 c20260| 2015-10-13T18:47:05.894-0400 I COMMAND [conn11] command config.$cmd command: update { update: "mongos", updates: [ { q: { _id: "ubuntu:20263" }, u: { $set: { _id: "ubuntu:20263", ping: new Date(1444776425644), up: 0, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:395 locks:{ Global: { acquireCount: { r: 7, w: 5 } }, Database: { acquireCount: { r: 1, w: 4, W: 1 } }, Collection: { acquireCount: { r: 1, w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 249ms [js_test:auth] 2015-10-13T18:47:05.896-0400 c20262| 2015-10-13T18:47:05.895-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:46626 #9 (6 connections now open) [js_test:auth] 2015-10-13T18:47:05.918-0400 c20262| 2015-10-13T18:47:05.918-0400 I ACCESS [conn9] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:05.918-0400 s20263| 2015-10-13T18:47:05.918-0400 I ASIO 
[NetworkInterfaceASIO] Successfully connected to ubuntu:20262 [js_test:auth] 2015-10-13T18:47:05.919-0400 Waiting for active hosts... [js_test:auth] 2015-10-13T18:47:05.919-0400 Waiting for the balancer lock... [js_test:auth] 2015-10-13T18:47:05.920-0400 Waiting again for active hosts after balancer is off... [js_test:auth] 2015-10-13T18:47:05.921-0400 Configuration: Add user { "db" : "admin", "username" : "foo", "password" : "bar" } [js_test:auth] 2015-10-13T18:47:05.947-0400 s20263| 2015-10-13T18:47:05.947-0400 I SHARDING [conn1] distributed lock 'authorizationData' acquired for 'createUser', ts : 561d89e9069f0881537f24ff [js_test:auth] 2015-10-13T18:47:06.101-0400 c20260| 2015-10-13T18:47:06.100-0400 I WRITE [conn18] update admin.system.version query: { _id: "authSchema" } update: { $set: { currentVersion: 5 } } keysExamined:0 docsExamined:0 nMatched:1 nModified:1 upsert:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 6, w: 4 } }, Database: { acquireCount: { r: 1, w: 2, W: 2 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 138ms [js_test:auth] 2015-10-13T18:47:06.254-0400 c20260| 2015-10-13T18:47:06.254-0400 I COMMAND [conn18] command admin.$cmd command: getLastError { getLastError: 1, w: "majority", wtimeout: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:329 locks:{ Global: { acquireCount: { r: 6, w: 4 } }, Database: { acquireCount: { r: 1, w: 2, W: 2 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_query 153ms [js_test:auth] 2015-10-13T18:47:06.431-0400 c20260| 2015-10-13T18:47:06.431-0400 I WRITE [conn18] insert admin.system.users ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 10, w: 8 } }, Database: { acquireCount: { r: 1, w: 4, W: 4 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { 
acquireCount: { w: 4 } }, oplog: { acquireCount: { w: 4 } } } 176ms [js_test:auth] 2015-10-13T18:47:06.649-0400 c20260| 2015-10-13T18:47:06.648-0400 I COMMAND [conn18] command admin.$cmd command: getLastError { getLastError: 1, w: "majority", wtimeout: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:286 locks:{ Global: { acquireCount: { r: 10, w: 8 } }, Database: { acquireCount: { r: 1, w: 4, W: 4 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 4 } }, oplog: { acquireCount: { w: 4 } } } protocol:op_query 217ms [js_test:auth] 2015-10-13T18:47:06.649-0400 c20260| 2015-10-13T18:47:06.648-0400 I COMMAND [conn18] command admin.$cmd command: createUser { createUser: "foo", pwd: "xxx", roles: [ "root" ], digestPassword: false, writeConcern: { w: "majority", wtimeout: 30000 }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:261 locks:{ Global: { acquireCount: { r: 10, w: 8 } }, Database: { acquireCount: { r: 1, w: 4, W: 4 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 4 } }, oplog: { acquireCount: { w: 4 } } } protocol:op_command 701ms [js_test:auth] 2015-10-13T18:47:06.674-0400 s20263| 2015-10-13T18:47:06.673-0400 I SHARDING [conn1] distributed lock with ts: 561d89e9069f0881537f24ff' unlocked. 
[js_test:auth] 2015-10-13T18:47:06.674-0400 Successfully added user: { "user" : "foo", "roles" : [ "root" ] }
[js_test:auth] 2015-10-13T18:47:06.693-0400 s20263| 2015-10-13T18:47:06.692-0400 I ACCESS [conn1] Successfully authenticated as principal foo on admin
[js_test:auth] 2015-10-13T18:47:06.741-0400 [
[js_test:auth] 2015-10-13T18:47:06.741-0400 {
[js_test:auth] 2015-10-13T18:47:06.742-0400 "_id" : "chunksize",
[js_test:auth] 2015-10-13T18:47:06.742-0400 "value" : 1
[js_test:auth] 2015-10-13T18:47:06.742-0400 },
[js_test:auth] 2015-10-13T18:47:06.742-0400 {
[js_test:auth] 2015-10-13T18:47:06.743-0400 "_id" : "balancer",
[js_test:auth] 2015-10-13T18:47:06.743-0400 "stopped" : true,
[js_test:auth] 2015-10-13T18:47:06.743-0400 "_secondaryThrottle" : false,
[js_test:auth] 2015-10-13T18:47:06.744-0400 "_waitForDelete" : true
[js_test:auth] 2015-10-13T18:47:06.744-0400 }
[js_test:auth] 2015-10-13T18:47:06.744-0400 ]
[js_test:auth] 2015-10-13T18:47:06.744-0400 Restart mongos with different auth options
[js_test:auth] 2015-10-13T18:47:06.744-0400 s20263| 2015-10-13T18:47:06.741-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[js_test:auth] 2015-10-13T18:47:06.762-0400 s20263| 2015-10-13T18:47:06.762-0400 I SHARDING [signalProcessingThread] dbexit: rc:0
[js_test:auth] 2015-10-13T18:47:06.764-0400 c20262| 2015-10-13T18:47:06.764-0400 I NETWORK [conn8] end connection 127.0.0.1:46294 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:06.764-0400 c20261| 2015-10-13T18:47:06.764-0400 I NETWORK [conn8] end connection 127.0.0.1:46303 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:06.764-0400 c20260| 2015-10-13T18:47:06.764-0400 I NETWORK [conn10] end connection 127.0.0.1:49178 (10 connections now open)
[js_test:auth] 2015-10-13T18:47:06.764-0400 c20262| 2015-10-13T18:47:06.764-0400 I NETWORK [conn9] end connection 127.0.0.1:46626 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:06.764-0400 c20261| 2015-10-13T18:47:06.764-0400 I NETWORK [conn9] end connection 127.0.0.1:46307 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:06.764-0400 c20260| 2015-10-13T18:47:06.764-0400 I NETWORK [conn18] end connection 127.0.0.1:49502 (9 connections now open)
[js_test:auth] 2015-10-13T18:47:06.765-0400 c20260| 2015-10-13T18:47:06.764-0400 I NETWORK [conn11] end connection 127.0.0.1:49179 (9 connections now open)
[js_test:auth] 2015-10-13T18:47:07.741-0400 2015-10-13T18:47:07.741-0400 I - [thread1] shell: stopped mongo program on port 20263
[js_test:auth] 2015-10-13T18:47:07.744-0400 2015-10-13T18:47:07.744-0400 I - [thread1] shell: started program (sh16853): /media/ssd/mongo1/mongos --port 20264 -vv --configdb auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262 --keyFile jstests/libs/key1 --chunkSize 1 --setParameter enableTestCommands=1
[js_test:auth] 2015-10-13T18:47:07.745-0400 2015-10-13T18:47:07.744-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20264, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:07.760-0400 s20264| 2015-10-13T18:47:07.760-0400 D - [main] tcmallocPoolSize: 1073741824
[js_test:auth] 2015-10-13T18:47:07.765-0400 s20264| 2015-10-13T18:47:07.765-0400 I CONTROL [main]
[js_test:auth] 2015-10-13T18:47:07.765-0400 s20264| 2015-10-13T18:47:07.765-0400 I CONTROL [main] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB.
[js_test:auth] 2015-10-13T18:47:07.765-0400 s20264| 2015-10-13T18:47:07.765-0400 I CONTROL [main] ** Not recommended for production.
[js_test:auth] 2015-10-13T18:47:07.766-0400 s20264| 2015-10-13T18:47:07.765-0400 I CONTROL [main]
[js_test:auth] 2015-10-13T18:47:07.780-0400 s20264| 2015-10-13T18:47:07.780-0400 I SHARDING [mongosMain] MongoS version 3.1.10-pre- starting: pid=16853 port=20264 64-bit host=ubuntu (--help for usage)
[js_test:auth] 2015-10-13T18:47:07.780-0400 s20264| 2015-10-13T18:47:07.780-0400 I CONTROL [mongosMain] db version v3.1.10-pre-
[js_test:auth] 2015-10-13T18:47:07.780-0400 s20264| 2015-10-13T18:47:07.780-0400 I CONTROL [mongosMain] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1
[js_test:auth] 2015-10-13T18:47:07.781-0400 s20264| 2015-10-13T18:47:07.780-0400 I CONTROL [mongosMain] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
[js_test:auth] 2015-10-13T18:47:07.781-0400 s20264| 2015-10-13T18:47:07.780-0400 I CONTROL [mongosMain] allocator: tcmalloc
[js_test:auth] 2015-10-13T18:47:07.781-0400 s20264| 2015-10-13T18:47:07.780-0400 I CONTROL [mongosMain] modules: subscription
[js_test:auth] 2015-10-13T18:47:07.781-0400 s20264| 2015-10-13T18:47:07.780-0400 I CONTROL [mongosMain] build environment:
[js_test:auth] 2015-10-13T18:47:07.781-0400 s20264| 2015-10-13T18:47:07.780-0400 I CONTROL [mongosMain] distarch: x86_64
[js_test:auth] 2015-10-13T18:47:07.781-0400 s20264| 2015-10-13T18:47:07.780-0400 I CONTROL [mongosMain] target_arch: x86_64
[js_test:auth] 2015-10-13T18:47:07.781-0400 s20264| 2015-10-13T18:47:07.780-0400 I CONTROL [mongosMain] options: { net: { port: 20264 }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, sharding: { chunkSize: 1, configDB: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262" }, systemLog: { verbosity: 2 } }
[js_test:auth] 2015-10-13T18:47:07.781-0400 s20264| 2015-10-13T18:47:07.780-0400 I SHARDING [mongosMain] Updating config server connection string to: auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.780-0400 I NETWORK [mongosMain] Starting new replica set monitor for auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.780-0400 D COMMAND [ReplicaSetMonitorWatcher] BackgroundJob starting: ReplicaSetMonitorWatcher
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.780-0400 I NETWORK [ReplicaSetMonitorWatcher] starting
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.781-0400 D ASIO [NetworkInterfaceASIO] The NetworkInterfaceASIO worker thread is spinning up
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.781-0400 D EXECUTOR [ShardWork-0] starting thread in pool ShardWork
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.781-0400 D ASIO [NetworkInterfaceASIO] The NetworkInterfaceASIO worker thread is spinning up
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.781-0400 D EXECUTOR [ShardWork-0] starting thread in pool ShardWork
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.782-0400 D NETWORK [mongosMain] Starting new refresh of replica set auth-configRS
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.782-0400 I SHARDING [thread1] creating distributed lock ping thread for process ubuntu:20264:1444776427:399327856 (sleeping for 30000ms)
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.782-0400 D NETWORK [mongosMain] creating new connection to:ubuntu:20261
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.782-0400 D NETWORK [replSetDistLockPinger] creating new connection to:ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.782-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:auth] 2015-10-13T18:47:07.782-0400 s20264| 2015-10-13T18:47:07.782-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:auth] 2015-10-13T18:47:07.783-0400 s20264| 2015-10-13T18:47:07.782-0400 D NETWORK [mongosMain] connected to server ubuntu:20261 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:07.783-0400 s20264| 2015-10-13T18:47:07.782-0400 D NETWORK [replSetDistLockPinger] connected to server ubuntu:20262 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:07.783-0400 c20261| 2015-10-13T18:47:07.782-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:46752 #10 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:07.783-0400 c20262| 2015-10-13T18:47:07.782-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:46745 #10 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:07.783-0400 s20264| 2015-10-13T18:47:07.782-0400 D NETWORK [mongosMain] connected connection!
[js_test:auth] 2015-10-13T18:47:07.783-0400 s20264| 2015-10-13T18:47:07.782-0400 D NETWORK [replSetDistLockPinger] connected connection!
[js_test:auth] 2015-10-13T18:47:07.783-0400 s20264| 2015-10-13T18:47:07.782-0400 D SHARDING [mongosMain] calling onCreate auth for ubuntu:20261 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:07.783-0400 s20264| 2015-10-13T18:47:07.782-0400 D SHARDING [replSetDistLockPinger] calling onCreate auth for ubuntu:20262 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:07.803-0400 c20262| 2015-10-13T18:47:07.803-0400 I ACCESS [conn10] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:07.804-0400 c20261| 2015-10-13T18:47:07.803-0400 I ACCESS [conn10] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:07.804-0400 s20264| 2015-10-13T18:47:07.803-0400 D NETWORK [replSetDistLockPinger] creating new connection to:ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.804-0400 s20264| 2015-10-13T18:47:07.804-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:auth] 2015-10-13T18:47:07.804-0400 s20264| 2015-10-13T18:47:07.804-0400 D NETWORK [replSetDistLockPinger] connected to server ubuntu:20260 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:07.805-0400 c20260| 2015-10-13T18:47:07.804-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:49632 #19 (9 connections now open)
[js_test:auth] 2015-10-13T18:47:07.805-0400 s20264| 2015-10-13T18:47:07.804-0400 D NETWORK [replSetDistLockPinger] connected connection!
[js_test:auth] 2015-10-13T18:47:07.805-0400 s20264| 2015-10-13T18:47:07.804-0400 D SHARDING [replSetDistLockPinger] calling onCreate auth for ubuntu:20260 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:07.822-0400 c20260| 2015-10-13T18:47:07.822-0400 I ACCESS [conn19] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:07.822-0400 s20264| 2015-10-13T18:47:07.822-0400 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:37.822-0400 cmd:{ findAndModify: "lockpings", query: { _id: "ubuntu:20264:1444776427:399327856" }, update: { $set: { ping: new Date(1444776427782) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.822-0400 s20264| 2015-10-13T18:47:07.822-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 0|0, t: -1 } }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.823-0400 s20264| 2015-10-13T18:47:07.822-0400 D ASIO [NetworkInterfaceASIO] Connecting to ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.823-0400 s20264| 2015-10-13T18:47:07.822-0400 D ASIO [NetworkInterfaceASIO] Connecting to ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.823-0400 c20260| 2015-10-13T18:47:07.823-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:49633 #20 (10 connections now open)
[js_test:auth] 2015-10-13T18:47:07.823-0400 c20262| 2015-10-13T18:47:07.823-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:46753 #11 (6 connections now open)
[js_test:auth] 2015-10-13T18:47:07.823-0400 s20264| 2015-10-13T18:47:07.823-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.823-0400 s20264| 2015-10-13T18:47:07.823-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.824-0400 s20264| 2015-10-13T18:47:07.824-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.826-0400 s20264| 2015-10-13T18:47:07.826-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.841-0400 s20264| 2015-10-13T18:47:07.840-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.854-0400 s20264| 2015-10-13T18:47:07.854-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.854-0400 s20264| 2015-10-13T18:47:07.854-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.854-0400 c20260| 2015-10-13T18:47:07.854-0400 I ACCESS [conn20] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:07.854-0400 s20264| 2015-10-13T18:47:07.854-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.855-0400 s20264| 2015-10-13T18:47:07.854-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.855-0400 c20262| 2015-10-13T18:47:07.854-0400 I ACCESS [conn11] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:07.855-0400 s20264| 2015-10-13T18:47:07.854-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.855-0400 s20264| 2015-10-13T18:47:07.854-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.855-0400 s20264| 2015-10-13T18:47:07.854-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.855-0400 s20264| 2015-10-13T18:47:07.854-0400 D SHARDING [mongosMain] found 0 shards listed on config server(s)
[js_test:auth] 2015-10-13T18:47:07.855-0400 s20264| 2015-10-13T18:47:07.855-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "version", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776426000|8, t: 1 } }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.855-0400 s20264| 2015-10-13T18:47:07.855-0400 D ASIO [NetworkInterfaceASIO] Connecting to ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.856-0400 c20260| 2015-10-13T18:47:07.855-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:49636 #21 (11 connections now open)
[js_test:auth] 2015-10-13T18:47:07.856-0400 s20264| 2015-10-13T18:47:07.855-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.856-0400 s20264| 2015-10-13T18:47:07.856-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.871-0400 s20264| 2015-10-13T18:47:07.871-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.871-0400 s20264| 2015-10-13T18:47:07.871-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.871-0400 c20260| 2015-10-13T18:47:07.871-0400 I ACCESS [conn21] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:07.872-0400 s20264| 2015-10-13T18:47:07.871-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.872-0400 s20264| 2015-10-13T18:47:07.871-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.872-0400 s20264| 2015-10-13T18:47:07.872-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776427000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.872-0400 s20264| 2015-10-13T18:47:07.872-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.872-0400 s20264| 2015-10-13T18:47:07.872-0400 D SHARDING [mongosMain] Found MaxChunkSize: 1
[js_test:auth] 2015-10-13T18:47:07.872-0400 s20264| 2015-10-13T18:47:07.872-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:37.872-0400 cmd:{ insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.872-0400 s20264| 2015-10-13T18:47:07.872-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.877-0400 s20264| 2015-10-13T18:47:07.877-0400 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: findAndModify query predicate didn't match any lock document
[js_test:auth] 2015-10-13T18:47:07.878-0400 s20264| 2015-10-13T18:47:07.877-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:37.877-0400 cmd:{ insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.878-0400 s20264| 2015-10-13T18:47:07.877-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.878-0400 s20264| 2015-10-13T18:47:07.878-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:37.878-0400 cmd:{ insert: "system.indexes", documents: [ { ns: "config.chunks", key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.878-0400 s20264| 2015-10-13T18:47:07.878-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.878-0400 s20264| 2015-10-13T18:47:07.878-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:37.878-0400 cmd:{ insert: "system.indexes", documents: [ { ns: "config.shards", key: { host: 1 }, name: "host_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.878-0400 s20264| 2015-10-13T18:47:07.878-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.879-0400 s20264| 2015-10-13T18:47:07.878-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:37.878-0400 cmd:{ insert: "system.indexes", documents: [ { ns: "config.locks", key: { ts: 1 }, name: "ts_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.879-0400 s20264| 2015-10-13T18:47:07.878-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.879-0400 s20264| 2015-10-13T18:47:07.878-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:37.878-0400 cmd:{ insert: "system.indexes", documents: [ { ns: "config.locks", key: { state: 1, process: 1 }, name: "state_1_process_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.879-0400 s20264| 2015-10-13T18:47:07.879-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.879-0400 s20264| 2015-10-13T18:47:07.879-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:37.879-0400 cmd:{ insert: "system.indexes", documents: [ { ns: "config.lockpings", key: { ping: 1 }, name: "ping_1" } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.879-0400 s20264| 2015-10-13T18:47:07.879-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.879-0400 s20264| 2015-10-13T18:47:07.879-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:37.879-0400 cmd:{ insert: "system.indexes", documents: [ { ns: "config.tags", key: { ns: 1, min: 1 }, name: "ns_1_min_1", unique: true } ], writeConcern: { w: "majority", wtimeout: 0 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.880-0400 s20264| 2015-10-13T18:47:07.879-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.880-0400 s20264| 2015-10-13T18:47:07.879-0400 D COMMAND [Balancer] BackgroundJob starting: Balancer
[js_test:auth] 2015-10-13T18:47:07.880-0400 s20264| 2015-10-13T18:47:07.879-0400 D COMMAND [cursorTimeout] BackgroundJob starting: cursorTimeout
[js_test:auth] 2015-10-13T18:47:07.880-0400 s20264| 2015-10-13T18:47:07.879-0400 I SHARDING [Balancer] about to contact config servers and shards
[js_test:auth] 2015-10-13T18:47:07.880-0400 s20264| 2015-10-13T18:47:07.879-0400 D ASIO [mongosMain] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:47:37.879-0400 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.880-0400 s20264| 2015-10-13T18:47:07.879-0400 D COMMAND [ClusterCursorCleanupJob] BackgroundJob starting: ClusterCursorCleanupJob
[js_test:auth] 2015-10-13T18:47:07.880-0400 s20264| 2015-10-13T18:47:07.879-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.880-0400 s20264| 2015-10-13T18:47:07.879-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776427000|1, t: 1 } }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.880-0400 s20264| 2015-10-13T18:47:07.880-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.880-0400 s20264| 2015-10-13T18:47:07.880-0400 D COMMAND [UserCacheInvalidatorThread] BackgroundJob starting: UserCacheInvalidatorThread
[js_test:auth] 2015-10-13T18:47:07.881-0400 s20264| 2015-10-13T18:47:07.880-0400 D NETWORK [mongosMain] fd limit hard:4096 soft:4096 max conn: 3276
[js_test:auth] 2015-10-13T18:47:07.881-0400 s20264| 2015-10-13T18:47:07.880-0400 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
[js_test:auth] 2015-10-13T18:47:07.881-0400 s20264| 2015-10-13T18:47:07.880-0400 D SHARDING [Balancer] found 0 shards listed on config server(s)
[js_test:auth] 2015-10-13T18:47:07.881-0400 s20264| 2015-10-13T18:47:07.880-0400 I SHARDING [Balancer] config servers and shards contacted successfully
[js_test:auth] 2015-10-13T18:47:07.881-0400 s20264| 2015-10-13T18:47:07.880-0400 I SHARDING [Balancer] balancer id: ubuntu:20264 started
[js_test:auth] 2015-10-13T18:47:07.881-0400 s20264| 2015-10-13T18:47:07.880-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:37.880-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776427880), up: 0, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.881-0400 s20264| 2015-10-13T18:47:07.880-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.903-0400 s20264| 2015-10-13T18:47:07.903-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776427000|2, t: 1 } }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.903-0400 s20264| 2015-10-13T18:47:07.903-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.903-0400 s20264| 2015-10-13T18:47:07.903-0400 D SHARDING [Balancer] found 0 shards listed on config server(s)
[js_test:auth] 2015-10-13T18:47:07.904-0400 s20264| 2015-10-13T18:47:07.903-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776427000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.904-0400 s20264| 2015-10-13T18:47:07.904-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262
[js_test:auth] 2015-10-13T18:47:07.904-0400 s20264| 2015-10-13T18:47:07.904-0400 D SHARDING [Balancer] Refreshing MaxChunkSize: 1MB
[js_test:auth] 2015-10-13T18:47:07.904-0400 s20264| 2015-10-13T18:47:07.904-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776427000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.904-0400 s20264| 2015-10-13T18:47:07.904-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.904-0400 s20264| 2015-10-13T18:47:07.904-0400 D SHARDING [Balancer] skipping balancing round because balancing is disabled
[js_test:auth] 2015-10-13T18:47:07.904-0400 s20264| 2015-10-13T18:47:07.904-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:37.904-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776427904), up: 0, waiting: true, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.904-0400 s20264| 2015-10-13T18:47:07.904-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.905-0400 s20264| 2015-10-13T18:47:07.905-0400 I NETWORK [mongosMain] waiting for connections on port 20264
[js_test:auth] 2015-10-13T18:47:07.945-0400 s20264| 2015-10-13T18:47:07.945-0400 I NETWORK [mongosMain] connection accepted from 127.0.0.1:54935 #1 (1 connection now open)
[js_test:auth] 2015-10-13T18:47:07.946-0400 s20264| 2015-10-13T18:47:07.945-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:47:37.945-0400 cmd:{ usersInfo: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.946-0400 s20264| 2015-10-13T18:47:07.945-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.948-0400 s20264| 2015-10-13T18:47:07.948-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:47:37.948-0400 cmd:{ getParameter: 1, authSchemaVersion: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.948-0400 s20264| 2015-10-13T18:47:07.948-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.948-0400 s20264| 2015-10-13T18:47:07.948-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:47:37.948-0400 cmd:{ usersInfo: [ { user: "foo", db: "admin" } ], showPrivileges: true, showCredentials: true, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.949-0400 s20264| 2015-10-13T18:47:07.948-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.964-0400 s20264| 2015-10-13T18:47:07.964-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:47:37.964-0400 cmd:{ usersInfo: [ { user: "foo", db: "admin" } ], showPrivileges: true, showCredentials: true, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:07.964-0400 s20264| 2015-10-13T18:47:07.964-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:07.965-0400 s20264| 2015-10-13T18:47:07.965-0400 I ACCESS [conn1] Successfully authenticated as principal foo on admin
[js_test:auth] 2015-10-13T18:47:07.965-0400 ReplSetTest Starting Set
[js_test:auth] 2015-10-13T18:47:07.965-0400 ReplSetTest n is : 0
[js_test:auth] 2015-10-13T18:47:07.965-0400 ReplSetTest n: 0 ports: [ 20265, 20266, 20267 ] 20265 number
[js_test:auth] 2015-10-13T18:47:07.966-0400 {
[js_test:auth] 2015-10-13T18:47:07.966-0400 "useHostName" : true,
[js_test:auth] 2015-10-13T18:47:07.966-0400 "oplogSize" : 40,
[js_test:auth] 2015-10-13T18:47:07.966-0400 "keyFile" : "jstests/libs/key2",
[js_test:auth] 2015-10-13T18:47:07.966-0400 "port" : 20265,
[js_test:auth] 2015-10-13T18:47:07.966-0400 "noprealloc" : "",
[js_test:auth] 2015-10-13T18:47:07.967-0400 "smallfiles" : "",
[js_test:auth] 2015-10-13T18:47:07.967-0400 "replSet" : "d1",
[js_test:auth] 2015-10-13T18:47:07.967-0400 "dbpath" : "$set-$node",
[js_test:auth] 2015-10-13T18:47:07.967-0400 "verbose" : 0,
[js_test:auth] 2015-10-13T18:47:07.967-0400 "restart" : undefined,
[js_test:auth] 2015-10-13T18:47:07.968-0400 "pathOpts" : {
[js_test:auth] 2015-10-13T18:47:07.968-0400 "node" : 0,
[js_test:auth] 2015-10-13T18:47:07.968-0400 "set" : "d1"
[js_test:auth] 2015-10-13T18:47:07.968-0400 }
[js_test:auth] 2015-10-13T18:47:07.968-0400 }
[js_test:auth] 2015-10-13T18:47:07.968-0400 ReplSetTest Starting....
[js_test:auth] 2015-10-13T18:47:07.969-0400 Resetting db path '/data/db/job1/mongorunner/d1-0'
[js_test:auth] 2015-10-13T18:47:07.969-0400 2015-10-13T18:47:07.968-0400 I - [thread1] shell: started program (sh16944): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key2 --port 20265 --noprealloc --smallfiles --replSet d1 --dbpath /data/db/job1/mongorunner/d1-0 --nopreallocj --setParameter enableTestCommands=1
[js_test:auth] 2015-10-13T18:47:07.969-0400 2015-10-13T18:47:07.969-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20265, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:07.988-0400 d20265| note: noprealloc may hurt performance in many applications
[js_test:auth] 2015-10-13T18:47:08.038-0400 d20265| 2015-10-13T18:47:08.037-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
[js_test:auth] 2015-10-13T18:47:08.169-0400 2015-10-13T18:47:08.169-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20265, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:08.370-0400 2015-10-13T18:47:08.370-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20265, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:08.436-0400 d20265| 2015-10-13T18:47:08.435-0400 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[js_test:auth] 2015-10-13T18:47:08.440-0400 d20265| 2015-10-13T18:47:08.435-0400 I CONTROL [initandlisten] MongoDB starting : pid=16944 port=20265 dbpath=/data/db/job1/mongorunner/d1-0 64-bit host=ubuntu
[js_test:auth] 2015-10-13T18:47:08.441-0400 d20265| 2015-10-13T18:47:08.435-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:08.441-0400 d20265| 2015-10-13T18:47:08.435-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB.
[js_test:auth] 2015-10-13T18:47:08.441-0400 d20265| 2015-10-13T18:47:08.435-0400 I CONTROL [initandlisten] ** Not recommended for production.
[js_test:auth] 2015-10-13T18:47:08.442-0400 d20265| 2015-10-13T18:47:08.436-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:08.442-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:08.442-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
[js_test:auth] 2015-10-13T18:47:08.442-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
[js_test:auth] 2015-10-13T18:47:08.442-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
[js_test:auth] 2015-10-13T18:47:08.443-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:08.443-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[js_test:auth] 2015-10-13T18:47:08.443-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[js_test:auth] 2015-10-13T18:47:08.443-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:08.444-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[js_test:auth] 2015-10-13T18:47:08.444-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[js_test:auth] 2015-10-13T18:47:08.445-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:08.445-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] db version v3.1.10-pre-
[js_test:auth] 2015-10-13T18:47:08.445-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1
[js_test:auth] 2015-10-13T18:47:08.445-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
[js_test:auth] 2015-10-13T18:47:08.446-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] allocator: tcmalloc
[js_test:auth] 2015-10-13T18:47:08.446-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] modules: subscription
[js_test:auth] 2015-10-13T18:47:08.446-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] build environment:
[js_test:auth] 2015-10-13T18:47:08.446-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] distarch: x86_64
[js_test:auth] 2015-10-13T18:47:08.447-0400 d20265| 2015-10-13T18:47:08.437-0400 I CONTROL [initandlisten] target_arch: x86_64
[js_test:auth] 2015-10-13T18:47:08.447-0400 d20265| 2015-10-13T18:47:08.438-0400 I CONTROL [initandlisten] options: { net: { port: 20265 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "d1" }, security: { keyFile: "jstests/libs/key2" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/job1/mongorunner/d1-0", mmapv1: { preallocDataFiles: false, smallFiles: true } } }
[js_test:auth] 2015-10-13T18:47:08.570-0400 2015-10-13T18:47:08.570-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20265, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:08.580-0400 d20265| 2015-10-13T18:47:08.579-0400 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[js_test:auth] 2015-10-13T18:47:08.580-0400 d20265| 2015-10-13T18:47:08.579-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
[js_test:auth] 2015-10-13T18:47:08.581-0400 d20265| 2015-10-13T18:47:08.580-0400 I FTDC [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/d1-0/diagnostic.data'
[js_test:auth] 2015-10-13T18:47:08.709-0400 d20265| 2015-10-13T18:47:08.708-0400 I NETWORK [initandlisten] waiting for connections on port 20265
[js_test:auth] 2015-10-13T18:47:08.771-0400 d20265| 2015-10-13T18:47:08.771-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53582 #1 (1 connection now open)
[js_test:auth] 2015-10-13T18:47:08.771-0400 d20265| 2015-10-13T18:47:08.771-0400 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
[js_test:auth] 2015-10-13T18:47:08.772-0400 [ connection to ubuntu:20265 ]
[js_test:auth] 2015-10-13T18:47:08.772-0400 ReplSetTest n is : 1
[js_test:auth] 2015-10-13T18:47:08.772-0400 ReplSetTest n: 1 ports: [ 20265, 20266, 20267 ] 20266 number
[js_test:auth] 2015-10-13T18:47:08.773-0400 {
[js_test:auth] 2015-10-13T18:47:08.773-0400 "useHostName" : true,
[js_test:auth] 2015-10-13T18:47:08.773-0400 "oplogSize" : 40,
[js_test:auth] 2015-10-13T18:47:08.773-0400 "keyFile" : "jstests/libs/key2",
[js_test:auth] 2015-10-13T18:47:08.773-0400 "port" : 20266,
[js_test:auth] 2015-10-13T18:47:08.773-0400 "noprealloc" : "",
[js_test:auth] 2015-10-13T18:47:08.773-0400 "smallfiles" : "",
[js_test:auth] 2015-10-13T18:47:08.773-0400 "replSet" : "d1",
[js_test:auth] 2015-10-13T18:47:08.773-0400 "dbpath" : "$set-$node",
[js_test:auth] 2015-10-13T18:47:08.773-0400 "verbose" : 0,
[js_test:auth] 2015-10-13T18:47:08.773-0400 "restart" : undefined,
[js_test:auth] 2015-10-13T18:47:08.773-0400 "pathOpts" : {
[js_test:auth] 2015-10-13T18:47:08.773-0400 "node" : 1,
[js_test:auth] 2015-10-13T18:47:08.774-0400 "set" : "d1"
[js_test:auth] 2015-10-13T18:47:08.774-0400 }
[js_test:auth] 2015-10-13T18:47:08.774-0400 }
[js_test:auth] 2015-10-13T18:47:08.774-0400 ReplSetTest Starting....
[js_test:auth] 2015-10-13T18:47:08.774-0400 Resetting db path '/data/db/job1/mongorunner/d1-1'
[js_test:auth] 2015-10-13T18:47:08.776-0400 2015-10-13T18:47:08.776-0400 I - [thread1] shell: started program (sh17500): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key2 --port 20266 --noprealloc --smallfiles --replSet d1 --dbpath /data/db/job1/mongorunner/d1-1 --nopreallocj --setParameter enableTestCommands=1
[js_test:auth] 2015-10-13T18:47:08.776-0400 2015-10-13T18:47:08.776-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20266, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:08.793-0400 d20266| note: noprealloc may hurt performance in many applications
[js_test:auth] 2015-10-13T18:47:08.841-0400 d20266| 2015-10-13T18:47:08.841-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
[js_test:auth] 2015-10-13T18:47:08.977-0400 2015-10-13T18:47:08.977-0400 W NETWORK [thread1]
Failed to connect to 127.0.0.1:20266, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:09.178-0400 2015-10-13T18:47:09.177-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20266, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:09.255-0400 d20266| 2015-10-13T18:47:09.255-0400 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger [js_test:auth] 2015-10-13T18:47:09.255-0400 d20266| 2015-10-13T18:47:09.255-0400 I CONTROL [initandlisten] MongoDB starting : pid=17500 port=20266 dbpath=/data/db/job1/mongorunner/d1-1 64-bit host=ubuntu [js_test:auth] 2015-10-13T18:47:09.256-0400 d20266| 2015-10-13T18:47:09.255-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:09.256-0400 d20266| 2015-10-13T18:47:09.255-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB. [js_test:auth] 2015-10-13T18:47:09.256-0400 d20266| 2015-10-13T18:47:09.255-0400 I CONTROL [initandlisten] ** Not recommended for production. [js_test:auth] 2015-10-13T18:47:09.256-0400 d20266| 2015-10-13T18:47:09.255-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:09.256-0400 d20266| 2015-10-13T18:47:09.256-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:09.257-0400 d20266| 2015-10-13T18:47:09.256-0400 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine. 
[js_test:auth] 2015-10-13T18:47:09.257-0400 d20266| 2015-10-13T18:47:09.256-0400 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems: [js_test:auth] 2015-10-13T18:47:09.257-0400 d20266| 2015-10-13T18:47:09.256-0400 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options] [js_test:auth] 2015-10-13T18:47:09.257-0400 d20266| 2015-10-13T18:47:09.256-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:09.257-0400 d20266| 2015-10-13T18:47:09.256-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. [js_test:auth] 2015-10-13T18:47:09.258-0400 d20266| 2015-10-13T18:47:09.256-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:09.258-0400 d20266| 2015-10-13T18:47:09.256-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:09.258-0400 d20266| 2015-10-13T18:47:09.256-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. 
[js_test:auth] 2015-10-13T18:47:09.258-0400 d20266| 2015-10-13T18:47:09.256-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[js_test:auth] 2015-10-13T18:47:09.258-0400 d20266| 2015-10-13T18:47:09.256-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:09.259-0400 d20266| 2015-10-13T18:47:09.257-0400 I CONTROL [initandlisten] db version v3.1.10-pre-
[js_test:auth] 2015-10-13T18:47:09.259-0400 d20266| 2015-10-13T18:47:09.257-0400 I CONTROL [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1
[js_test:auth] 2015-10-13T18:47:09.259-0400 d20266| 2015-10-13T18:47:09.257-0400 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
[js_test:auth] 2015-10-13T18:47:09.259-0400 d20266| 2015-10-13T18:47:09.257-0400 I CONTROL [initandlisten] allocator: tcmalloc
[js_test:auth] 2015-10-13T18:47:09.259-0400 d20266| 2015-10-13T18:47:09.257-0400 I CONTROL [initandlisten] modules: subscription
[js_test:auth] 2015-10-13T18:47:09.260-0400 d20266| 2015-10-13T18:47:09.257-0400 I CONTROL [initandlisten] build environment:
[js_test:auth] 2015-10-13T18:47:09.260-0400 d20266| 2015-10-13T18:47:09.257-0400 I CONTROL [initandlisten] distarch: x86_64
[js_test:auth] 2015-10-13T18:47:09.260-0400 d20266| 2015-10-13T18:47:09.257-0400 I CONTROL [initandlisten] target_arch: x86_64
[js_test:auth] 2015-10-13T18:47:09.260-0400 d20266| 2015-10-13T18:47:09.257-0400 I CONTROL [initandlisten] options: { net: { port: 20266 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "d1" }, security: { keyFile: "jstests/libs/key2" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/job1/mongorunner/d1-1", mmapv1: { preallocDataFiles: false, smallFiles: true } } }
[js_test:auth] 2015-10-13T18:47:09.378-0400 2015-10-13T18:47:09.378-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20266, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:09.379-0400 d20266| 2015-10-13T18:47:09.379-0400 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[js_test:auth] 2015-10-13T18:47:09.380-0400 d20266| 2015-10-13T18:47:09.379-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
[js_test:auth] 2015-10-13T18:47:09.380-0400 d20266| 2015-10-13T18:47:09.379-0400 I FTDC [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/d1-1/diagnostic.data'
[js_test:auth] 2015-10-13T18:47:09.489-0400 d20266| 2015-10-13T18:47:09.488-0400 I NETWORK [initandlisten] waiting for connections on port 20266
[js_test:auth] 2015-10-13T18:47:09.578-0400 d20266| 2015-10-13T18:47:09.578-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:33482 #1 (1 connection now open)
[js_test:auth] 2015-10-13T18:47:09.579-0400 d20266| 2015-10-13T18:47:09.579-0400 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
[js_test:auth] 2015-10-13T18:47:09.579-0400 [ connection to ubuntu:20265, connection to ubuntu:20266 ]
[js_test:auth] 2015-10-13T18:47:09.579-0400 ReplSetTest n is : 2
[js_test:auth] 2015-10-13T18:47:09.579-0400 ReplSetTest n: 2 ports: [ 20265, 20266, 20267 ] 20267 number
[js_test:auth] 2015-10-13T18:47:09.580-0400 {
[js_test:auth] 2015-10-13T18:47:09.580-0400 "useHostName" : true,
[js_test:auth] 2015-10-13T18:47:09.580-0400 "oplogSize" : 40,
[js_test:auth] 2015-10-13T18:47:09.580-0400 "keyFile" : "jstests/libs/key2",
[js_test:auth] 2015-10-13T18:47:09.580-0400 "port" : 20267,
[js_test:auth] 2015-10-13T18:47:09.580-0400 "noprealloc" : "",
[js_test:auth] 2015-10-13T18:47:09.581-0400 "smallfiles" : "",
[js_test:auth] 2015-10-13T18:47:09.581-0400 "replSet" : "d1",
[js_test:auth] 2015-10-13T18:47:09.581-0400 "dbpath" : "$set-$node",
[js_test:auth] 2015-10-13T18:47:09.581-0400 "verbose" : 0,
[js_test:auth] 2015-10-13T18:47:09.581-0400 "restart" : undefined,
[js_test:auth] 2015-10-13T18:47:09.581-0400 "pathOpts" : {
[js_test:auth] 2015-10-13T18:47:09.581-0400 "node" : 2,
[js_test:auth] 2015-10-13T18:47:09.581-0400 "set" : "d1"
[js_test:auth] 2015-10-13T18:47:09.581-0400 }
[js_test:auth] 2015-10-13T18:47:09.581-0400 }
[js_test:auth] 2015-10-13T18:47:09.581-0400 ReplSetTest Starting....
[js_test:auth] 2015-10-13T18:47:09.581-0400 Resetting db path '/data/db/job1/mongorunner/d1-2'
[js_test:auth] 2015-10-13T18:47:09.584-0400 2015-10-13T18:47:09.583-0400 I - [thread1] shell: started program (sh17644): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key2 --port 20267 --noprealloc --smallfiles --replSet d1 --dbpath /data/db/job1/mongorunner/d1-2 --nopreallocj --setParameter enableTestCommands=1
[js_test:auth] 2015-10-13T18:47:09.584-0400 2015-10-13T18:47:09.584-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20267, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:09.599-0400 d20267| note: noprealloc may hurt performance in many applications
[js_test:auth] 2015-10-13T18:47:09.647-0400 d20267| 2015-10-13T18:47:09.647-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
[js_test:auth] 2015-10-13T18:47:09.784-0400 2015-10-13T18:47:09.784-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20267, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:09.985-0400 2015-10-13T18:47:09.985-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20267, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:10.070-0400 d20267| 2015-10-13T18:47:10.070-0400 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
[js_test:auth] 2015-10-13T18:47:10.070-0400 d20267| 2015-10-13T18:47:10.070-0400 I CONTROL [initandlisten] MongoDB starting : pid=17644 port=20267 dbpath=/data/db/job1/mongorunner/d1-2 64-bit host=ubuntu
[js_test:auth] 2015-10-13T18:47:10.071-0400 d20267| 2015-10-13T18:47:10.070-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:10.071-0400 d20267| 2015-10-13T18:47:10.070-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB.
[js_test:auth] 2015-10-13T18:47:10.071-0400 d20267| 2015-10-13T18:47:10.070-0400 I CONTROL [initandlisten] ** Not recommended for production.
[js_test:auth] 2015-10-13T18:47:10.071-0400 d20267| 2015-10-13T18:47:10.070-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:10.071-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:10.071-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine.
[js_test:auth] 2015-10-13T18:47:10.071-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems:
[js_test:auth] 2015-10-13T18:47:10.072-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options]
[js_test:auth] 2015-10-13T18:47:10.072-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:10.072-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
[js_test:auth] 2015-10-13T18:47:10.072-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[js_test:auth] 2015-10-13T18:47:10.072-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:10.072-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
[js_test:auth] 2015-10-13T18:47:10.072-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never'
[js_test:auth] 2015-10-13T18:47:10.072-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten]
[js_test:auth] 2015-10-13T18:47:10.072-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] db version v3.1.10-pre-
[js_test:auth] 2015-10-13T18:47:10.072-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1
[js_test:auth] 2015-10-13T18:47:10.073-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
[js_test:auth] 2015-10-13T18:47:10.073-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] allocator: tcmalloc
[js_test:auth] 2015-10-13T18:47:10.073-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] modules: subscription
[js_test:auth] 2015-10-13T18:47:10.073-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] build environment:
[js_test:auth] 2015-10-13T18:47:10.073-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] distarch: x86_64
[js_test:auth] 2015-10-13T18:47:10.073-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] target_arch: x86_64
[js_test:auth] 2015-10-13T18:47:10.073-0400 d20267| 2015-10-13T18:47:10.071-0400 I CONTROL [initandlisten] options: { net: { port: 20267 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "d1" }, security: { keyFile: "jstests/libs/key2" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/job1/mongorunner/d1-2", mmapv1: { preallocDataFiles: false, smallFiles: true } } }
[js_test:auth] 2015-10-13T18:47:10.185-0400 2015-10-13T18:47:10.185-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20267, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:47:10.197-0400 d20267| 2015-10-13T18:47:10.197-0400 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument Did not find replica set lastVote document in local.replset.election
[js_test:auth] 2015-10-13T18:47:10.197-0400 d20267| 2015-10-13T18:47:10.197-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset
[js_test:auth] 2015-10-13T18:47:10.197-0400 d20267| 2015-10-13T18:47:10.197-0400 I FTDC [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/d1-2/diagnostic.data'
[js_test:auth] 2015-10-13T18:47:10.312-0400 d20267| 2015-10-13T18:47:10.312-0400 I NETWORK [initandlisten] waiting for connections on port 20267
[js_test:auth] 2015-10-13T18:47:10.386-0400 d20267| 2015-10-13T18:47:10.386-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55142 #1 (1 connection now open)
[js_test:auth] 2015-10-13T18:47:10.386-0400 d20267| 2015-10-13T18:47:10.386-0400 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access
[js_test:auth] 2015-10-13T18:47:10.386-0400 [
[js_test:auth] 2015-10-13T18:47:10.386-0400 connection to ubuntu:20265,
[js_test:auth] 2015-10-13T18:47:10.387-0400 connection to ubuntu:20266,
[js_test:auth] 2015-10-13T18:47:10.387-0400 connection to ubuntu:20267
[js_test:auth] 2015-10-13T18:47:10.387-0400 ]
[js_test:auth] 2015-10-13T18:47:10.387-0400 {
[js_test:auth] 2015-10-13T18:47:10.387-0400 "replSetInitiate" : {
[js_test:auth] 2015-10-13T18:47:10.387-0400 "_id" : "d1",
[js_test:auth] 2015-10-13T18:47:10.387-0400 "members" : [
[js_test:auth] 2015-10-13T18:47:10.387-0400 {
[js_test:auth] 2015-10-13T18:47:10.387-0400 "_id" : 0,
[js_test:auth] 2015-10-13T18:47:10.387-0400 "host" : "ubuntu:20265"
[js_test:auth] 2015-10-13T18:47:10.387-0400 },
[js_test:auth] 2015-10-13T18:47:10.388-0400 {
[js_test:auth] 2015-10-13T18:47:10.388-0400 "_id" : 1,
[js_test:auth] 2015-10-13T18:47:10.388-0400 "host" : "ubuntu:20266"
[js_test:auth] 2015-10-13T18:47:10.388-0400 },
[js_test:auth] 2015-10-13T18:47:10.388-0400 {
[js_test:auth] 2015-10-13T18:47:10.388-0400 "_id" : 2,
[js_test:auth] 2015-10-13T18:47:10.388-0400 "host" : "ubuntu:20267"
[js_test:auth] 2015-10-13T18:47:10.388-0400 }
[js_test:auth] 2015-10-13T18:47:10.388-0400 ]
[js_test:auth] 2015-10-13T18:47:10.389-0400 }
[js_test:auth] 2015-10-13T18:47:10.389-0400 }
[js_test:auth] 2015-10-13T18:47:10.389-0400 d20265| 2015-10-13T18:47:10.387-0400 I REPL [conn1] replSetInitiate admin command received from client
[js_test:auth] 2015-10-13T18:47:10.389-0400 d20265| 2015-10-13T18:47:10.388-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51940 #2 (2 connections now open)
[js_test:auth] 2015-10-13T18:47:10.407-0400 d20265| 2015-10-13T18:47:10.407-0400 I ACCESS [conn2] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:10.407-0400 d20265| 2015-10-13T18:47:10.407-0400 I NETWORK [conn2] end connection 127.0.0.1:51940 (1 connection now open)
[js_test:auth] 2015-10-13T18:47:10.407-0400 d20266| 2015-10-13T18:47:10.407-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:33863 #2 (2 connections now open)
[js_test:auth] 2015-10-13T18:47:10.423-0400 d20266| 2015-10-13T18:47:10.423-0400 I ACCESS [conn2] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:10.423-0400 d20266| 2015-10-13T18:47:10.423-0400 I NETWORK [conn2] end connection 127.0.0.1:33863 (1 connection now open)
[js_test:auth] 2015-10-13T18:47:10.424-0400 d20267| 2015-10-13T18:47:10.423-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:46079 #2 (2 connections now open)
[js_test:auth] 2015-10-13T18:47:10.442-0400 d20267| 2015-10-13T18:47:10.441-0400 I ACCESS [conn2] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:10.442-0400 d20265| 2015-10-13T18:47:10.442-0400 I REPL [conn1] replSetInitiate config object with 3 members parses ok
[js_test:auth] 2015-10-13T18:47:10.443-0400 d20267| 2015-10-13T18:47:10.442-0400 I NETWORK [conn2] end connection 127.0.0.1:46079 (1 connection now open)
[js_test:auth] 2015-10-13T18:47:10.443-0400 d20266| 2015-10-13T18:47:10.442-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:33866 #3 (2 connections now open)
[js_test:auth] 2015-10-13T18:47:10.443-0400 d20267| 2015-10-13T18:47:10.442-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:46081 #3 (2 connections now open)
[js_test:auth] 2015-10-13T18:47:10.473-0400 d20266| 2015-10-13T18:47:10.473-0400 I ACCESS [conn3] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:10.473-0400 d20267| 2015-10-13T18:47:10.473-0400 I ACCESS [conn3] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:10.473-0400 d20265| 2015-10-13T18:47:10.473-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20266
[js_test:auth] 2015-10-13T18:47:10.473-0400 d20265| 2015-10-13T18:47:10.473-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20267
[js_test:auth] 2015-10-13T18:47:10.473-0400 d20265| 2015-10-13T18:47:10.473-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51956 #3 (2 connections now open)
[js_test:auth] 2015-10-13T18:47:10.474-0400 d20265| 2015-10-13T18:47:10.473-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51957 #4 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:10.491-0400 d20265| 2015-10-13T18:47:10.491-0400 I ACCESS [conn3] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:10.491-0400 d20267| 2015-10-13T18:47:10.491-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20265
[js_test:auth] 2015-10-13T18:47:10.491-0400 d20265| 2015-10-13T18:47:10.491-0400 I ACCESS [conn4] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:10.492-0400 d20266| 2015-10-13T18:47:10.491-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20265
[js_test:auth] 2015-10-13T18:47:10.629-0400 d20265| 2015-10-13T18:47:10.629-0400 I REPL [ReplicationExecutor] New replica set config in use: { _id: "d1", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20265", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20266", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20267", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[js_test:auth] 2015-10-13T18:47:10.629-0400 d20265| 2015-10-13T18:47:10.629-0400 I REPL [ReplicationExecutor] This node is ubuntu:20265 in the config
[js_test:auth] 2015-10-13T18:47:10.629-0400 d20265| 2015-10-13T18:47:10.629-0400 I REPL [ReplicationExecutor] transition to STARTUP2
[js_test:auth] 2015-10-13T18:47:10.629-0400 d20265| 2015-10-13T18:47:10.629-0400 I REPL [conn1] ******
[js_test:auth] 2015-10-13T18:47:10.629-0400 d20265| 2015-10-13T18:47:10.629-0400 I REPL [conn1] creating replication oplog of size: 40MB...
[js_test:auth] 2015-10-13T18:47:10.630-0400 d20265| 2015-10-13T18:47:10.629-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state STARTUP
[js_test:auth] 2015-10-13T18:47:10.630-0400 d20265| 2015-10-13T18:47:10.629-0400 I REPL [ReplicationExecutor] Member ubuntu:20267 is now in state STARTUP
[js_test:auth] 2015-10-13T18:47:10.696-0400 d20265| 2015-10-13T18:47:10.696-0400 I STORAGE [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs
[js_test:auth] 2015-10-13T18:47:10.697-0400 d20265| 2015-10-13T18:47:10.696-0400 I STORAGE [conn1] Scanning the oplog to determine where to place markers for when to truncate
[js_test:auth] 2015-10-13T18:47:11.035-0400 d20265| 2015-10-13T18:47:11.035-0400 I REPL [conn1] ******
[js_test:auth] 2015-10-13T18:47:11.036-0400 d20265| 2015-10-13T18:47:11.036-0400 I REPL [conn1] Starting replication applier threads
[js_test:auth] 2015-10-13T18:47:11.036-0400 d20265| 2015-10-13T18:47:11.036-0400 I REPL [ReplicationExecutor] transition to RECOVERING
[js_test:auth] 2015-10-13T18:47:11.037-0400 d20265| 2015-10-13T18:47:11.036-0400 I COMMAND [conn1] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "d1", members: [ { _id: 0.0, host: "ubuntu:20265" }, { _id: 1.0, host: "ubuntu:20266" }, { _id: 2.0, host: "ubuntu:20267" } ] } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{ Global: { acquireCount: { r: 8, w: 4, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 1132 } }, Database: { acquireCount: { r: 1, w: 2, W: 2 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 649ms
[js_test:auth] 2015-10-13T18:47:11.037-0400 d20265| 2015-10-13T18:47:11.037-0400 I REPL [ReplicationExecutor] transition to SECONDARY
[js_test:auth] 2015-10-13T18:47:12.492-0400 d20265| 2015-10-13T18:47:12.492-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52116 #5 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:12.493-0400 d20265| 2015-10-13T18:47:12.493-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52117 #6 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:12.510-0400 d20265| 2015-10-13T18:47:12.510-0400 I ACCESS [conn6] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:12.511-0400 d20265| 2015-10-13T18:47:12.510-0400 I NETWORK [conn6] end connection 127.0.0.1:52117 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:12.511-0400 d20266| 2015-10-13T18:47:12.511-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34033 #4 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:12.519-0400 d20265| 2015-10-13T18:47:12.518-0400 I ACCESS [conn5] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:12.519-0400 d20265| 2015-10-13T18:47:12.519-0400 I NETWORK [conn5] end connection 127.0.0.1:52116 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:12.519-0400 d20266| 2015-10-13T18:47:12.519-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34034 #5 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:12.527-0400 d20266| 2015-10-13T18:47:12.527-0400 I ACCESS [conn4] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:12.527-0400 d20266| 2015-10-13T18:47:12.527-0400 I NETWORK [conn4] end connection 127.0.0.1:34033 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:12.527-0400 d20267| 2015-10-13T18:47:12.527-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:46249 #4 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:12.536-0400 d20266| 2015-10-13T18:47:12.536-0400 I ACCESS [conn5] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:12.536-0400 d20266| 2015-10-13T18:47:12.536-0400 I NETWORK [conn5] end connection 127.0.0.1:34034 (2 connections now open)
[js_test:auth] 2015-10-13T18:47:12.537-0400 d20267| 2015-10-13T18:47:12.537-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:46250 #5 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:12.543-0400 d20267| 2015-10-13T18:47:12.542-0400 I ACCESS [conn4] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:12.543-0400 d20267| 2015-10-13T18:47:12.543-0400 I NETWORK [conn4] end connection 127.0.0.1:46249 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:12.552-0400 d20267| 2015-10-13T18:47:12.552-0400 I ACCESS [conn5] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:12.553-0400 d20267| 2015-10-13T18:47:12.552-0400 I NETWORK [conn5] end connection 127.0.0.1:46250 (2 connections now open)
[js_test:auth] 2015-10-13T18:47:12.665-0400 d20266| 2015-10-13T18:47:12.665-0400 I REPL [replExecDBWorker-0] Starting replication applier threads
[js_test:auth] 2015-10-13T18:47:12.666-0400 d20266| 2015-10-13T18:47:12.665-0400 W REPL [rsSync] did not receive a valid config yet
[js_test:auth] 2015-10-13T18:47:12.666-0400 d20266| 2015-10-13T18:47:12.666-0400 I REPL [ReplicationExecutor] New replica set config in use: { _id: "d1", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20265", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20266", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20267", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[js_test:auth] 2015-10-13T18:47:12.666-0400 d20266| 2015-10-13T18:47:12.666-0400 I REPL [ReplicationExecutor] This node is ubuntu:20266 in the config
[js_test:auth] 2015-10-13T18:47:12.666-0400 d20266| 2015-10-13T18:47:12.666-0400 I REPL [ReplicationExecutor] transition to STARTUP2
[js_test:auth] 2015-10-13T18:47:12.666-0400 d20266| 2015-10-13T18:47:12.666-0400 I REPL [ReplicationExecutor] Member ubuntu:20265 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:47:12.666-0400 d20267| 2015-10-13T18:47:12.666-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:46274 #6 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:12.675-0400 d20267| 2015-10-13T18:47:12.675-0400 I REPL [replExecDBWorker-0] Starting replication applier threads
[js_test:auth] 2015-10-13T18:47:12.675-0400 d20267| 2015-10-13T18:47:12.675-0400 W REPL [rsSync] did not receive a valid config yet
[js_test:auth] 2015-10-13T18:47:12.676-0400 d20267| 2015-10-13T18:47:12.675-0400 I REPL [ReplicationExecutor] New replica set config in use: { _id: "d1", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20265", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20266", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20267", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[js_test:auth] 2015-10-13T18:47:12.676-0400 d20267| 2015-10-13T18:47:12.675-0400 I REPL [ReplicationExecutor] This node is ubuntu:20267 in the config
[js_test:auth] 2015-10-13T18:47:12.676-0400 d20267| 2015-10-13T18:47:12.675-0400 I REPL [ReplicationExecutor] transition to STARTUP2
[js_test:auth] 2015-10-13T18:47:12.676-0400 d20266| 2015-10-13T18:47:12.676-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34062 #6 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:12.676-0400 d20267| 2015-10-13T18:47:12.676-0400 I REPL [ReplicationExecutor] Member ubuntu:20265 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:47:12.683-0400 d20267| 2015-10-13T18:47:12.683-0400 I ACCESS [conn6] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:12.683-0400 d20266| 2015-10-13T18:47:12.683-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20267
[js_test:auth] 2015-10-13T18:47:12.683-0400 d20266| 2015-10-13T18:47:12.683-0400 I REPL [ReplicationExecutor] Member ubuntu:20267 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:47:12.693-0400 d20266| 2015-10-13T18:47:12.693-0400 I ACCESS [conn6] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:12.693-0400 d20267| 2015-10-13T18:47:12.693-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20266
[js_test:auth] 2015-10-13T18:47:12.693-0400 d20267| 2015-10-13T18:47:12.693-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:47:13.038-0400 d20265| 2015-10-13T18:47:13.037-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:47:13.038-0400 d20265| 2015-10-13T18:47:13.037-0400 I REPL [ReplicationExecutor] Member ubuntu:20267 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:47:13.666-0400 d20266| 2015-10-13T18:47:13.666-0400 I REPL [rsSync] ******
[js_test:auth] 2015-10-13T18:47:13.666-0400 d20266| 2015-10-13T18:47:13.666-0400 I REPL [rsSync] creating replication oplog of size: 40MB...
[js_test:auth] 2015-10-13T18:47:13.675-0400 d20267| 2015-10-13T18:47:13.675-0400 I REPL [rsSync] ******
[js_test:auth] 2015-10-13T18:47:13.675-0400 d20267| 2015-10-13T18:47:13.675-0400 I REPL [rsSync] creating replication oplog of size: 40MB...
[js_test:auth] 2015-10-13T18:47:13.738-0400 d20266| 2015-10-13T18:47:13.737-0400 I STORAGE [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[js_test:auth] 2015-10-13T18:47:13.738-0400 d20266| 2015-10-13T18:47:13.737-0400 I STORAGE [rsSync] Scanning the oplog to determine where to place markers for when to truncate
[js_test:auth] 2015-10-13T18:47:13.749-0400 d20267| 2015-10-13T18:47:13.749-0400 I STORAGE [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[js_test:auth] 2015-10-13T18:47:13.749-0400 d20267| 2015-10-13T18:47:13.749-0400 I STORAGE [rsSync] Scanning the oplog to determine where to place markers for when to truncate
[js_test:auth] 2015-10-13T18:47:14.118-0400 d20266| 2015-10-13T18:47:14.117-0400 I REPL [rsSync] ******
[js_test:auth] 2015-10-13T18:47:14.118-0400 d20267| 2015-10-13T18:47:14.117-0400 I REPL [rsSync] ******
[js_test:auth] 2015-10-13T18:47:14.118-0400 d20267| 2015-10-13T18:47:14.117-0400 I REPL [rsSync] initial sync pending
[js_test:auth] 2015-10-13T18:47:14.119-0400 d20266| 2015-10-13T18:47:14.117-0400 I REPL [rsSync] initial sync pending
[js_test:auth] 2015-10-13T18:47:14.234-0400 d20267| 2015-10-13T18:47:14.233-0400 I REPL [ReplicationExecutor] syncing from: ubuntu:20265
[js_test:auth] 2015-10-13T18:47:14.234-0400 d20265| 2015-10-13T18:47:14.234-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52254 #7 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:14.235-0400 d20266| 2015-10-13T18:47:14.234-0400 I REPL [ReplicationExecutor] syncing from: ubuntu:20265
[js_test:auth] 2015-10-13T18:47:14.235-0400 d20265| 2015-10-13T18:47:14.235-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52255 #8 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:14.255-0400 d20265| 2015-10-13T18:47:14.254-0400 I ACCESS [conn7] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:14.255-0400 d20265| 2015-10-13T18:47:14.255-0400 I ACCESS [conn8] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:14.268-0400 d20267| 2015-10-13T18:47:14.268-0400 I REPL [rsSync] initial sync drop all databases
[js_test:auth] 2015-10-13T18:47:14.268-0400 d20266| 2015-10-13T18:47:14.268-0400 I REPL [rsSync] initial sync drop all databases
[js_test:auth] 2015-10-13T18:47:14.268-0400 d20267| 2015-10-13T18:47:14.268-0400 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1
[js_test:auth] 2015-10-13T18:47:14.268-0400 d20267| 2015-10-13T18:47:14.268-0400 I REPL [rsSync] initial sync clone all databases
[js_test:auth] 2015-10-13T18:47:14.268-0400 d20266| 2015-10-13T18:47:14.268-0400 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1
[js_test:auth] 2015-10-13T18:47:14.269-0400 d20266| 2015-10-13T18:47:14.268-0400 I REPL [rsSync] initial sync clone all databases
[js_test:auth] 2015-10-13T18:47:14.269-0400 d20267| 2015-10-13T18:47:14.268-0400 I REPL [rsSync] initial sync data copy, starting syncup
[js_test:auth] 2015-10-13T18:47:14.269-0400 d20267| 2015-10-13T18:47:14.268-0400 I REPL [rsSync] oplog sync 1 of 3
[js_test:auth] 2015-10-13T18:47:14.269-0400 d20266| 2015-10-13T18:47:14.268-0400 I REPL [rsSync] initial sync data copy, starting syncup
[js_test:auth] 2015-10-13T18:47:14.269-0400 d20266| 2015-10-13T18:47:14.269-0400 I REPL [rsSync] oplog sync 1 of 3
[js_test:auth] 2015-10-13T18:47:14.269-0400 d20267| 2015-10-13T18:47:14.269-0400 I REPL [rsSync] oplog sync 2 of 3
[js_test:auth] 2015-10-13T18:47:14.269-0400 d20266| 2015-10-13T18:47:14.269-0400 I REPL [rsSync] oplog sync 2 of 3
[js_test:auth] 2015-10-13T18:47:14.269-0400 d20267| 2015-10-13T18:47:14.269-0400 I REPL [rsSync] initial sync building indexes
[js_test:auth] 2015-10-13T18:47:14.269-0400 d20267| 2015-10-13T18:47:14.269-0400 I REPL [rsSync] oplog sync 3 of 3
[js_test:auth] 2015-10-13T18:47:14.269-0400 d20266| 2015-10-13T18:47:14.269-0400 I REPL [rsSync] initial sync building indexes
[js_test:auth] 2015-10-13T18:47:14.269-0400 d20266| 2015-10-13T18:47:14.269-0400 I REPL [rsSync] oplog sync 3 of 3
[js_test:auth] 2015-10-13T18:47:14.270-0400 d20267| 2015-10-13T18:47:14.270-0400 I REPL [rsSync] initial sync finishing up
[js_test:auth] 2015-10-13T18:47:14.270-0400 d20267| 2015-10-13T18:47:14.270-0400 I REPL [rsSync] set minValid=(term: 0, timestamp: Oct 13 18:47:11:1)
[js_test:auth] 2015-10-13T18:47:14.270-0400 d20266| 2015-10-13T18:47:14.270-0400 I REPL [rsSync] initial sync finishing up
[js_test:auth] 2015-10-13T18:47:14.270-0400 d20266| 2015-10-13T18:47:14.270-0400 I REPL [rsSync] set minValid=(term: 0, timestamp: Oct 13 18:47:11:1)
[js_test:auth] 2015-10-13T18:47:14.287-0400 d20267| 2015-10-13T18:47:14.287-0400 I REPL [rsSync] initial sync done
[js_test:auth] 2015-10-13T18:47:14.287-0400 d20266| 2015-10-13T18:47:14.287-0400 I REPL [rsSync] initial sync done
[js_test:auth] 2015-10-13T18:47:14.289-0400 d20265| 2015-10-13T18:47:14.289-0400 I NETWORK [conn7] end connection 127.0.0.1:52254 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:14.289-0400 d20267| 2015-10-13T18:47:14.289-0400 I REPL [ReplicationExecutor] transition to RECOVERING
[js_test:auth] 2015-10-13T18:47:14.289-0400 d20265| 2015-10-13T18:47:14.289-0400 I NETWORK [conn8] end connection 127.0.0.1:52255 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:14.289-0400 d20266| 2015-10-13T18:47:14.289-0400 I REPL [ReplicationExecutor] transition to RECOVERING
[js_test:auth] 2015-10-13T18:47:14.289-0400 d20267| 2015-10-13T18:47:14.289-0400 I REPL [ReplicationExecutor] transition to SECONDARY
[js_test:auth] 2015-10-13T18:47:14.291-0400 d20266| 2015-10-13T18:47:14.291-0400 I REPL [ReplicationExecutor] transition to SECONDARY
[js_test:auth] 2015-10-13T18:47:14.666-0400 d20266| 2015-10-13T18:47:14.666-0400 I REPL [ReplicationExecutor] could not find member to sync from
[js_test:auth] 2015-10-13T18:47:14.666-0400 d20266| 2015-10-13T18:47:14.666-0400 I REPL [ReplicationExecutor] Member ubuntu:20267 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:47:14.676-0400 d20267| 2015-10-13T18:47:14.676-0400 I REPL [ReplicationExecutor] could not find member to sync from [js_test:auth] 2015-10-13T18:47:14.676-0400 d20267| 2015-10-13T18:47:14.676-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state SECONDARY [js_test:auth] 2015-10-13T18:47:15.038-0400 d20265| 2015-10-13T18:47:15.037-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state SECONDARY [js_test:auth] 2015-10-13T18:47:15.038-0400 d20265| 2015-10-13T18:47:15.037-0400 I REPL [ReplicationExecutor] Member ubuntu:20267 is now in state SECONDARY [js_test:auth] 2015-10-13T18:47:16.575-0400 d20265| 2015-10-13T18:47:16.574-0400 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected [js_test:auth] 2015-10-13T18:47:16.698-0400 d20267| 2015-10-13T18:47:16.697-0400 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "d1", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1444776431000|1, t: 0 } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, W: 2 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 122ms [js_test:auth] 2015-10-13T18:47:16.698-0400 d20266| 2015-10-13T18:47:16.697-0400 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "d1", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1444776431000|1, t: 0 } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, W: 2 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 122ms [js_test:auth] 2015-10-13T18:47:16.698-0400 d20265| 2015-10-13T18:47:16.697-0400 I REPL 
[ReplicationExecutor] dry election run succeeded, running for election [js_test:auth] 2015-10-13T18:47:16.797-0400 d20265| 2015-10-13T18:47:16.797-0400 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 1 [js_test:auth] 2015-10-13T18:47:16.797-0400 d20265| 2015-10-13T18:47:16.797-0400 I REPL [ReplicationExecutor] transition to PRIMARY [js_test:auth] 2015-10-13T18:47:17.038-0400 d20265| 2015-10-13T18:47:17.037-0400 I REPL [rsSync] transition to primary complete; database writes are now permitted [js_test:auth] 2015-10-13T18:47:17.138-0400 d1 initiated [js_test:auth] 2015-10-13T18:47:17.157-0400 d20265| 2015-10-13T18:47:17.157-0400 I ACCESS [conn1] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:17.173-0400 d20266| 2015-10-13T18:47:17.173-0400 I ACCESS [conn1] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:17.193-0400 d20267| 2015-10-13T18:47:17.193-0400 I ACCESS [conn1] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:17.195-0400 adding shard w/out auth d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 [js_test:auth] 2015-10-13T18:47:17.196-0400 s20264| 2015-10-13T18:47:17.196-0400 I ACCESS [conn1] Unauthorized not authorized on admin to execute command { addShard: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267" } [js_test:auth] 2015-10-13T18:47:17.196-0400 { [js_test:auth] 2015-10-13T18:47:17.196-0400 "ok" : 0, [js_test:auth] 2015-10-13T18:47:17.196-0400 "errmsg" : "not authorized on admin to execute command { addShard: \"d1/ubuntu:20265,ubuntu:20266,ubuntu:20267\" }", [js_test:auth] 2015-10-13T18:47:17.196-0400 "code" : 13 [js_test:auth] 2015-10-13T18:47:17.196-0400 } [js_test:auth] 2015-10-13T18:47:17.198-0400 s20264| 2015-10-13T18:47:17.197-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:47:47.197-0400 cmd:{ usersInfo: [ { user: "foo", db: "admin" } ], showPrivileges: 
true, showCredentials: true, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:17.198-0400 s20264| 2015-10-13T18:47:17.198-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:17.217-0400 s20264| 2015-10-13T18:47:17.217-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:47:47.217-0400 cmd:{ usersInfo: [ { user: "foo", db: "admin" } ], showPrivileges: true, showCredentials: true, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:17.218-0400 s20264| 2015-10-13T18:47:17.217-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:17.218-0400 s20264| 2015-10-13T18:47:17.218-0400 I ACCESS [conn1] Successfully authenticated as principal foo on admin [js_test:auth] 2015-10-13T18:47:17.219-0400 adding shard w/wrong key d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 [js_test:auth] 2015-10-13T18:47:17.219-0400 s20264| 2015-10-13T18:47:17.219-0400 I NETWORK [conn1] Starting new replica set monitor for d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 [js_test:auth] 2015-10-13T18:47:17.219-0400 s20264| 2015-10-13T18:47:17.219-0400 D NETWORK [conn1] Starting new refresh of replica set d1 [js_test:auth] 2015-10-13T18:47:17.219-0400 s20264| 2015-10-13T18:47:17.219-0400 D NETWORK [conn1] creating new connection to:ubuntu:20267 [js_test:auth] 2015-10-13T18:47:17.219-0400 s20264| 2015-10-13T18:47:17.219-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:47:17.219-0400 s20264| 2015-10-13T18:47:17.219-0400 D NETWORK [conn1] connected to server ubuntu:20267 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:17.219-0400 d20267| 2015-10-13T18:47:17.219-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:46619 #7 (4 connections now open) [js_test:auth] 2015-10-13T18:47:17.219-0400 s20264| 2015-10-13T18:47:17.219-0400 D NETWORK [conn1] connected connection! 
[js_test:auth] 2015-10-13T18:47:17.219-0400 s20264| 2015-10-13T18:47:17.219-0400 D SHARDING [conn1] calling onCreate auth for ubuntu:20267 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:17.240-0400 d20267| 2015-10-13T18:47:17.240-0400 I ACCESS [conn7] SCRAM-SHA-1 authentication failed for __system on local from client 127.0.1.1 ; AuthenticationFailed SCRAM-SHA-1 authentication failed, storedKey mismatch [js_test:auth] 2015-10-13T18:47:17.240-0400 s20264| 2015-10-13T18:47:17.240-0400 D - [conn1] User Assertion: 18:Authentication failed. [js_test:auth] 2015-10-13T18:47:17.240-0400 s20264| 2015-10-13T18:47:17.240-0400 D - [conn1] User Assertion: 18:Authentication failed. [js_test:auth] 2015-10-13T18:47:17.240-0400 s20264| 2015-10-13T18:47:17.240-0400 D - [conn1] User Assertion: 18:Authentication failed. [js_test:auth] 2015-10-13T18:47:17.240-0400 s20264| 2015-10-13T18:47:17.240-0400 I NETWORK [conn1] can't authenticate to ubuntu:20267 (127.0.1.1) as internal user, error: Authentication failed. 
[js_test:auth] 2015-10-13T18:47:17.240-0400 s20264| 2015-10-13T18:47:17.240-0400 D - [conn1] User Assertion: 15847:can't authenticate to server ubuntu:20267 [js_test:auth] 2015-10-13T18:47:17.241-0400 d20267| 2015-10-13T18:47:17.240-0400 I NETWORK [conn7] end connection 127.0.0.1:46619 (3 connections now open) [js_test:auth] 2015-10-13T18:47:17.241-0400 s20264| 2015-10-13T18:47:17.240-0400 D NETWORK [conn1] creating new connection to:ubuntu:20265 [js_test:auth] 2015-10-13T18:47:17.241-0400 s20264| 2015-10-13T18:47:17.240-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:47:17.241-0400 s20264| 2015-10-13T18:47:17.240-0400 D NETWORK [conn1] connected to server ubuntu:20265 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:17.241-0400 d20265| 2015-10-13T18:47:17.240-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52493 #9 (4 connections now open) [js_test:auth] 2015-10-13T18:47:17.241-0400 s20264| 2015-10-13T18:47:17.241-0400 D NETWORK [conn1] connected connection! [js_test:auth] 2015-10-13T18:47:17.241-0400 s20264| 2015-10-13T18:47:17.241-0400 D SHARDING [conn1] calling onCreate auth for ubuntu:20265 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:17.259-0400 d20265| 2015-10-13T18:47:17.259-0400 I ACCESS [conn9] SCRAM-SHA-1 authentication failed for __system on local from client 127.0.1.1 ; AuthenticationFailed SCRAM-SHA-1 authentication failed, storedKey mismatch [js_test:auth] 2015-10-13T18:47:17.259-0400 s20264| 2015-10-13T18:47:17.259-0400 D - [conn1] User Assertion: 18:Authentication failed. [js_test:auth] 2015-10-13T18:47:17.259-0400 s20264| 2015-10-13T18:47:17.259-0400 D - [conn1] User Assertion: 18:Authentication failed. [js_test:auth] 2015-10-13T18:47:17.259-0400 s20264| 2015-10-13T18:47:17.259-0400 D - [conn1] User Assertion: 18:Authentication failed. 
[js_test:auth] 2015-10-13T18:47:17.259-0400 s20264| 2015-10-13T18:47:17.259-0400 I NETWORK [conn1] can't authenticate to ubuntu:20265 (127.0.1.1) as internal user, error: Authentication failed. [js_test:auth] 2015-10-13T18:47:17.260-0400 s20264| 2015-10-13T18:47:17.259-0400 D - [conn1] User Assertion: 15847:can't authenticate to server ubuntu:20265 [js_test:auth] 2015-10-13T18:47:17.260-0400 d20265| 2015-10-13T18:47:17.259-0400 I NETWORK [conn9] end connection 127.0.0.1:52493 (3 connections now open) [js_test:auth] 2015-10-13T18:47:17.260-0400 s20264| 2015-10-13T18:47:17.259-0400 D NETWORK [conn1] creating new connection to:ubuntu:20266 [js_test:auth] 2015-10-13T18:47:17.260-0400 s20264| 2015-10-13T18:47:17.259-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:47:17.260-0400 s20264| 2015-10-13T18:47:17.259-0400 D NETWORK [conn1] connected to server ubuntu:20266 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:17.260-0400 d20266| 2015-10-13T18:47:17.259-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34409 #7 (4 connections now open) [js_test:auth] 2015-10-13T18:47:17.260-0400 s20264| 2015-10-13T18:47:17.260-0400 D NETWORK [conn1] connected connection! [js_test:auth] 2015-10-13T18:47:17.260-0400 s20264| 2015-10-13T18:47:17.260-0400 D SHARDING [conn1] calling onCreate auth for ubuntu:20266 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:17.276-0400 d20266| 2015-10-13T18:47:17.276-0400 I ACCESS [conn7] SCRAM-SHA-1 authentication failed for __system on local from client 127.0.1.1 ; AuthenticationFailed SCRAM-SHA-1 authentication failed, storedKey mismatch [js_test:auth] 2015-10-13T18:47:17.277-0400 s20264| 2015-10-13T18:47:17.276-0400 D - [conn1] User Assertion: 18:Authentication failed. [js_test:auth] 2015-10-13T18:47:17.277-0400 s20264| 2015-10-13T18:47:17.276-0400 D - [conn1] User Assertion: 18:Authentication failed. 
[js_test:auth] 2015-10-13T18:47:17.277-0400 s20264| 2015-10-13T18:47:17.276-0400 D - [conn1] User Assertion: 18:Authentication failed. [js_test:auth] 2015-10-13T18:47:17.277-0400 s20264| 2015-10-13T18:47:17.277-0400 I NETWORK [conn1] can't authenticate to ubuntu:20266 (127.0.1.1) as internal user, error: Authentication failed. [js_test:auth] 2015-10-13T18:47:17.277-0400 s20264| 2015-10-13T18:47:17.277-0400 D - [conn1] User Assertion: 15847:can't authenticate to server ubuntu:20266 [js_test:auth] 2015-10-13T18:47:17.277-0400 s20264| 2015-10-13T18:47:17.277-0400 W NETWORK [conn1] No primary detected for set d1 [js_test:auth] 2015-10-13T18:47:17.277-0400 d20266| 2015-10-13T18:47:17.277-0400 I NETWORK [conn7] end connection 127.0.0.1:34409 (3 connections now open) [js_test:auth] 2015-10-13T18:47:17.277-0400 s20264| 2015-10-13T18:47:17.277-0400 I NETWORK [conn1] All nodes for set d1 are down. This has happened for 1 checks in a row. Polling will stop after 29 more failed checks [js_test:auth] 2015-10-13T18:47:17.277-0400 s20264| 2015-10-13T18:47:17.277-0400 D NETWORK [conn1] Removing connections from all pools for host: d1 [js_test:auth] 2015-10-13T18:47:17.277-0400 s20264| 2015-10-13T18:47:17.277-0400 I COMMAND [conn1] addShard request '{ addShard: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267" }' failed: No master found for set d1 [js_test:auth] 2015-10-13T18:47:17.277-0400 Error: command { "addShard" : "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267" } failed: { "ok" : 0, "errmsg" : "No master found for set d1", "code" : 10107 } [js_test:auth] 2015-10-13T18:47:17.278-0400 start rs w/correct key [js_test:auth] 2015-10-13T18:47:17.278-0400 ReplSetTest n: 0 ports: [ 20265, 20266, 20267 ] 20265 number [js_test:auth] 2015-10-13T18:47:17.278-0400 ReplSetTest stop *** Shutting down mongod in port 20265 *** [js_test:auth] 2015-10-13T18:47:17.278-0400 d20265| 2015-10-13T18:47:17.277-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd 
ends [js_test:auth] 2015-10-13T18:47:17.278-0400 d20265| 2015-10-13T18:47:17.277-0400 I FTDC [signalProcessingThread] Stopping full-time diagnostic data capture [js_test:auth] 2015-10-13T18:47:17.279-0400 d20265| 2015-10-13T18:47:17.279-0400 I REPL [signalProcessingThread] Stopping replication applier threads [js_test:auth] 2015-10-13T18:47:17.780-0400 s20264| 2015-10-13T18:47:17.780-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: auth-configRS [js_test:auth] 2015-10-13T18:47:17.926-0400 s20264| 2015-10-13T18:47:17.926-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:47.926-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776437926), up: 10, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:17.926-0400 s20264| 2015-10-13T18:47:17.926-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:17.942-0400 s20264| 2015-10-13T18:47:17.942-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776437000|1, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:17.942-0400 s20264| 2015-10-13T18:47:17.942-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:17.942-0400 s20264| 2015-10-13T18:47:17.942-0400 D SHARDING [Balancer] found 0 shards listed on config server(s) [js_test:auth] 2015-10-13T18:47:17.942-0400 s20264| 2015-10-13T18:47:17.942-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776437000|1, t: 1 } }, 
limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:17.942-0400 s20264| 2015-10-13T18:47:17.942-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:17.942-0400 s20264| 2015-10-13T18:47:17.942-0400 D SHARDING [Balancer] Refreshing MaxChunkSize: 1MB [js_test:auth] 2015-10-13T18:47:17.943-0400 s20264| 2015-10-13T18:47:17.942-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776437000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:17.943-0400 s20264| 2015-10-13T18:47:17.942-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:47:17.943-0400 s20264| 2015-10-13T18:47:17.943-0400 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:auth] 2015-10-13T18:47:17.943-0400 s20264| 2015-10-13T18:47:17.943-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:47.943-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776437943), up: 10, waiting: true, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:17.943-0400 s20264| 2015-10-13T18:47:17.943-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:18.038-0400 d20265| 2015-10-13T18:47:18.038-0400 I CONTROL [signalProcessingThread] now exiting [js_test:auth] 2015-10-13T18:47:18.038-0400 d20265| 2015-10-13T18:47:18.038-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... 
[js_test:auth] 2015-10-13T18:47:18.038-0400 d20265| 2015-10-13T18:47:18.038-0400 I NETWORK [signalProcessingThread] closing listening socket: 25 [js_test:auth] 2015-10-13T18:47:18.039-0400 d20265| 2015-10-13T18:47:18.038-0400 I NETWORK [signalProcessingThread] closing listening socket: 26 [js_test:auth] 2015-10-13T18:47:18.039-0400 d20265| 2015-10-13T18:47:18.038-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20265.sock [js_test:auth] 2015-10-13T18:47:18.039-0400 d20265| 2015-10-13T18:47:18.038-0400 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... [js_test:auth] 2015-10-13T18:47:18.039-0400 d20265| 2015-10-13T18:47:18.038-0400 I NETWORK [signalProcessingThread] shutdown: going to close sockets... [js_test:auth] 2015-10-13T18:47:18.039-0400 d20265| 2015-10-13T18:47:18.038-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down [js_test:auth] 2015-10-13T18:47:18.039-0400 d20265| 2015-10-13T18:47:18.038-0400 I NETWORK [conn1] end connection 127.0.0.1:53582 (2 connections now open) [js_test:auth] 2015-10-13T18:47:18.039-0400 d20265| 2015-10-13T18:47:18.038-0400 I NETWORK [conn4] end connection 127.0.0.1:51957 (2 connections now open) [js_test:auth] 2015-10-13T18:47:18.039-0400 d20265| 2015-10-13T18:47:18.038-0400 I NETWORK [conn3] end connection 127.0.0.1:51956 (2 connections now open) [js_test:auth] 2015-10-13T18:47:18.358-0400 d20265| 2015-10-13T18:47:18.357-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock... 
[js_test:auth] 2015-10-13T18:47:18.358-0400 d20265| 2015-10-13T18:47:18.358-0400 I CONTROL [signalProcessingThread] dbexit: rc: 0 [js_test:auth] 2015-10-13T18:47:18.365-0400 d20267| 2015-10-13T18:47:18.365-0400 I NETWORK [conn3] end connection 127.0.0.1:46081 (2 connections now open) [js_test:auth] 2015-10-13T18:47:18.365-0400 d20266| 2015-10-13T18:47:18.365-0400 I NETWORK [conn3] end connection 127.0.0.1:33866 (2 connections now open) [js_test:auth] 2015-10-13T18:47:18.667-0400 d20266| 2015-10-13T18:47:18.667-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable End of file [js_test:auth] 2015-10-13T18:47:18.667-0400 d20266| 2015-10-13T18:47:18.667-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:47:18.667-0400 d20266| 2015-10-13T18:47:18.667-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:47:18.667-0400 d20266| 2015-10-13T18:47:18.667-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:47:18.667-0400 d20266| 2015-10-13T18:47:18.667-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:47:18.677-0400 d20267| 2015-10-13T18:47:18.677-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable End of file [js_test:auth] 2015-10-13T18:47:18.677-0400 d20267| 2015-10-13T18:47:18.677-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:47:18.677-0400 d20267| 2015-10-13T18:47:18.677-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:47:18.677-0400 d20267| 2015-10-13T18:47:18.677-0400 I 
ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:47:18.677-0400 d20267| 2015-10-13T18:47:18.677-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:47:19.278-0400 2015-10-13T18:47:19.277-0400 I - [thread1] shell: stopped mongo program on port 20265 [js_test:auth] 2015-10-13T18:47:19.278-0400 ReplSetTest stop *** Mongod in port 20265 shutdown with code (0) *** [js_test:auth] 2015-10-13T18:47:19.278-0400 ReplSetTest n: 1 ports: [ 20265, 20266, 20267 ] 20266 number [js_test:auth] 2015-10-13T18:47:19.278-0400 ReplSetTest stop *** Shutting down mongod in port 20266 *** [js_test:auth] 2015-10-13T18:47:19.278-0400 d20266| 2015-10-13T18:47:19.278-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends [js_test:auth] 2015-10-13T18:47:19.279-0400 d20266| 2015-10-13T18:47:19.278-0400 I FTDC [signalProcessingThread] Stopping full-time diagnostic data capture [js_test:auth] 2015-10-13T18:47:19.280-0400 d20266| 2015-10-13T18:47:19.279-0400 I REPL [signalProcessingThread] Stopping replication applier threads [js_test:auth] 2015-10-13T18:47:19.667-0400 d20266| 2015-10-13T18:47:19.667-0400 I CONTROL [signalProcessingThread] now exiting [js_test:auth] 2015-10-13T18:47:19.668-0400 d20266| 2015-10-13T18:47:19.667-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... 
[js_test:auth] 2015-10-13T18:47:19.668-0400 d20266| 2015-10-13T18:47:19.667-0400 I NETWORK [signalProcessingThread] closing listening socket: 28 [js_test:auth] 2015-10-13T18:47:19.668-0400 d20266| 2015-10-13T18:47:19.667-0400 I NETWORK [signalProcessingThread] closing listening socket: 29 [js_test:auth] 2015-10-13T18:47:19.668-0400 d20266| 2015-10-13T18:47:19.667-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20266.sock [js_test:auth] 2015-10-13T18:47:19.668-0400 d20266| 2015-10-13T18:47:19.667-0400 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... [js_test:auth] 2015-10-13T18:47:19.668-0400 d20266| 2015-10-13T18:47:19.667-0400 I NETWORK [signalProcessingThread] shutdown: going to close sockets... [js_test:auth] 2015-10-13T18:47:19.669-0400 d20266| 2015-10-13T18:47:19.668-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down [js_test:auth] 2015-10-13T18:47:19.669-0400 d20266| 2015-10-13T18:47:19.668-0400 I NETWORK [conn1] end connection 127.0.0.1:33482 (1 connection now open) [js_test:auth] 2015-10-13T18:47:19.669-0400 d20266| 2015-10-13T18:47:19.668-0400 I NETWORK [conn6] end connection 127.0.0.1:34062 (1 connection now open) [js_test:auth] 2015-10-13T18:47:20.109-0400 d20266| 2015-10-13T18:47:20.109-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock... 
[js_test:auth] 2015-10-13T18:47:20.112-0400 d20266| 2015-10-13T18:47:20.112-0400 I CONTROL [signalProcessingThread] dbexit: rc: 0 [js_test:auth] 2015-10-13T18:47:20.120-0400 d20267| 2015-10-13T18:47:20.120-0400 I NETWORK [conn6] end connection 127.0.0.1:46274 (1 connection now open) [js_test:auth] 2015-10-13T18:47:20.278-0400 2015-10-13T18:47:20.278-0400 I - [thread1] shell: stopped mongo program on port 20266 [js_test:auth] 2015-10-13T18:47:20.278-0400 ReplSetTest stop *** Mongod in port 20266 shutdown with code (0) *** [js_test:auth] 2015-10-13T18:47:20.279-0400 ReplSetTest n: 2 ports: [ 20265, 20266, 20267 ] 20267 number [js_test:auth] 2015-10-13T18:47:20.279-0400 ReplSetTest stop *** Shutting down mongod in port 20267 *** [js_test:auth] 2015-10-13T18:47:20.279-0400 d20267| 2015-10-13T18:47:20.278-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends [js_test:auth] 2015-10-13T18:47:20.279-0400 d20267| 2015-10-13T18:47:20.278-0400 I FTDC [signalProcessingThread] Stopping full-time diagnostic data capture [js_test:auth] 2015-10-13T18:47:20.281-0400 d20267| 2015-10-13T18:47:20.281-0400 I REPL [signalProcessingThread] Stopping replication applier threads [js_test:auth] 2015-10-13T18:47:20.677-0400 d20267| 2015-10-13T18:47:20.677-0400 I CONTROL [signalProcessingThread] now exiting [js_test:auth] 2015-10-13T18:47:20.677-0400 d20267| 2015-10-13T18:47:20.677-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... 
[js_test:auth] 2015-10-13T18:47:20.677-0400 d20267| 2015-10-13T18:47:20.677-0400 I NETWORK [signalProcessingThread] closing listening socket: 31 [js_test:auth] 2015-10-13T18:47:20.677-0400 d20267| 2015-10-13T18:47:20.677-0400 I NETWORK [signalProcessingThread] closing listening socket: 32 [js_test:auth] 2015-10-13T18:47:20.677-0400 d20267| 2015-10-13T18:47:20.677-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20267.sock [js_test:auth] 2015-10-13T18:47:20.678-0400 d20267| 2015-10-13T18:47:20.677-0400 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... [js_test:auth] 2015-10-13T18:47:20.678-0400 d20267| 2015-10-13T18:47:20.677-0400 I NETWORK [signalProcessingThread] shutdown: going to close sockets... [js_test:auth] 2015-10-13T18:47:20.678-0400 d20267| 2015-10-13T18:47:20.677-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down [js_test:auth] 2015-10-13T18:47:20.678-0400 d20267| 2015-10-13T18:47:20.677-0400 I NETWORK [conn1] end connection 127.0.0.1:55142 (0 connections now open) [js_test:auth] 2015-10-13T18:47:21.105-0400 d20267| 2015-10-13T18:47:21.105-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock... 
[js_test:auth] 2015-10-13T18:47:21.105-0400 d20267| 2015-10-13T18:47:21.105-0400 I CONTROL [signalProcessingThread] dbexit: rc: 0 [js_test:auth] 2015-10-13T18:47:21.278-0400 2015-10-13T18:47:21.278-0400 I - [thread1] shell: stopped mongo program on port 20267 [js_test:auth] 2015-10-13T18:47:21.279-0400 ReplSetTest stop *** Mongod in port 20267 shutdown with code (0) *** [js_test:auth] 2015-10-13T18:47:21.279-0400 ReplSetTest stopSet deleting all dbpaths [js_test:auth] 2015-10-13T18:47:21.283-0400 ReplSetTest stopSet *** Shut down repl set - test worked **** [js_test:auth] 2015-10-13T18:47:21.283-0400 ReplSetTest Starting Set [js_test:auth] 2015-10-13T18:47:21.283-0400 ReplSetTest n is : 0 [js_test:auth] 2015-10-13T18:47:21.283-0400 ReplSetTest n: 0 ports: [ 20265, 20266, 20267 ] 20265 number [js_test:auth] 2015-10-13T18:47:21.284-0400 { [js_test:auth] 2015-10-13T18:47:21.284-0400 "useHostName" : true, [js_test:auth] 2015-10-13T18:47:21.284-0400 "oplogSize" : 40, [js_test:auth] 2015-10-13T18:47:21.284-0400 "keyFile" : "jstests/libs/key1", [js_test:auth] 2015-10-13T18:47:21.284-0400 "port" : 20265, [js_test:auth] 2015-10-13T18:47:21.284-0400 "noprealloc" : "", [js_test:auth] 2015-10-13T18:47:21.284-0400 "smallfiles" : "", [js_test:auth] 2015-10-13T18:47:21.284-0400 "replSet" : "d1", [js_test:auth] 2015-10-13T18:47:21.285-0400 "dbpath" : "$set-$node", [js_test:auth] 2015-10-13T18:47:21.285-0400 "verbose" : 0, [js_test:auth] 2015-10-13T18:47:21.285-0400 "restart" : undefined, [js_test:auth] 2015-10-13T18:47:21.285-0400 "pathOpts" : { [js_test:auth] 2015-10-13T18:47:21.285-0400 "node" : 0, [js_test:auth] 2015-10-13T18:47:21.286-0400 "set" : "d1" [js_test:auth] 2015-10-13T18:47:21.286-0400 } [js_test:auth] 2015-10-13T18:47:21.286-0400 } [js_test:auth] 2015-10-13T18:47:21.286-0400 ReplSetTest Starting.... 
[js_test:auth] 2015-10-13T18:47:21.286-0400 Resetting db path '/data/db/job1/mongorunner/d1-0' [js_test:auth] 2015-10-13T18:47:21.288-0400 2015-10-13T18:47:21.288-0400 I - [thread1] shell: started program (sh22889): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 20265 --noprealloc --smallfiles --replSet d1 --dbpath /data/db/job1/mongorunner/d1-0 --nopreallocj --setParameter enableTestCommands=1 [js_test:auth] 2015-10-13T18:47:21.289-0400 2015-10-13T18:47:21.289-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:21.302-0400 d20265| note: noprealloc may hurt performance in many applications [js_test:auth] 2015-10-13T18:47:21.350-0400 d20265| 2015-10-13T18:47:21.350-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0), [js_test:auth] 2015-10-13T18:47:21.489-0400 2015-10-13T18:47:21.489-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:21.690-0400 2015-10-13T18:47:21.689-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:21.721-0400 d20265| 2015-10-13T18:47:21.721-0400 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger [js_test:auth] 2015-10-13T18:47:21.721-0400 d20265| 2015-10-13T18:47:21.721-0400 I CONTROL [initandlisten] MongoDB starting : pid=22889 port=20265 dbpath=/data/db/job1/mongorunner/d1-0 64-bit host=ubuntu [js_test:auth] 2015-10-13T18:47:21.721-0400 d20265| 2015-10-13T18:47:21.721-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:21.722-0400 
d20265| 2015-10-13T18:47:21.721-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB. [js_test:auth] 2015-10-13T18:47:21.722-0400 d20265| 2015-10-13T18:47:21.721-0400 I CONTROL [initandlisten] ** Not recommended for production. [js_test:auth] 2015-10-13T18:47:21.722-0400 d20265| 2015-10-13T18:47:21.721-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.722-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.722-0400 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine. [js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.722-0400 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems: [js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.722-0400 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options] [js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. [js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. 
[js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:21.723-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] db version v3.1.10-pre- [js_test:auth] 2015-10-13T18:47:21.724-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1 [js_test:auth] 2015-10-13T18:47:21.724-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014 [js_test:auth] 2015-10-13T18:47:21.724-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] allocator: tcmalloc [js_test:auth] 2015-10-13T18:47:21.724-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] modules: subscription [js_test:auth] 2015-10-13T18:47:21.724-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] build environment: [js_test:auth] 2015-10-13T18:47:21.724-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] distarch: x86_64 [js_test:auth] 2015-10-13T18:47:21.724-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] target_arch: x86_64 [js_test:auth] 2015-10-13T18:47:21.724-0400 d20265| 2015-10-13T18:47:21.723-0400 I CONTROL [initandlisten] options: { net: { port: 20265 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "d1" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/job1/mongorunner/d1-0", mmapv1: { preallocDataFiles: false, smallFiles: true } } } [js_test:auth] 2015-10-13T18:47:21.845-0400 d20265| 2015-10-13T18:47:21.845-0400 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument Did not find replica set lastVote document in local.replset.election [js_test:auth] 
2015-10-13T18:47:21.845-0400 d20265| 2015-10-13T18:47:21.845-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset [js_test:auth] 2015-10-13T18:47:21.846-0400 d20265| 2015-10-13T18:47:21.845-0400 I FTDC [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/d1-0/diagnostic.data' [js_test:auth] 2015-10-13T18:47:21.890-0400 2015-10-13T18:47:21.890-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:21.953-0400 d20265| 2015-10-13T18:47:21.953-0400 I NETWORK [initandlisten] waiting for connections on port 20265 [js_test:auth] 2015-10-13T18:47:22.091-0400 d20265| 2015-10-13T18:47:22.090-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:54603 #1 (1 connection now open) [js_test:auth] 2015-10-13T18:47:22.091-0400 d20265| 2015-10-13T18:47:22.091-0400 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access [js_test:auth] 2015-10-13T18:47:22.091-0400 [ [js_test:auth] 2015-10-13T18:47:22.092-0400 connection to ubuntu:20265, [js_test:auth] 2015-10-13T18:47:22.092-0400 connection to ubuntu:20266, [js_test:auth] 2015-10-13T18:47:22.092-0400 connection to ubuntu:20267 [js_test:auth] 2015-10-13T18:47:22.092-0400 ] [js_test:auth] 2015-10-13T18:47:22.092-0400 ReplSetTest n is : 1 [js_test:auth] 2015-10-13T18:47:22.092-0400 ReplSetTest n: 1 ports: [ 20265, 20266, 20267 ] 20266 number [js_test:auth] 2015-10-13T18:47:22.092-0400 { [js_test:auth] 2015-10-13T18:47:22.092-0400 "useHostName" : true, [js_test:auth] 2015-10-13T18:47:22.092-0400 "oplogSize" : 40, [js_test:auth] 2015-10-13T18:47:22.092-0400 "keyFile" : "jstests/libs/key1", [js_test:auth] 2015-10-13T18:47:22.093-0400 "port" : 20266, [js_test:auth] 2015-10-13T18:47:22.093-0400 "noprealloc" : "", [js_test:auth] 
2015-10-13T18:47:22.093-0400 "smallfiles" : "", [js_test:auth] 2015-10-13T18:47:22.093-0400 "replSet" : "d1", [js_test:auth] 2015-10-13T18:47:22.093-0400 "dbpath" : "$set-$node", [js_test:auth] 2015-10-13T18:47:22.093-0400 "verbose" : 0, [js_test:auth] 2015-10-13T18:47:22.093-0400 "restart" : undefined, [js_test:auth] 2015-10-13T18:47:22.093-0400 "pathOpts" : { [js_test:auth] 2015-10-13T18:47:22.093-0400 "node" : 1, [js_test:auth] 2015-10-13T18:47:22.093-0400 "set" : "d1" [js_test:auth] 2015-10-13T18:47:22.093-0400 } [js_test:auth] 2015-10-13T18:47:22.093-0400 } [js_test:auth] 2015-10-13T18:47:22.094-0400 ReplSetTest Starting.... [js_test:auth] 2015-10-13T18:47:22.094-0400 Resetting db path '/data/db/job1/mongorunner/d1-1' [js_test:auth] 2015-10-13T18:47:22.097-0400 2015-10-13T18:47:22.096-0400 I - [thread1] shell: started program (sh23143): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 20266 --noprealloc --smallfiles --replSet d1 --dbpath /data/db/job1/mongorunner/d1-1 --nopreallocj --setParameter enableTestCommands=1 [js_test:auth] 2015-10-13T18:47:22.097-0400 2015-10-13T18:47:22.097-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20266, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:22.111-0400 d20266| note: noprealloc may hurt performance in many applications [js_test:auth] 2015-10-13T18:47:22.160-0400 d20266| 2015-10-13T18:47:22.160-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0), [js_test:auth] 2015-10-13T18:47:22.297-0400 2015-10-13T18:47:22.297-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20266, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:22.498-0400 2015-10-13T18:47:22.497-0400 W NETWORK [thread1] 
Failed to connect to 127.0.0.1:20266, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:22.510-0400 d20266| 2015-10-13T18:47:22.510-0400 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger [js_test:auth] 2015-10-13T18:47:22.510-0400 d20266| 2015-10-13T18:47:22.510-0400 I CONTROL [initandlisten] MongoDB starting : pid=23143 port=20266 dbpath=/data/db/job1/mongorunner/d1-1 64-bit host=ubuntu [js_test:auth] 2015-10-13T18:47:22.511-0400 d20266| 2015-10-13T18:47:22.510-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:22.511-0400 d20266| 2015-10-13T18:47:22.510-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB. [js_test:auth] 2015-10-13T18:47:22.511-0400 d20266| 2015-10-13T18:47:22.510-0400 I CONTROL [initandlisten] ** Not recommended for production. [js_test:auth] 2015-10-13T18:47:22.511-0400 d20266| 2015-10-13T18:47:22.510-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:22.511-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:22.511-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine. [js_test:auth] 2015-10-13T18:47:22.511-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems: [js_test:auth] 2015-10-13T18:47:22.511-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options] [js_test:auth] 2015-10-13T18:47:22.511-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:22.511-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. 
[js_test:auth] 2015-10-13T18:47:22.511-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] db version v3.1.10-pre- [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1 [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014 [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] allocator: tcmalloc [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] modules: subscription [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] build environment: [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] distarch: x86_64 [js_test:auth] 2015-10-13T18:47:22.512-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] target_arch: x86_64 [js_test:auth] 2015-10-13T18:47:22.513-0400 d20266| 2015-10-13T18:47:22.511-0400 I CONTROL [initandlisten] options: { net: { port: 20266 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "d1" }, security: { keyFile: "jstests/libs/key1" 
}, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/job1/mongorunner/d1-1", mmapv1: { preallocDataFiles: false, smallFiles: true } } } [js_test:auth] 2015-10-13T18:47:22.621-0400 d20266| 2015-10-13T18:47:22.621-0400 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument Did not find replica set lastVote document in local.replset.election [js_test:auth] 2015-10-13T18:47:22.621-0400 d20266| 2015-10-13T18:47:22.621-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset [js_test:auth] 2015-10-13T18:47:22.622-0400 d20266| 2015-10-13T18:47:22.621-0400 I FTDC [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/d1-1/diagnostic.data' [js_test:auth] 2015-10-13T18:47:22.698-0400 2015-10-13T18:47:22.698-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20266, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:22.728-0400 d20266| 2015-10-13T18:47:22.728-0400 I NETWORK [initandlisten] waiting for connections on port 20266 [js_test:auth] 2015-10-13T18:47:22.899-0400 d20266| 2015-10-13T18:47:22.898-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34538 #1 (1 connection now open) [js_test:auth] 2015-10-13T18:47:22.899-0400 d20266| 2015-10-13T18:47:22.899-0400 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access [js_test:auth] 2015-10-13T18:47:22.899-0400 [ [js_test:auth] 2015-10-13T18:47:22.900-0400 connection to ubuntu:20265, [js_test:auth] 2015-10-13T18:47:22.900-0400 connection to ubuntu:20266, [js_test:auth] 2015-10-13T18:47:22.900-0400 connection to ubuntu:20267 [js_test:auth] 2015-10-13T18:47:22.900-0400 ] [js_test:auth] 2015-10-13T18:47:22.900-0400 ReplSetTest n is : 2 [js_test:auth] 2015-10-13T18:47:22.900-0400 ReplSetTest n: 2 ports: [ 20265, 20266, 20267 ] 
20267 number [js_test:auth] 2015-10-13T18:47:22.900-0400 { [js_test:auth] 2015-10-13T18:47:22.900-0400 "useHostName" : true, [js_test:auth] 2015-10-13T18:47:22.900-0400 "oplogSize" : 40, [js_test:auth] 2015-10-13T18:47:22.900-0400 "keyFile" : "jstests/libs/key1", [js_test:auth] 2015-10-13T18:47:22.900-0400 "port" : 20267, [js_test:auth] 2015-10-13T18:47:22.900-0400 "noprealloc" : "", [js_test:auth] 2015-10-13T18:47:22.900-0400 "smallfiles" : "", [js_test:auth] 2015-10-13T18:47:22.900-0400 "replSet" : "d1", [js_test:auth] 2015-10-13T18:47:22.900-0400 "dbpath" : "$set-$node", [js_test:auth] 2015-10-13T18:47:22.900-0400 "verbose" : 0, [js_test:auth] 2015-10-13T18:47:22.901-0400 "restart" : undefined, [js_test:auth] 2015-10-13T18:47:22.901-0400 "pathOpts" : { [js_test:auth] 2015-10-13T18:47:22.901-0400 "node" : 2, [js_test:auth] 2015-10-13T18:47:22.901-0400 "set" : "d1" [js_test:auth] 2015-10-13T18:47:22.901-0400 } [js_test:auth] 2015-10-13T18:47:22.901-0400 } [js_test:auth] 2015-10-13T18:47:22.901-0400 ReplSetTest Starting.... 
[js_test:auth] 2015-10-13T18:47:22.901-0400 Resetting db path '/data/db/job1/mongorunner/d1-2' [js_test:auth] 2015-10-13T18:47:22.905-0400 2015-10-13T18:47:22.905-0400 I - [thread1] shell: started program (sh23450): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 20267 --noprealloc --smallfiles --replSet d1 --dbpath /data/db/job1/mongorunner/d1-2 --nopreallocj --setParameter enableTestCommands=1 [js_test:auth] 2015-10-13T18:47:22.905-0400 2015-10-13T18:47:22.905-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20267, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:22.919-0400 d20267| note: noprealloc may hurt performance in many applications [js_test:auth] 2015-10-13T18:47:22.976-0400 d20267| 2015-10-13T18:47:22.975-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0), [js_test:auth] 2015-10-13T18:47:23.106-0400 2015-10-13T18:47:23.106-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20267, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:23.306-0400 2015-10-13T18:47:23.306-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20267, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:23.389-0400 d20267| 2015-10-13T18:47:23.388-0400 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger [js_test:auth] 2015-10-13T18:47:23.389-0400 d20267| 2015-10-13T18:47:23.388-0400 I CONTROL [initandlisten] MongoDB starting : pid=23450 port=20267 dbpath=/data/db/job1/mongorunner/d1-2 64-bit host=ubuntu [js_test:auth] 2015-10-13T18:47:23.389-0400 d20267| 2015-10-13T18:47:23.388-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:23.389-0400 
d20267| 2015-10-13T18:47:23.388-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB. [js_test:auth] 2015-10-13T18:47:23.389-0400 d20267| 2015-10-13T18:47:23.388-0400 I CONTROL [initandlisten] ** Not recommended for production. [js_test:auth] 2015-10-13T18:47:23.389-0400 d20267| 2015-10-13T18:47:23.388-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:23.390-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:23.390-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine. [js_test:auth] 2015-10-13T18:47:23.390-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems: [js_test:auth] 2015-10-13T18:47:23.390-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options] [js_test:auth] 2015-10-13T18:47:23.390-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:23.390-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. [js_test:auth] 2015-10-13T18:47:23.390-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:23.390-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:23.390-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. 
[js_test:auth] 2015-10-13T18:47:23.390-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:23.391-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:23.391-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] db version v3.1.10-pre- [js_test:auth] 2015-10-13T18:47:23.391-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1 [js_test:auth] 2015-10-13T18:47:23.391-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014 [js_test:auth] 2015-10-13T18:47:23.391-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] allocator: tcmalloc [js_test:auth] 2015-10-13T18:47:23.391-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] modules: subscription [js_test:auth] 2015-10-13T18:47:23.391-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] build environment: [js_test:auth] 2015-10-13T18:47:23.391-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] distarch: x86_64 [js_test:auth] 2015-10-13T18:47:23.391-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] target_arch: x86_64 [js_test:auth] 2015-10-13T18:47:23.391-0400 d20267| 2015-10-13T18:47:23.390-0400 I CONTROL [initandlisten] options: { net: { port: 20267 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "d1" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/job1/mongorunner/d1-2", mmapv1: { preallocDataFiles: false, smallFiles: true } } } [js_test:auth] 2015-10-13T18:47:23.507-0400 2015-10-13T18:47:23.507-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20267, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:23.522-0400 d20267| 2015-10-13T18:47:23.522-0400 I REPL 
[initandlisten] Did not find local voted for document at startup; NoMatchingDocument Did not find replica set lastVote document in local.replset.election [js_test:auth] 2015-10-13T18:47:23.523-0400 d20267| 2015-10-13T18:47:23.522-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset [js_test:auth] 2015-10-13T18:47:23.523-0400 d20267| 2015-10-13T18:47:23.523-0400 I FTDC [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/d1-2/diagnostic.data' [js_test:auth] 2015-10-13T18:47:23.634-0400 d20267| 2015-10-13T18:47:23.634-0400 I NETWORK [initandlisten] waiting for connections on port 20267 [js_test:auth] 2015-10-13T18:47:23.707-0400 d20267| 2015-10-13T18:47:23.707-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:56169 #1 (1 connection now open) [js_test:auth] 2015-10-13T18:47:23.708-0400 d20267| 2015-10-13T18:47:23.708-0400 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access [js_test:auth] 2015-10-13T18:47:23.709-0400 [ [js_test:auth] 2015-10-13T18:47:23.709-0400 connection to ubuntu:20265, [js_test:auth] 2015-10-13T18:47:23.709-0400 connection to ubuntu:20266, [js_test:auth] 2015-10-13T18:47:23.709-0400 connection to ubuntu:20267 [js_test:auth] 2015-10-13T18:47:23.709-0400 ] [js_test:auth] 2015-10-13T18:47:23.709-0400 { [js_test:auth] 2015-10-13T18:47:23.709-0400 "replSetInitiate" : { [js_test:auth] 2015-10-13T18:47:23.710-0400 "_id" : "d1", [js_test:auth] 2015-10-13T18:47:23.710-0400 "members" : [ [js_test:auth] 2015-10-13T18:47:23.710-0400 { [js_test:auth] 2015-10-13T18:47:23.710-0400 "_id" : 0, [js_test:auth] 2015-10-13T18:47:23.710-0400 "host" : "ubuntu:20265" [js_test:auth] 2015-10-13T18:47:23.710-0400 }, [js_test:auth] 2015-10-13T18:47:23.710-0400 { [js_test:auth] 2015-10-13T18:47:23.710-0400 "_id" : 1, [js_test:auth] 
2015-10-13T18:47:23.710-0400 "host" : "ubuntu:20266" [js_test:auth] 2015-10-13T18:47:23.710-0400 }, [js_test:auth] 2015-10-13T18:47:23.710-0400 { [js_test:auth] 2015-10-13T18:47:23.710-0400 "_id" : 2, [js_test:auth] 2015-10-13T18:47:23.710-0400 "host" : "ubuntu:20267" [js_test:auth] 2015-10-13T18:47:23.710-0400 } [js_test:auth] 2015-10-13T18:47:23.710-0400 ] [js_test:auth] 2015-10-13T18:47:23.710-0400 } [js_test:auth] 2015-10-13T18:47:23.710-0400 } [js_test:auth] 2015-10-13T18:47:23.710-0400 d20265| 2015-10-13T18:47:23.710-0400 I REPL [conn1] replSetInitiate admin command received from client [js_test:auth] 2015-10-13T18:47:23.711-0400 d20265| 2015-10-13T18:47:23.710-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52967 #2 (2 connections now open) [js_test:auth] 2015-10-13T18:47:23.731-0400 d20265| 2015-10-13T18:47:23.731-0400 I ACCESS [conn2] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:23.731-0400 d20265| 2015-10-13T18:47:23.731-0400 I NETWORK [conn2] end connection 127.0.0.1:52967 (1 connection now open) [js_test:auth] 2015-10-13T18:47:23.731-0400 d20266| 2015-10-13T18:47:23.731-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34883 #2 (2 connections now open) [js_test:auth] 2015-10-13T18:47:23.748-0400 d20266| 2015-10-13T18:47:23.747-0400 I ACCESS [conn2] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:23.748-0400 d20266| 2015-10-13T18:47:23.748-0400 I NETWORK [conn2] end connection 127.0.0.1:34883 (1 connection now open) [js_test:auth] 2015-10-13T18:47:23.748-0400 d20267| 2015-10-13T18:47:23.748-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47098 #2 (2 connections now open) [js_test:auth] 2015-10-13T18:47:23.768-0400 d20267| 2015-10-13T18:47:23.768-0400 I ACCESS [conn2] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:23.768-0400 d20265| 2015-10-13T18:47:23.768-0400 I REPL [conn1] 
replSetInitiate config object with 3 members parses ok [js_test:auth] 2015-10-13T18:47:23.768-0400 d20267| 2015-10-13T18:47:23.768-0400 I NETWORK [conn2] end connection 127.0.0.1:47098 (1 connection now open) [js_test:auth] 2015-10-13T18:47:23.769-0400 d20266| 2015-10-13T18:47:23.769-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34886 #3 (2 connections now open) [js_test:auth] 2015-10-13T18:47:23.769-0400 d20267| 2015-10-13T18:47:23.769-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47101 #3 (2 connections now open) [js_test:auth] 2015-10-13T18:47:23.798-0400 d20266| 2015-10-13T18:47:23.798-0400 I ACCESS [conn3] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:23.799-0400 d20265| 2015-10-13T18:47:23.798-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20266 [js_test:auth] 2015-10-13T18:47:23.799-0400 d20267| 2015-10-13T18:47:23.798-0400 I ACCESS [conn3] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:23.799-0400 d20265| 2015-10-13T18:47:23.799-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20267 [js_test:auth] 2015-10-13T18:47:23.799-0400 d20265| 2015-10-13T18:47:23.799-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52976 #3 (2 connections now open) [js_test:auth] 2015-10-13T18:47:23.799-0400 d20265| 2015-10-13T18:47:23.799-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52977 #4 (3 connections now open) [js_test:auth] 2015-10-13T18:47:23.818-0400 d20265| 2015-10-13T18:47:23.818-0400 I ACCESS [conn4] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:23.818-0400 d20266| 2015-10-13T18:47:23.818-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20265 [js_test:auth] 2015-10-13T18:47:23.818-0400 d20265| 2015-10-13T18:47:23.818-0400 I ACCESS [conn3] Successfully authenticated as principal __system on local 
[js_test:auth] 2015-10-13T18:47:23.818-0400 d20267| 2015-10-13T18:47:23.818-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20265 [js_test:auth] 2015-10-13T18:47:23.909-0400 d20265| 2015-10-13T18:47:23.908-0400 I REPL [ReplicationExecutor] New replica set config in use: { _id: "d1", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20265", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20266", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20267", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } } [js_test:auth] 2015-10-13T18:47:23.909-0400 d20265| 2015-10-13T18:47:23.908-0400 I REPL [ReplicationExecutor] This node is ubuntu:20265 in the config [js_test:auth] 2015-10-13T18:47:23.909-0400 d20265| 2015-10-13T18:47:23.908-0400 I REPL [ReplicationExecutor] transition to STARTUP2 [js_test:auth] 2015-10-13T18:47:23.909-0400 d20265| 2015-10-13T18:47:23.908-0400 I REPL [conn1] ****** [js_test:auth] 2015-10-13T18:47:23.910-0400 d20265| 2015-10-13T18:47:23.908-0400 I REPL [conn1] creating replication oplog of size: 40MB... 
[js_test:auth] 2015-10-13T18:47:23.910-0400 d20265| 2015-10-13T18:47:23.908-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state STARTUP [js_test:auth] 2015-10-13T18:47:23.910-0400 d20265| 2015-10-13T18:47:23.908-0400 I REPL [ReplicationExecutor] Member ubuntu:20267 is now in state STARTUP [js_test:auth] 2015-10-13T18:47:23.972-0400 d20265| 2015-10-13T18:47:23.972-0400 I STORAGE [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs [js_test:auth] 2015-10-13T18:47:23.972-0400 d20265| 2015-10-13T18:47:23.972-0400 I STORAGE [conn1] Scanning the oplog to determine where to place markers for when to truncate [js_test:auth] 2015-10-13T18:47:24.294-0400 d20265| 2015-10-13T18:47:24.293-0400 I REPL [conn1] ****** [js_test:auth] 2015-10-13T18:47:24.295-0400 d20265| 2015-10-13T18:47:24.294-0400 I REPL [conn1] Starting replication applier threads [js_test:auth] 2015-10-13T18:47:24.295-0400 d20265| 2015-10-13T18:47:24.295-0400 I REPL [ReplicationExecutor] transition to RECOVERING [js_test:auth] 2015-10-13T18:47:24.295-0400 d20265| 2015-10-13T18:47:24.295-0400 I COMMAND [conn1] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "d1", members: [ { _id: 0.0, host: "ubuntu:20265" }, { _id: 1.0, host: "ubuntu:20266" }, { _id: 2.0, host: "ubuntu:20267" } ] } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{ Global: { acquireCount: { r: 8, w: 4, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 944 } }, Database: { acquireCount: { r: 1, w: 2, W: 2 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 585ms [js_test:auth] 2015-10-13T18:47:24.296-0400 d20265| 2015-10-13T18:47:24.296-0400 I REPL [ReplicationExecutor] transition to SECONDARY [js_test:auth] 2015-10-13T18:47:25.819-0400 d20265| 2015-10-13T18:47:25.819-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53168 #5 (4 connections now open) 
[js_test:auth] 2015-10-13T18:47:25.820-0400 d20265| 2015-10-13T18:47:25.820-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53169 #6 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:25.839-0400 d20265| 2015-10-13T18:47:25.839-0400 I ACCESS [conn6] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:25.839-0400 d20265| 2015-10-13T18:47:25.839-0400 I NETWORK [conn6] end connection 127.0.0.1:53169 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:25.840-0400 d20266| 2015-10-13T18:47:25.840-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35085 #4 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:25.844-0400 d20265| 2015-10-13T18:47:25.844-0400 I ACCESS [conn5] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:25.844-0400 d20265| 2015-10-13T18:47:25.844-0400 I NETWORK [conn5] end connection 127.0.0.1:53168 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:25.844-0400 d20266| 2015-10-13T18:47:25.844-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35086 #5 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:25.855-0400 d20266| 2015-10-13T18:47:25.855-0400 I ACCESS [conn4] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:25.855-0400 d20266| 2015-10-13T18:47:25.855-0400 I NETWORK [conn4] end connection 127.0.0.1:35085 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:25.856-0400 d20267| 2015-10-13T18:47:25.856-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47301 #4 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:25.860-0400 d20266| 2015-10-13T18:47:25.860-0400 I ACCESS [conn5] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:25.860-0400 d20266| 2015-10-13T18:47:25.860-0400 I NETWORK [conn5] end connection 127.0.0.1:35086 (2 connections now open)
[js_test:auth] 2015-10-13T18:47:25.861-0400 d20267| 2015-10-13T18:47:25.860-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47302 #5 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:25.871-0400 d20267| 2015-10-13T18:47:25.871-0400 I ACCESS [conn4] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:25.871-0400 d20267| 2015-10-13T18:47:25.871-0400 I NETWORK [conn4] end connection 127.0.0.1:47301 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:25.877-0400 d20267| 2015-10-13T18:47:25.876-0400 I ACCESS [conn5] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:25.877-0400 d20267| 2015-10-13T18:47:25.877-0400 I NETWORK [conn5] end connection 127.0.0.1:47302 (2 connections now open)
[js_test:auth] 2015-10-13T18:47:25.999-0400 d20267| 2015-10-13T18:47:25.999-0400 I REPL [replExecDBWorker-0] Starting replication applier threads
[js_test:auth] 2015-10-13T18:47:26.006-0400 d20267| 2015-10-13T18:47:25.999-0400 W REPL [rsSync] did not receive a valid config yet
[js_test:auth] 2015-10-13T18:47:26.006-0400 d20267| 2015-10-13T18:47:26.000-0400 I REPL [ReplicationExecutor] New replica set config in use: { _id: "d1", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20265", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20266", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20267", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[js_test:auth] 2015-10-13T18:47:26.007-0400 d20267| 2015-10-13T18:47:26.000-0400 I REPL [ReplicationExecutor] This node is ubuntu:20267 in the config
[js_test:auth] 2015-10-13T18:47:26.007-0400 d20267| 2015-10-13T18:47:26.000-0400 I REPL [ReplicationExecutor] transition to STARTUP2
[js_test:auth] 2015-10-13T18:47:26.007-0400 d20266| 2015-10-13T18:47:26.000-0400 I REPL [replExecDBWorker-0] Starting replication applier threads
[js_test:auth] 2015-10-13T18:47:26.008-0400 d20266| 2015-10-13T18:47:26.003-0400 I REPL [ReplicationExecutor] New replica set config in use: { _id: "d1", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20265", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20266", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20267", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } }
[js_test:auth] 2015-10-13T18:47:26.008-0400 d20266| 2015-10-13T18:47:26.003-0400 I REPL [ReplicationExecutor] This node is ubuntu:20266 in the config
[js_test:auth] 2015-10-13T18:47:26.008-0400 d20266| 2015-10-13T18:47:26.003-0400 I REPL [ReplicationExecutor] transition to STARTUP2
[js_test:auth] 2015-10-13T18:47:26.008-0400 d20266| 2015-10-13T18:47:26.004-0400 I REPL [rsSync] ******
[js_test:auth] 2015-10-13T18:47:26.008-0400 d20266| 2015-10-13T18:47:26.004-0400 I REPL [rsSync] creating replication oplog of size: 40MB...
[js_test:auth] 2015-10-13T18:47:26.009-0400 d20266| 2015-10-13T18:47:26.006-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35099 #6 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:26.009-0400 d20266| 2015-10-13T18:47:26.006-0400 I REPL [ReplicationExecutor] Member ubuntu:20265 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:47:26.009-0400 d20267| 2015-10-13T18:47:26.006-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47312 #6 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:26.009-0400 d20267| 2015-10-13T18:47:26.007-0400 I REPL [ReplicationExecutor] Member ubuntu:20265 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:47:26.024-0400 d20267| 2015-10-13T18:47:26.024-0400 I ACCESS [conn6] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:26.024-0400 d20266| 2015-10-13T18:47:26.024-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20267
[js_test:auth] 2015-10-13T18:47:26.024-0400 d20266| 2015-10-13T18:47:26.024-0400 I REPL [ReplicationExecutor] Member ubuntu:20267 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:47:26.078-0400 d20266| 2015-10-13T18:47:26.078-0400 I STORAGE [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[js_test:auth] 2015-10-13T18:47:26.079-0400 d20266| 2015-10-13T18:47:26.078-0400 I STORAGE [rsSync] Scanning the oplog to determine where to place markers for when to truncate
[js_test:auth] 2015-10-13T18:47:26.296-0400 d20265| 2015-10-13T18:47:26.296-0400 I REPL [ReplicationExecutor] Member ubuntu:20267 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:47:26.407-0400 d20266| 2015-10-13T18:47:26.407-0400 I REPL [rsSync] ******
[js_test:auth] 2015-10-13T18:47:26.407-0400 d20266| 2015-10-13T18:47:26.407-0400 I REPL [rsSync] initial sync pending
[js_test:auth] 2015-10-13T18:47:26.407-0400 d20265| 2015-10-13T18:47:26.407-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:47:26.423-0400 d20266| 2015-10-13T18:47:26.423-0400 I ACCESS [conn6] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:26.423-0400 d20267| 2015-10-13T18:47:26.423-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20266
[js_test:auth] 2015-10-13T18:47:26.423-0400 d20267| 2015-10-13T18:47:26.423-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state STARTUP2
[js_test:auth] 2015-10-13T18:47:26.515-0400 d20266| 2015-10-13T18:47:26.515-0400 I REPL [ReplicationExecutor] syncing from: ubuntu:20265
[js_test:auth] 2015-10-13T18:47:26.515-0400 d20265| 2015-10-13T18:47:26.515-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53226 #7 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:26.532-0400 d20265| 2015-10-13T18:47:26.532-0400 I ACCESS [conn7] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:26.545-0400 d20266| 2015-10-13T18:47:26.545-0400 I REPL [rsSync] initial sync drop all databases
[js_test:auth] 2015-10-13T18:47:26.545-0400 d20266| 2015-10-13T18:47:26.545-0400 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1
[js_test:auth] 2015-10-13T18:47:26.545-0400 d20266| 2015-10-13T18:47:26.545-0400 I REPL [rsSync] initial sync clone all databases
[js_test:auth] 2015-10-13T18:47:26.545-0400 d20266| 2015-10-13T18:47:26.545-0400 I REPL [rsSync] initial sync data copy, starting syncup
[js_test:auth] 2015-10-13T18:47:26.545-0400 d20266| 2015-10-13T18:47:26.545-0400 I REPL [rsSync] oplog sync 1 of 3
[js_test:auth] 2015-10-13T18:47:26.545-0400 d20266| 2015-10-13T18:47:26.545-0400 I REPL [rsSync] oplog sync 2 of 3
[js_test:auth] 2015-10-13T18:47:26.545-0400 d20266| 2015-10-13T18:47:26.545-0400 I REPL [rsSync] initial sync building indexes
[js_test:auth] 2015-10-13T18:47:26.546-0400 d20266| 2015-10-13T18:47:26.545-0400 I REPL [rsSync] oplog sync 3 of 3
[js_test:auth] 2015-10-13T18:47:26.547-0400 d20266| 2015-10-13T18:47:26.546-0400 I REPL [rsSync] initial sync finishing up
[js_test:auth] 2015-10-13T18:47:26.547-0400 d20266| 2015-10-13T18:47:26.546-0400 I REPL [rsSync] set minValid=(term: 0, timestamp: Oct 13 18:47:24:1)
[js_test:auth] 2015-10-13T18:47:26.564-0400 d20266| 2015-10-13T18:47:26.563-0400 I REPL [rsSync] initial sync done
[js_test:auth] 2015-10-13T18:47:26.565-0400 d20265| 2015-10-13T18:47:26.565-0400 I NETWORK [conn7] end connection 127.0.0.1:53226 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:26.565-0400 d20266| 2015-10-13T18:47:26.565-0400 I REPL [ReplicationExecutor] transition to RECOVERING
[js_test:auth] 2015-10-13T18:47:26.565-0400 d20266| 2015-10-13T18:47:26.565-0400 I REPL [ReplicationExecutor] transition to SECONDARY
[js_test:auth] 2015-10-13T18:47:27.000-0400 d20267| 2015-10-13T18:47:26.999-0400 I REPL [rsSync] ******
[js_test:auth] 2015-10-13T18:47:27.000-0400 d20267| 2015-10-13T18:47:27.000-0400 I REPL [rsSync] creating replication oplog of size: 40MB...
[js_test:auth] 2015-10-13T18:47:27.009-0400 d20266| 2015-10-13T18:47:27.005-0400 I REPL [ReplicationExecutor] could not find member to sync from
[js_test:auth] 2015-10-13T18:47:27.047-0400 d20267| 2015-10-13T18:47:27.047-0400 I STORAGE [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs
[js_test:auth] 2015-10-13T18:47:27.047-0400 d20267| 2015-10-13T18:47:27.047-0400 I STORAGE [rsSync] Scanning the oplog to determine where to place markers for when to truncate
[js_test:auth] 2015-10-13T18:47:27.384-0400 d20267| 2015-10-13T18:47:27.384-0400 I REPL [rsSync] ******
[js_test:auth] 2015-10-13T18:47:27.385-0400 d20267| 2015-10-13T18:47:27.384-0400 I REPL [rsSync] initial sync pending
[js_test:auth] 2015-10-13T18:47:27.480-0400 d20267| 2015-10-13T18:47:27.480-0400 I REPL [ReplicationExecutor] syncing from: ubuntu:20265
[js_test:auth] 2015-10-13T18:47:27.481-0400 d20265| 2015-10-13T18:47:27.480-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53270 #8 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:27.499-0400 d20265| 2015-10-13T18:47:27.499-0400 I ACCESS [conn8] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:27.513-0400 d20267| 2015-10-13T18:47:27.513-0400 I REPL [rsSync] initial sync drop all databases
[js_test:auth] 2015-10-13T18:47:27.513-0400 d20267| 2015-10-13T18:47:27.513-0400 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1
[js_test:auth] 2015-10-13T18:47:27.513-0400 d20267| 2015-10-13T18:47:27.513-0400 I REPL [rsSync] initial sync clone all databases
[js_test:auth] 2015-10-13T18:47:27.513-0400 d20267| 2015-10-13T18:47:27.513-0400 I REPL [rsSync] initial sync data copy, starting syncup
[js_test:auth] 2015-10-13T18:47:27.514-0400 d20267| 2015-10-13T18:47:27.513-0400 I REPL [rsSync] oplog sync 1 of 3
[js_test:auth] 2015-10-13T18:47:27.514-0400 d20267| 2015-10-13T18:47:27.513-0400 I REPL [rsSync] oplog sync 2 of 3
[js_test:auth] 2015-10-13T18:47:27.514-0400 d20267| 2015-10-13T18:47:27.513-0400 I REPL [rsSync] initial sync building indexes
[js_test:auth] 2015-10-13T18:47:27.515-0400 d20267| 2015-10-13T18:47:27.513-0400 I REPL [rsSync] oplog sync 3 of 3
[js_test:auth] 2015-10-13T18:47:27.515-0400 d20267| 2015-10-13T18:47:27.514-0400 I REPL [rsSync] initial sync finishing up
[js_test:auth] 2015-10-13T18:47:27.515-0400 d20267| 2015-10-13T18:47:27.515-0400 I REPL [rsSync] set minValid=(term: 0, timestamp: Oct 13 18:47:24:1)
[js_test:auth] 2015-10-13T18:47:27.532-0400 d20267| 2015-10-13T18:47:27.532-0400 I REPL [rsSync] initial sync done
[js_test:auth] 2015-10-13T18:47:27.533-0400 d20265| 2015-10-13T18:47:27.533-0400 I NETWORK [conn8] end connection 127.0.0.1:53270 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:27.533-0400 d20267| 2015-10-13T18:47:27.533-0400 I REPL [ReplicationExecutor] transition to RECOVERING
[js_test:auth] 2015-10-13T18:47:27.533-0400 d20267| 2015-10-13T18:47:27.533-0400 I REPL [ReplicationExecutor] transition to SECONDARY
[js_test:auth] 2015-10-13T18:47:27.780-0400 s20264| 2015-10-13T18:47:27.780-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: auth-configRS
[js_test:auth] 2015-10-13T18:47:27.781-0400 s20264| 2015-10-13T18:47:27.780-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set auth-configRS
[js_test:auth] 2015-10-13T18:47:27.781-0400 s20264| 2015-10-13T18:47:27.780-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20260, no events
[js_test:auth] 2015-10-13T18:47:27.781-0400 s20264| 2015-10-13T18:47:27.781-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20261, no events
[js_test:auth] 2015-10-13T18:47:27.781-0400 s20264| 2015-10-13T18:47:27.781-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20262, no events
[js_test:auth] 2015-10-13T18:47:27.964-0400 s20264| 2015-10-13T18:47:27.964-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:57.963-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776447963), up: 20, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:27.964-0400 s20264| 2015-10-13T18:47:27.964-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:27.983-0400 s20264| 2015-10-13T18:47:27.982-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776447000|1, t: 1 } }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:27.983-0400 s20264| 2015-10-13T18:47:27.983-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262
[js_test:auth] 2015-10-13T18:47:27.983-0400 s20264| 2015-10-13T18:47:27.983-0400 D SHARDING [Balancer] found 0 shards listed on config server(s)
[js_test:auth] 2015-10-13T18:47:27.983-0400 s20264| 2015-10-13T18:47:27.983-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776447000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:27.984-0400 s20264| 2015-10-13T18:47:27.983-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262
[js_test:auth] 2015-10-13T18:47:27.984-0400 s20264| 2015-10-13T18:47:27.983-0400 D SHARDING [Balancer] Refreshing MaxChunkSize: 1MB
[js_test:auth] 2015-10-13T18:47:27.984-0400 s20264| 2015-10-13T18:47:27.983-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776447000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:27.984-0400 s20264| 2015-10-13T18:47:27.984-0400 D ASIO [NetworkInterfaceASIO] Connecting to ubuntu:20261
[js_test:auth] 2015-10-13T18:47:27.984-0400 s20264| 2015-10-13T18:47:27.984-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:47:27.984-0400 c20261| 2015-10-13T18:47:27.984-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:48314 #11 (6 connections now open)
[js_test:auth] 2015-10-13T18:47:27.985-0400 s20264| 2015-10-13T18:47:27.985-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:47:27.999-0400 s20264| 2015-10-13T18:47:27.999-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:47:28.000-0400 s20264| 2015-10-13T18:47:27.999-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:47:28.009-0400 c20261| 2015-10-13T18:47:27.999-0400 I ACCESS [conn11] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:28.009-0400 s20264| 2015-10-13T18:47:28.000-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20261
[js_test:auth] 2015-10-13T18:47:28.009-0400 s20264| 2015-10-13T18:47:28.000-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:47:28.010-0400 s20264| 2015-10-13T18:47:28.008-0400 D SHARDING [Balancer] skipping balancing round because balancing is disabled
[js_test:auth] 2015-10-13T18:47:28.010-0400 s20264| 2015-10-13T18:47:28.008-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:47:58.008-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776448008), up: 21, waiting: true, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:28.010-0400 s20264| 2015-10-13T18:47:28.008-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:28.011-0400 d20267| 2015-10-13T18:47:28.011-0400 I REPL [ReplicationExecutor] could not find member to sync from
[js_test:auth] 2015-10-13T18:47:28.012-0400 d20267| 2015-10-13T18:47:28.012-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:47:28.296-0400 d20265| 2015-10-13T18:47:28.296-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:47:28.296-0400 d20265| 2015-10-13T18:47:28.296-0400 I REPL [ReplicationExecutor] Member ubuntu:20267 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:47:29.010-0400 d20266| 2015-10-13T18:47:29.010-0400 I REPL [ReplicationExecutor] Member ubuntu:20267 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:47:29.355-0400 d20265| 2015-10-13T18:47:29.355-0400 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected
[js_test:auth] 2015-10-13T18:47:29.458-0400 d20267| 2015-10-13T18:47:29.457-0400 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "d1", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1444776444000|1, t: 0 } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, W: 2 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 102ms
[js_test:auth] 2015-10-13T18:47:29.458-0400 d20265| 2015-10-13T18:47:29.458-0400 I REPL [ReplicationExecutor] dry election run succeeded, running for election
[js_test:auth] 2015-10-13T18:47:29.465-0400 d20266| 2015-10-13T18:47:29.465-0400 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "d1", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1444776444000|1, t: 0 } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, W: 2 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 109ms
[js_test:auth] 2015-10-13T18:47:29.465-0400 d20266| 2015-10-13T18:47:29.465-0400 I NETWORK [conn3] end connection 127.0.0.1:34886 (2 connections now open)
[js_test:auth] 2015-10-13T18:47:29.554-0400 d20266| 2015-10-13T18:47:29.554-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35262 #7 (3 connections now open)
[js_test:auth] 2015-10-13T18:47:29.554-0400 d20265| 2015-10-13T18:47:29.554-0400 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 1
[js_test:auth] 2015-10-13T18:47:29.554-0400 d20265| 2015-10-13T18:47:29.554-0400 I REPL [ReplicationExecutor] transition to PRIMARY
[js_test:auth] 2015-10-13T18:47:29.556-0400 d20266| 2015-10-13T18:47:29.556-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35263 #8 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:29.588-0400 d20266| 2015-10-13T18:47:29.588-0400 I ACCESS [conn7] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:29.588-0400 d20265| 2015-10-13T18:47:29.588-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20266
[js_test:auth] 2015-10-13T18:47:29.588-0400 d20266| 2015-10-13T18:47:29.588-0400 I ACCESS [conn8] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:29.588-0400 d20265| 2015-10-13T18:47:29.588-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20266
[js_test:auth] 2015-10-13T18:47:30.012-0400 d20267| 2015-10-13T18:47:30.012-0400 I REPL [ReplicationExecutor] Member ubuntu:20265 is now in state PRIMARY
[js_test:auth] 2015-10-13T18:47:30.296-0400 d20265| 2015-10-13T18:47:30.296-0400 I REPL [rsSync] transition to primary complete; database writes are now permitted
[js_test:auth] 2015-10-13T18:47:30.414-0400 adding shard w/auth d1/ubuntu:20265,ubuntu:20266,ubuntu:20267
[js_test:auth] 2015-10-13T18:47:30.415-0400 s20264| 2015-10-13T18:47:30.415-0400 I NETWORK [conn1] Starting new replica set monitor for d1/ubuntu:20265,ubuntu:20266,ubuntu:20267
[js_test:auth] 2015-10-13T18:47:30.415-0400 s20264| 2015-10-13T18:47:30.415-0400 D NETWORK [conn1] Starting new refresh of replica set d1
[js_test:auth] 2015-10-13T18:47:30.415-0400 s20264| 2015-10-13T18:47:30.415-0400 D NETWORK [conn1] creating new connection to:ubuntu:20267
[js_test:auth] 2015-10-13T18:47:30.415-0400 s20264| 2015-10-13T18:47:30.415-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:auth] 2015-10-13T18:47:30.415-0400 s20264| 2015-10-13T18:47:30.415-0400 D NETWORK [conn1] connected to server ubuntu:20267 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:30.415-0400 d20267| 2015-10-13T18:47:30.415-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47498 #7 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:30.416-0400 s20264| 2015-10-13T18:47:30.416-0400 D NETWORK [conn1] connected connection!
[js_test:auth] 2015-10-13T18:47:30.416-0400 s20264| 2015-10-13T18:47:30.416-0400 D SHARDING [conn1] calling onCreate auth for ubuntu:20267 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:30.431-0400 d20267| 2015-10-13T18:47:30.431-0400 I ACCESS [conn7] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:30.431-0400 s20264| 2015-10-13T18:47:30.431-0400 D NETWORK [conn1] creating new connection to:ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.431-0400 s20264| 2015-10-13T18:47:30.431-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:auth] 2015-10-13T18:47:30.431-0400 s20264| 2015-10-13T18:47:30.431-0400 D NETWORK [conn1] connected to server ubuntu:20265 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:30.432-0400 d20265| 2015-10-13T18:47:30.431-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53370 #9 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:30.432-0400 s20264| 2015-10-13T18:47:30.432-0400 D NETWORK [conn1] connected connection!
[js_test:auth] 2015-10-13T18:47:30.432-0400 s20264| 2015-10-13T18:47:30.432-0400 D SHARDING [conn1] calling onCreate auth for ubuntu:20265 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:30.448-0400 d20265| 2015-10-13T18:47:30.448-0400 I ACCESS [conn9] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:30.449-0400 s20264| 2015-10-13T18:47:30.448-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:00.448-0400 cmd:{ isdbgrid: 1 }
[js_test:auth] 2015-10-13T18:47:30.449-0400 s20264| 2015-10-13T18:47:30.449-0400 D ASIO [NetworkInterfaceASIO] Connecting to ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.449-0400 s20264| 2015-10-13T18:47:30.449-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.449-0400 d20265| 2015-10-13T18:47:30.449-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53371 #10 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:30.450-0400 s20264| 2015-10-13T18:47:30.450-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.468-0400 s20264| 2015-10-13T18:47:30.468-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.469-0400 s20264| 2015-10-13T18:47:30.469-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.469-0400 d20265| 2015-10-13T18:47:30.469-0400 I ACCESS [conn10] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:30.469-0400 s20264| 2015-10-13T18:47:30.469-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.469-0400 s20264| 2015-10-13T18:47:30.469-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.469-0400 s20264| 2015-10-13T18:47:30.469-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:00.469-0400 cmd:{ isMaster: 1 }
[js_test:auth] 2015-10-13T18:47:30.469-0400 s20264| 2015-10-13T18:47:30.469-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.469-0400 s20264| 2015-10-13T18:47:30.469-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:00.469-0400 cmd:{ replSetGetStatus: 1 }
[js_test:auth] 2015-10-13T18:47:30.469-0400 s20264| 2015-10-13T18:47:30.469-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.470-0400 s20264| 2015-10-13T18:47:30.469-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:00.469-0400 cmd:{ listDatabases: 1 }
[js_test:auth] 2015-10-13T18:47:30.470-0400 s20264| 2015-10-13T18:47:30.469-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.470-0400 s20264| 2015-10-13T18:47:30.470-0400 I SHARDING [conn1] going to add shard: { _id: "d1", host: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267" }
[js_test:auth] 2015-10-13T18:47:30.470-0400 s20264| 2015-10-13T18:47:30.470-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:00.470-0400 cmd:{ insert: "shards", documents: [ { _id: "d1", host: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267" } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:30.470-0400 s20264| 2015-10-13T18:47:30.470-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:30.486-0400 s20264| 2015-10-13T18:47:30.486-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776450000|1, t: 1 } }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:30.486-0400 s20264| 2015-10-13T18:47:30.486-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:30.486-0400 s20264| 2015-10-13T18:47:30.486-0400 D SHARDING [conn1] found 1 shards listed on config server(s)
[js_test:auth] 2015-10-13T18:47:30.486-0400 s20264| 2015-10-13T18:47:30.486-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:00.486-0400 cmd:{ create: "config.changelog", capped: true, size: 10485760, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:30.487-0400 s20264| 2015-10-13T18:47:30.486-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:30.569-0400 s20264| 2015-10-13T18:47:30.568-0400 I SHARDING [conn1] about to log metadata event: { _id: "ubuntu-2015-10-13T18:47:30.568-0400-561d8a02c06b51335e5d6893", server: "ubuntu", clientAddr: "127.0.0.1:54935", time: new Date(1444776450568), what: "addShard", ns: "", details: { name: "d1", host: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267" } }
[js_test:auth] 2015-10-13T18:47:30.569-0400 s20264| 2015-10-13T18:47:30.569-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:00.568-0400 cmd:{ insert: "changelog", documents: [ { _id: "ubuntu-2015-10-13T18:47:30.568-0400-561d8a02c06b51335e5d6893", server: "ubuntu", clientAddr: "127.0.0.1:54935", time: new Date(1444776450568), what: "addShard", ns: "", details: { name: "d1", host: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267" } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:30.569-0400 s20264| 2015-10-13T18:47:30.569-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:30.789-0400 c20260| 2015-10-13T18:47:30.789-0400 I COMMAND [conn21] command config.$cmd command: insert { insert: "changelog", documents: [ { _id: "ubuntu-2015-10-13T18:47:30.568-0400-561d8a02c06b51335e5d6893", server: "ubuntu", clientAddr: "127.0.0.1:54935", time: new Date(1444776450568), what: "addShard", ns: "", details: { name: "d1", host: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267" } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 4, w: 4 } }, Database: { acquireCount: { w: 3, W: 1 } }, Collection: { acquireCount: { w: 1, W: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 219ms
[js_test:auth] 2015-10-13T18:47:30.789-0400 s20264| 2015-10-13T18:47:30.789-0400 D SHARDING [conn1] trying to acquire new distributed lock for test ( lock timeout : 900000 ms, ping interval : 30000 ms, process : ubuntu:20264:1444776427:399327856 ) with lockSessionID: 561d8a02c06b51335e5d6894, why: enableSharding
[js_test:auth] 2015-10-13T18:47:30.790-0400 s20264| 2015-10-13T18:47:30.789-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:00.789-0400 cmd:{ findAndModify: "locks", query: { _id: "test", state: 0 }, update: { $set: { ts: ObjectId('561d8a02c06b51335e5d6894'), state: 2, who: "ubuntu:20264:1444776427:399327856:conn1", process: "ubuntu:20264:1444776427:399327856", when: new Date(1444776450789), why: "enableSharding" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:30.790-0400 s20264| 2015-10-13T18:47:30.789-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:30.807-0400 s20264| 2015-10-13T18:47:30.807-0400 I SHARDING [conn1] distributed lock 'test' acquired for 'enableSharding', ts : 561d8a02c06b51335e5d6894
[js_test:auth] 2015-10-13T18:47:30.808-0400 s20264| 2015-10-13T18:47:30.807-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "databases", filter: { _id: /^test$/i }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776450000|5, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:47:30.808-0400 s20264| 2015-10-13T18:47:30.807-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:30.808-0400 s20264| 2015-10-13T18:47:30.808-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:00.808-0400 cmd:{ listDatabases: 1 }
[js_test:auth] 2015-10-13T18:47:30.808-0400 s20264| 2015-10-13T18:47:30.808-0400 D ASIO [NetworkInterfaceASIO] Connecting to ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.808-0400 s20264| 2015-10-13T18:47:30.808-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.808-0400 d20265| 2015-10-13T18:47:30.808-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53384 #11 (6 connections now open)
[js_test:auth] 2015-10-13T18:47:30.810-0400 s20264| 2015-10-13T18:47:30.809-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.824-0400 s20264| 2015-10-13T18:47:30.824-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.824-0400 s20264| 2015-10-13T18:47:30.824-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.824-0400 d20265| 2015-10-13T18:47:30.824-0400 I ACCESS [conn11] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:30.824-0400 s20264| 2015-10-13T18:47:30.824-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.824-0400 d20265| 2015-10-13T18:47:30.824-0400 I SHARDING [conn11] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers
[js_test:auth] 2015-10-13T18:47:30.825-0400 d20265| 2015-10-13T18:47:30.824-0400 I SHARDING [conn11] Updating config server connection string to: auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262
[js_test:auth] 2015-10-13T18:47:30.825-0400 d20265| 2015-10-13T18:47:30.824-0400 I NETWORK [conn11] Starting new replica set monitor for auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262
[js_test:auth] 2015-10-13T18:47:30.825-0400 d20265| 2015-10-13T18:47:30.824-0400 I NETWORK [ReplicaSetMonitorWatcher] starting
[js_test:auth] 2015-10-13T18:47:30.828-0400 d20265| 2015-10-13T18:47:30.828-0400 I SHARDING [thread1] creating distributed lock ping thread for process ubuntu:20265:1444776450:269960772 (sleeping for 30000ms)
[js_test:auth] 2015-10-13T18:47:30.828-0400 c20262| 2015-10-13T18:47:30.828-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:48390 #12 (7 connections now open)
[js_test:auth] 2015-10-13T18:47:30.829-0400 c20261| 2015-10-13T18:47:30.828-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:48399 #12 (7 connections now open)
[js_test:auth] 2015-10-13T18:47:30.862-0400 c20261| 2015-10-13T18:47:30.862-0400 I ACCESS [conn12] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:30.863-0400 c20260| 2015-10-13T18:47:30.863-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51273 #22 (12 connections now open)
[js_test:auth] 2015-10-13T18:47:30.863-0400 c20262| 2015-10-13T18:47:30.863-0400 I ACCESS [conn12] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:30.879-0400 c20260| 2015-10-13T18:47:30.879-0400 I ACCESS [conn22] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:30.880-0400 c20260| 2015-10-13T18:47:30.880-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51274 #23 (13 connections now open)
[js_test:auth] 2015-10-13T18:47:30.880-0400 c20261| 2015-10-13T18:47:30.880-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:48402 #13 (8 connections now open)
[js_test:auth] 2015-10-13T18:47:30.911-0400 c20260| 2015-10-13T18:47:30.911-0400 I ACCESS [conn23] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:30.912-0400 c20261| 2015-10-13T18:47:30.911-0400 I ACCESS [conn13] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:30.912-0400 d20265| 2015-10-13T18:47:30.911-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20260
[js_test:auth] 2015-10-13T18:47:30.912-0400 d20265| 2015-10-13T18:47:30.911-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20261
[js_test:auth] 2015-10-13T18:47:30.912-0400 d20265| 2015-10-13T18:47:30.912-0400 I NETWORK [conn11] Starting new replica set monitor for d1/ubuntu:20265,ubuntu:20266,ubuntu:20267
[js_test:auth] 2015-10-13T18:47:30.912-0400 d20265| 2015-10-13T18:47:30.912-0400 I SHARDING [conn11] remote client 127.0.0.1:53384 initialized this host as shard d1
[js_test:auth] 2015-10-13T18:47:30.913-0400 s20264| 2015-10-13T18:47:30.912-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.913-0400 s20264| 2015-10-13T18:47:30.912-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265
[js_test:auth] 2015-10-13T18:47:30.913-0400 s20264| 2015-10-13T18:47:30.913-0400 I SHARDING [conn1] Placing [test] on: d1
[js_test:auth] 2015-10-13T18:47:30.913-0400 s20264| 2015-10-13T18:47:30.913-0400 I SHARDING [conn1] Enabling sharding for database [test] in config db
[js_test:auth] 2015-10-13T18:47:30.913-0400 s20264| 2015-10-13T18:47:30.913-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:00.913-0400 cmd:{ update:
"databases", updates: [ { q: { _id: "test" }, u: { _id: "test", primary: "d1", partitioned: true }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:30.913-0400 s20264| 2015-10-13T18:47:30.913-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:30.923-0400 d20265| 2015-10-13T18:47:30.923-0400 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: findAndModify query predicate didn't match any lock document [js_test:auth] 2015-10-13T18:47:31.010-0400 d20266| 2015-10-13T18:47:31.010-0400 I REPL [ReplicationExecutor] Member ubuntu:20265 is now in state PRIMARY [js_test:auth] 2015-10-13T18:47:31.133-0400 c20260| 2015-10-13T18:47:31.133-0400 I COMMAND [conn21] command config.$cmd command: update { update: "databases", updates: [ { q: { _id: "test" }, u: { _id: "test", primary: "d1", partitioned: true }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:387 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 219ms [js_test:auth] 2015-10-13T18:47:31.133-0400 s20264| 2015-10-13T18:47:31.133-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:01.133-0400 cmd:{ findAndModify: "locks", query: { ts: ObjectId('561d8a02c06b51335e5d6894') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.134-0400 s20264| 2015-10-13T18:47:31.133-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.158-0400 s20264| 2015-10-13T18:47:31.158-0400 I 
SHARDING [conn1] distributed lock with ts: 561d8a02c06b51335e5d6894' unlocked. [js_test:auth] 2015-10-13T18:47:31.158-0400 s20264| 2015-10-13T18:47:31.158-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "databases", filter: { _id: "test" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776451000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.158-0400 s20264| 2015-10-13T18:47:31.158-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:31.159-0400 s20264| 2015-10-13T18:47:31.159-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "databases", filter: { _id: "test" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776451000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.159-0400 s20264| 2015-10-13T18:47:31.159-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:31.159-0400 s20264| 2015-10-13T18:47:31.159-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "collections", filter: { _id: /^test\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776451000|3, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.159-0400 s20264| 2015-10-13T18:47:31.159-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:31.159-0400 s20264| 2015-10-13T18:47:31.159-0400 D SHARDING [conn1] found 0 collections left and 0 collections dropped for database test [js_test:auth] 2015-10-13T18:47:31.159-0400 s20264| 2015-10-13T18:47:31.159-0400 D SHARDING [conn1] calling onCreate auth for d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 [js_test:auth] 2015-10-13T18:47:31.159-0400 s20264| 2015-10-13T18:47:31.159-0400 D NETWORK [conn1] creating new connection 
to:ubuntu:20265 [js_test:auth] 2015-10-13T18:47:31.160-0400 s20264| 2015-10-13T18:47:31.159-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:47:31.160-0400 s20264| 2015-10-13T18:47:31.160-0400 D NETWORK [conn1] connected to server ubuntu:20265 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:31.160-0400 d20265| 2015-10-13T18:47:31.160-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53400 #12 (7 connections now open) [js_test:auth] 2015-10-13T18:47:31.160-0400 s20264| 2015-10-13T18:47:31.160-0400 D NETWORK [conn1] connected connection! [js_test:auth] 2015-10-13T18:47:31.176-0400 d20265| 2015-10-13T18:47:31.176-0400 I ACCESS [conn12] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:31.177-0400 s20264| 2015-10-13T18:47:31.177-0400 D NETWORK [conn1] creating new connection to:ubuntu:20265 [js_test:auth] 2015-10-13T18:47:31.177-0400 s20264| 2015-10-13T18:47:31.177-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:47:31.177-0400 s20264| 2015-10-13T18:47:31.177-0400 D NETWORK [conn1] connected to server ubuntu:20265 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:31.177-0400 d20265| 2015-10-13T18:47:31.177-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53404 #13 (8 connections now open) [js_test:auth] 2015-10-13T18:47:31.177-0400 s20264| 2015-10-13T18:47:31.177-0400 D NETWORK [conn1] connected connection! 
[js_test:auth] 2015-10-13T18:47:31.177-0400 s20264| 2015-10-13T18:47:31.177-0400 D SHARDING [conn1] calling onCreate auth for ubuntu:20265 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:31.195-0400 d20265| 2015-10-13T18:47:31.195-0400 I ACCESS [conn13] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:31.195-0400 s20264| 2015-10-13T18:47:31.195-0400 D SHARDING [conn1] initializing shard connection to d1:d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 [js_test:auth] 2015-10-13T18:47:31.195-0400 s20264| 2015-10-13T18:47:31.195-0400 D SHARDING [conn1] setShardVersion d1 ubuntu:20265 { setShardVersion: "", init: true, authoritative: true, configdb: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", shard: "d1", shardHost: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267" } [js_test:auth] 2015-10-13T18:47:31.344-0400 d20265| 2015-10-13T18:47:31.344-0400 I INDEX [conn13] build index on: test.foo properties: { v: 1, key: { x: 1.0 }, name: "x_1", ns: "test.foo" } [js_test:auth] 2015-10-13T18:47:31.344-0400 d20265| 2015-10-13T18:47:31.344-0400 I INDEX [conn13] building index using bulk method [js_test:auth] 2015-10-13T18:47:31.353-0400 d20265| 2015-10-13T18:47:31.352-0400 I INDEX [conn13] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:31.353-0400 d20265| 2015-10-13T18:47:31.353-0400 I WRITE [conn13] insert test.system.indexes query: { ns: "test.foo", key: { x: 1.0 }, name: "x_1" } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 5, w: 3 } }, Database: { acquireCount: { r: 1, w: 2, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 157ms [js_test:auth] 2015-10-13T18:47:31.353-0400 d20265| 2015-10-13T18:47:31.353-0400 I COMMAND [conn13] command test.$cmd command: insert { insert: "system.indexes", documents: [ { ns: "test.foo", key: { x: 1.0 }, name: "x_1" } ], writeConcern: { w: 1 }, ordered: true, shardVersion: [ Timestamp 0|0, ObjectId('000000000000000000000000') ] } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:217 locks:{ Global: { acquireCount: { r: 5, w: 3 } }, Database: { acquireCount: { r: 1, w: 2, W: 1 } }, Collection: { acquireCount: { r: 1, w: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 157ms [js_test:auth] 2015-10-13T18:47:31.353-0400 s20264| 2015-10-13T18:47:31.353-0400 I COMMAND [conn1] CMD: shardcollection: { shardCollection: "test.foo", key: { x: 1.0 } } [js_test:auth] 2015-10-13T18:47:31.353-0400 s20264| 2015-10-13T18:47:31.353-0400 D SHARDING [conn1] trying to acquire new distributed lock for test.foo ( lock timeout : 900000 ms, ping interval : 30000 ms, process : ubuntu:20264:1444776427:399327856 ) with lockSessionID: 561d8a03c06b51335e5d6895, why: shardCollection [js_test:auth] 2015-10-13T18:47:31.354-0400 s20264| 2015-10-13T18:47:31.353-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:01.353-0400 cmd:{ findAndModify: "locks", query: { _id: "test.foo", state: 0 }, update: { $set: { ts: ObjectId('561d8a03c06b51335e5d6895'), state: 2, who: "ubuntu:20264:1444776427:399327856:conn1", 
process: "ubuntu:20264:1444776427:399327856", when: new Date(1444776451353), why: "shardCollection" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.354-0400 s20264| 2015-10-13T18:47:31.353-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.371-0400 s20264| 2015-10-13T18:47:31.371-0400 I SHARDING [conn1] distributed lock 'test.foo' acquired for 'shardCollection', ts : 561d8a03c06b51335e5d6895 [js_test:auth] 2015-10-13T18:47:31.372-0400 s20264| 2015-10-13T18:47:31.371-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "databases", filter: { _id: "test" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776451000|4, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.372-0400 s20264| 2015-10-13T18:47:31.371-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.372-0400 s20264| 2015-10-13T18:47:31.372-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20261 db:config expDate:2015-10-13T18:48:01.372-0400 cmd:{ count: "chunks", query: { ns: "test.foo" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776451000|4, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.372-0400 s20264| 2015-10-13T18:47:31.372-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:31.373-0400 s20264| 2015-10-13T18:47:31.373-0400 I SHARDING [conn1] about to log metadata event: { _id: "ubuntu-2015-10-13T18:47:31.373-0400-561d8a03c06b51335e5d6896", server: "ubuntu", clientAddr: "127.0.0.1:54935", time: new Date(1444776451373), what: "shardCollection.start", ns: "test.foo", details: { shardKey: { x: 1.0 }, collection: "test.foo", primary: "d1:d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", 
initShards: [], numChunks: 1 } } [js_test:auth] 2015-10-13T18:47:31.373-0400 s20264| 2015-10-13T18:47:31.373-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:01.373-0400 cmd:{ insert: "changelog", documents: [ { _id: "ubuntu-2015-10-13T18:47:31.373-0400-561d8a03c06b51335e5d6896", server: "ubuntu", clientAddr: "127.0.0.1:54935", time: new Date(1444776451373), what: "shardCollection.start", ns: "test.foo", details: { shardKey: { x: 1.0 }, collection: "test.foo", primary: "d1:d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", initShards: [], numChunks: 1 } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.374-0400 s20264| 2015-10-13T18:47:31.373-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.389-0400 s20264| 2015-10-13T18:47:31.389-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:test expDate:2015-10-13T18:48:01.389-0400 cmd:{ count: "foo" } [js_test:auth] 2015-10-13T18:47:31.389-0400 s20264| 2015-10-13T18:47:31.389-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:31.390-0400 s20264| 2015-10-13T18:47:31.389-0400 I SHARDING [conn1] going to create 1 chunk(s) for: test.foo using new epoch 561d8a03c06b51335e5d6897 [js_test:auth] 2015-10-13T18:47:31.390-0400 s20264| 2015-10-13T18:47:31.389-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:01.389-0400 cmd:{ update: "chunks", updates: [ { q: { _id: "test.foo-x_MinKey" }, u: { _id: "test.foo-x_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897'), ns: "test.foo", min: { x: MinKey }, max: { x: MaxKey }, shard: "d1" }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.390-0400 s20264| 2015-10-13T18:47:31.389-0400 D 
ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.407-0400 s20264| 2015-10-13T18:47:31.406-0400 D SHARDING [conn1] major version query from 0|0||561d8a03c06b51335e5d6897 and over 0 shards is query: { ns: "test.foo", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 } [js_test:auth] 2015-10-13T18:47:31.407-0400 s20264| 2015-10-13T18:47:31.407-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "chunks", filter: { ns: "test.foo", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776451000|6, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.407-0400 s20264| 2015-10-13T18:47:31.407-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.408-0400 s20264| 2015-10-13T18:47:31.407-0400 D SHARDING [conn1] loaded 1 chunks into new chunk manager for test.foo with version 1|0||561d8a03c06b51335e5d6897 [js_test:auth] 2015-10-13T18:47:31.408-0400 s20264| 2015-10-13T18:47:31.407-0400 I SHARDING [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 2 version: 1|0||561d8a03c06b51335e5d6897 based on: (empty) [js_test:auth] 2015-10-13T18:47:31.408-0400 s20264| 2015-10-13T18:47:31.408-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:01.408-0400 cmd:{ update: "collections", updates: [ { q: { _id: "test.foo" }, u: { _id: "test.foo", lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897'), lastmod: new Date(4294967296), dropped: false, key: { x: 1.0 }, unique: false }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.409-0400 s20264| 2015-10-13T18:47:31.408-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.525-0400 
c20260| 2015-10-13T18:47:31.524-0400 I WRITE [conn21] update config.collections query: { _id: "test.foo" } update: { _id: "test.foo", lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897'), lastmod: new Date(4294967296), dropped: false, key: { x: 1.0 }, unique: false } keysExamined:0 docsExamined:0 nMatched:1 nModified:1 upsert:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 115ms [js_test:auth] 2015-10-13T18:47:31.657-0400 c20260| 2015-10-13T18:47:31.657-0400 I COMMAND [conn21] command config.$cmd command: update { update: "collections", updates: [ { q: { _id: "test.foo" }, u: { _id: "test.foo", lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897'), lastmod: new Date(4294967296), dropped: false, key: { x: 1.0 }, unique: false }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:391 locks:{ Global: { acquireCount: { r: 5, w: 5 } }, Database: { acquireCount: { w: 4, W: 1 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 248ms [js_test:auth] 2015-10-13T18:47:31.657-0400 s20264| 2015-10-13T18:47:31.657-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "databases", filter: { _id: "test" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776451000|8, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.657-0400 s20264| 2015-10-13T18:47:31.657-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:47:31.657-0400 s20264| 2015-10-13T18:47:31.657-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: 
"collections", filter: { _id: /^test\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776451000|8, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.657-0400 s20264| 2015-10-13T18:47:31.657-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:31.658-0400 s20264| 2015-10-13T18:47:31.658-0400 D SHARDING [conn1] major version query from 0|0||561d8a03c06b51335e5d6897 and over 0 shards is query: { ns: "test.foo", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 } [js_test:auth] 2015-10-13T18:47:31.658-0400 s20264| 2015-10-13T18:47:31.658-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "chunks", filter: { ns: "test.foo", lastmod: { $gte: Timestamp 0|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776451000|8, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.658-0400 s20264| 2015-10-13T18:47:31.658-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:47:31.659-0400 s20264| 2015-10-13T18:47:31.659-0400 D SHARDING [conn1] loaded 1 chunks into new chunk manager for test.foo with version 1|0||561d8a03c06b51335e5d6897 [js_test:auth] 2015-10-13T18:47:31.659-0400 s20264| 2015-10-13T18:47:31.659-0400 I SHARDING [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|0||561d8a03c06b51335e5d6897 based on: (empty) [js_test:auth] 2015-10-13T18:47:31.659-0400 s20264| 2015-10-13T18:47:31.659-0400 D SHARDING [conn1] found 1 collections left and 0 collections dropped for database test [js_test:auth] 2015-10-13T18:47:31.659-0400 s20264| 2015-10-13T18:47:31.659-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:01.659-0400 cmd:{ setShardVersion: "test.foo", init: false, authoritative: true, configdb: 
"auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", shard: "d1", shardHost: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", version: Timestamp 1000|0, versionEpoch: ObjectId('561d8a03c06b51335e5d6897'), noConnectionVersioning: true } [js_test:auth] 2015-10-13T18:47:31.659-0400 s20264| 2015-10-13T18:47:31.659-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:31.660-0400 d20265| 2015-10-13T18:47:31.659-0400 I SHARDING [conn11] remotely refreshing metadata for test.foo with requested shard version 1|0||561d8a03c06b51335e5d6897, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000 [js_test:auth] 2015-10-13T18:47:31.660-0400 d20265| 2015-10-13T18:47:31.660-0400 I SHARDING [conn11] collection test.foo was previously unsharded, new metadata loaded with shard version 1|0||561d8a03c06b51335e5d6897 [js_test:auth] 2015-10-13T18:47:31.660-0400 d20265| 2015-10-13T18:47:31.660-0400 I SHARDING [conn11] collection version was loaded at version 1|0||561d8a03c06b51335e5d6897, took 1ms [js_test:auth] 2015-10-13T18:47:31.660-0400 s20264| 2015-10-13T18:47:31.660-0400 I SHARDING [conn1] about to log metadata event: { _id: "ubuntu-2015-10-13T18:47:31.660-0400-561d8a03c06b51335e5d6898", server: "ubuntu", clientAddr: "127.0.0.1:54935", time: new Date(1444776451660), what: "shardCollection", ns: "test.foo", details: { version: "1|0||561d8a03c06b51335e5d6897" } } [js_test:auth] 2015-10-13T18:47:31.661-0400 s20264| 2015-10-13T18:47:31.660-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:01.660-0400 cmd:{ insert: "changelog", documents: [ { _id: "ubuntu-2015-10-13T18:47:31.660-0400-561d8a03c06b51335e5d6898", server: "ubuntu", clientAddr: "127.0.0.1:54935", time: new Date(1444776451660), what: "shardCollection", ns: "test.foo", details: { version: "1|0||561d8a03c06b51335e5d6897" } } ], writeConcern: { w: 
"majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.661-0400 s20264| 2015-10-13T18:47:31.660-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.677-0400 s20264| 2015-10-13T18:47:31.677-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:01.677-0400 cmd:{ findAndModify: "locks", query: { ts: ObjectId('561d8a03c06b51335e5d6895') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.677-0400 s20264| 2015-10-13T18:47:31.677-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.695-0400 s20264| 2015-10-13T18:47:31.695-0400 I SHARDING [conn1] distributed lock with ts: 561d8a03c06b51335e5d6895' unlocked. [js_test:auth] 2015-10-13T18:47:31.696-0400 ReplSetTest waitForIndicator state on connection to ubuntu:20266 [js_test:auth] 2015-10-13T18:47:31.696-0400 [ 2 ] [js_test:auth] 2015-10-13T18:47:31.696-0400 ReplSetTest waitForIndicator from node connection to ubuntu:20266 [js_test:auth] 2015-10-13T18:47:31.697-0400 ReplSetTest waitForIndicator Initial status ( timeout : 300000 ) : [js_test:auth] 2015-10-13T18:47:31.699-0400 { [js_test:auth] 2015-10-13T18:47:31.699-0400 "set" : "d1", [js_test:auth] 2015-10-13T18:47:31.700-0400 "date" : ISODate("2015-10-13T22:47:31.697Z"), [js_test:auth] 2015-10-13T18:47:31.700-0400 "myState" : 1, [js_test:auth] 2015-10-13T18:47:31.700-0400 "term" : NumberLong(1), [js_test:auth] 2015-10-13T18:47:31.701-0400 "heartbeatIntervalMillis" : NumberLong(2000), [js_test:auth] 2015-10-13T18:47:31.701-0400 "members" : [ [js_test:auth] 2015-10-13T18:47:31.701-0400 { [js_test:auth] 2015-10-13T18:47:31.701-0400 "_id" : 0, [js_test:auth] 2015-10-13T18:47:31.701-0400 "name" : "ubuntu:20265", [js_test:auth] 2015-10-13T18:47:31.701-0400 "health" : 1, [js_test:auth] 
2015-10-13T18:47:31.701-0400 "state" : 1, [js_test:auth] 2015-10-13T18:47:31.702-0400 "stateStr" : "PRIMARY", [js_test:auth] 2015-10-13T18:47:31.702-0400 "uptime" : 10, [js_test:auth] 2015-10-13T18:47:31.702-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.702-0400 "ts" : Timestamp(1444776451, 2), [js_test:auth] 2015-10-13T18:47:31.702-0400 "t" : NumberLong(1) [js_test:auth] 2015-10-13T18:47:31.703-0400 }, [js_test:auth] 2015-10-13T18:47:31.703-0400 "optimeDate" : ISODate("2015-10-13T22:47:31Z"), [js_test:auth] 2015-10-13T18:47:31.703-0400 "infoMessage" : "could not find member to sync from", [js_test:auth] 2015-10-13T18:47:31.703-0400 "electionTime" : Timestamp(1444776449, 1), [js_test:auth] 2015-10-13T18:47:31.703-0400 "electionDate" : ISODate("2015-10-13T22:47:29Z"), [js_test:auth] 2015-10-13T18:47:31.703-0400 "configVersion" : 1, [js_test:auth] 2015-10-13T18:47:31.703-0400 "self" : true [js_test:auth] 2015-10-13T18:47:31.704-0400 }, [js_test:auth] 2015-10-13T18:47:31.704-0400 { [js_test:auth] 2015-10-13T18:47:31.704-0400 "_id" : 1, [js_test:auth] 2015-10-13T18:47:31.704-0400 "name" : "ubuntu:20266", [js_test:auth] 2015-10-13T18:47:31.704-0400 "health" : 1, [js_test:auth] 2015-10-13T18:47:31.704-0400 "state" : 2, [js_test:auth] 2015-10-13T18:47:31.704-0400 "stateStr" : "SECONDARY", [js_test:auth] 2015-10-13T18:47:31.705-0400 "uptime" : 7, [js_test:auth] 2015-10-13T18:47:31.705-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.705-0400 "ts" : Timestamp(1444776444, 1), [js_test:auth] 2015-10-13T18:47:31.705-0400 "t" : NumberLong(0) [js_test:auth] 2015-10-13T18:47:31.705-0400 }, [js_test:auth] 2015-10-13T18:47:31.705-0400 "optimeDate" : ISODate("2015-10-13T22:47:24Z"), [js_test:auth] 2015-10-13T18:47:31.706-0400 "lastHeartbeat" : ISODate("2015-10-13T22:47:31.589Z"), [js_test:auth] 2015-10-13T18:47:31.706-0400 "lastHeartbeatRecv" : ISODate("2015-10-13T22:47:31.010Z"), [js_test:auth] 2015-10-13T18:47:31.706-0400 "pingMs" : NumberLong(10), [js_test:auth] 
2015-10-13T18:47:31.706-0400 "configVersion" : 1 [js_test:auth] 2015-10-13T18:47:31.706-0400 }, [js_test:auth] 2015-10-13T18:47:31.706-0400 { [js_test:auth] 2015-10-13T18:47:31.706-0400 "_id" : 2, [js_test:auth] 2015-10-13T18:47:31.706-0400 "name" : "ubuntu:20267", [js_test:auth] 2015-10-13T18:47:31.706-0400 "health" : 1, [js_test:auth] 2015-10-13T18:47:31.706-0400 "state" : 2, [js_test:auth] 2015-10-13T18:47:31.706-0400 "stateStr" : "SECONDARY", [js_test:auth] 2015-10-13T18:47:31.706-0400 "uptime" : 7, [js_test:auth] 2015-10-13T18:47:31.707-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.707-0400 "ts" : Timestamp(1444776444, 1), [js_test:auth] 2015-10-13T18:47:31.707-0400 "t" : NumberLong(0) [js_test:auth] 2015-10-13T18:47:31.707-0400 }, [js_test:auth] 2015-10-13T18:47:31.707-0400 "optimeDate" : ISODate("2015-10-13T22:47:24Z"), [js_test:auth] 2015-10-13T18:47:31.707-0400 "lastHeartbeat" : ISODate("2015-10-13T22:47:31.556Z"), [js_test:auth] 2015-10-13T18:47:31.707-0400 "lastHeartbeatRecv" : ISODate("2015-10-13T22:47:30.012Z"), [js_test:auth] 2015-10-13T18:47:31.707-0400 "pingMs" : NumberLong(0), [js_test:auth] 2015-10-13T18:47:31.707-0400 "configVersion" : 1 [js_test:auth] 2015-10-13T18:47:31.707-0400 } [js_test:auth] 2015-10-13T18:47:31.707-0400 ], [js_test:auth] 2015-10-13T18:47:31.707-0400 "ok" : 1 [js_test:auth] 2015-10-13T18:47:31.708-0400 } [js_test:auth] 2015-10-13T18:47:31.708-0400 Status for : ubuntu:20265, checking ubuntu:20266/ubuntu:20266 [js_test:auth] 2015-10-13T18:47:31.708-0400 Status for : ubuntu:20266, checking ubuntu:20266/ubuntu:20266 [js_test:auth] 2015-10-13T18:47:31.708-0400 Status : 2 target state : 2 [js_test:auth] 2015-10-13T18:47:31.708-0400 ReplSetTest waitForIndicator final status: [js_test:auth] 2015-10-13T18:47:31.708-0400 { [js_test:auth] 2015-10-13T18:47:31.708-0400 "set" : "d1", [js_test:auth] 2015-10-13T18:47:31.708-0400 "date" : ISODate("2015-10-13T22:47:31.697Z"), [js_test:auth] 2015-10-13T18:47:31.708-0400 "myState" : 1, 
[js_test:auth] 2015-10-13T18:47:31.708-0400 "term" : NumberLong(1), [js_test:auth] 2015-10-13T18:47:31.708-0400 "heartbeatIntervalMillis" : NumberLong(2000), [js_test:auth] 2015-10-13T18:47:31.708-0400 "members" : [ [js_test:auth] 2015-10-13T18:47:31.709-0400 { [js_test:auth] 2015-10-13T18:47:31.709-0400 "_id" : 0, [js_test:auth] 2015-10-13T18:47:31.709-0400 "name" : "ubuntu:20265", [js_test:auth] 2015-10-13T18:47:31.709-0400 "health" : 1, [js_test:auth] 2015-10-13T18:47:31.709-0400 "state" : 1, [js_test:auth] 2015-10-13T18:47:31.709-0400 "stateStr" : "PRIMARY", [js_test:auth] 2015-10-13T18:47:31.709-0400 "uptime" : 10, [js_test:auth] 2015-10-13T18:47:31.709-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.709-0400 "ts" : Timestamp(1444776451, 2), [js_test:auth] 2015-10-13T18:47:31.709-0400 "t" : NumberLong(1) [js_test:auth] 2015-10-13T18:47:31.709-0400 }, [js_test:auth] 2015-10-13T18:47:31.709-0400 "optimeDate" : ISODate("2015-10-13T22:47:31Z"), [js_test:auth] 2015-10-13T18:47:31.709-0400 "infoMessage" : "could not find member to sync from", [js_test:auth] 2015-10-13T18:47:31.709-0400 "electionTime" : Timestamp(1444776449, 1), [js_test:auth] 2015-10-13T18:47:31.709-0400 "electionDate" : ISODate("2015-10-13T22:47:29Z"), [js_test:auth] 2015-10-13T18:47:31.709-0400 "configVersion" : 1, [js_test:auth] 2015-10-13T18:47:31.710-0400 "self" : true [js_test:auth] 2015-10-13T18:47:31.710-0400 }, [js_test:auth] 2015-10-13T18:47:31.710-0400 { [js_test:auth] 2015-10-13T18:47:31.710-0400 "_id" : 1, [js_test:auth] 2015-10-13T18:47:31.710-0400 "name" : "ubuntu:20266", [js_test:auth] 2015-10-13T18:47:31.710-0400 "health" : 1, [js_test:auth] 2015-10-13T18:47:31.711-0400 "state" : 2, [js_test:auth] 2015-10-13T18:47:31.711-0400 "stateStr" : "SECONDARY", [js_test:auth] 2015-10-13T18:47:31.711-0400 "uptime" : 7, [js_test:auth] 2015-10-13T18:47:31.712-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.712-0400 "ts" : Timestamp(1444776444, 1), [js_test:auth] 
2015-10-13T18:47:31.712-0400 "t" : NumberLong(0) [js_test:auth] 2015-10-13T18:47:31.712-0400 }, [js_test:auth] 2015-10-13T18:47:31.713-0400 "optimeDate" : ISODate("2015-10-13T22:47:24Z"), [js_test:auth] 2015-10-13T18:47:31.713-0400 "lastHeartbeat" : ISODate("2015-10-13T22:47:31.589Z"), [js_test:auth] 2015-10-13T18:47:31.713-0400 "lastHeartbeatRecv" : ISODate("2015-10-13T22:47:31.010Z"), [js_test:auth] 2015-10-13T18:47:31.713-0400 "pingMs" : NumberLong(10), [js_test:auth] 2015-10-13T18:47:31.713-0400 "configVersion" : 1 [js_test:auth] 2015-10-13T18:47:31.714-0400 }, [js_test:auth] 2015-10-13T18:47:31.714-0400 { [js_test:auth] 2015-10-13T18:47:31.714-0400 "_id" : 2, [js_test:auth] 2015-10-13T18:47:31.714-0400 "name" : "ubuntu:20267", [js_test:auth] 2015-10-13T18:47:31.714-0400 "health" : 1, [js_test:auth] 2015-10-13T18:47:31.715-0400 "state" : 2, [js_test:auth] 2015-10-13T18:47:31.715-0400 "stateStr" : "SECONDARY", [js_test:auth] 2015-10-13T18:47:31.715-0400 "uptime" : 7, [js_test:auth] 2015-10-13T18:47:31.715-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.715-0400 "ts" : Timestamp(1444776444, 1), [js_test:auth] 2015-10-13T18:47:31.716-0400 "t" : NumberLong(0) [js_test:auth] 2015-10-13T18:47:31.716-0400 }, [js_test:auth] 2015-10-13T18:47:31.716-0400 "optimeDate" : ISODate("2015-10-13T22:47:24Z"), [js_test:auth] 2015-10-13T18:47:31.716-0400 "lastHeartbeat" : ISODate("2015-10-13T22:47:31.556Z"), [js_test:auth] 2015-10-13T18:47:31.717-0400 "lastHeartbeatRecv" : ISODate("2015-10-13T22:47:30.012Z"), [js_test:auth] 2015-10-13T18:47:31.717-0400 "pingMs" : NumberLong(0), [js_test:auth] 2015-10-13T18:47:31.717-0400 "configVersion" : 1 [js_test:auth] 2015-10-13T18:47:31.717-0400 } [js_test:auth] 2015-10-13T18:47:31.717-0400 ], [js_test:auth] 2015-10-13T18:47:31.718-0400 "ok" : 1 [js_test:auth] 2015-10-13T18:47:31.718-0400 } [js_test:auth] 2015-10-13T18:47:31.718-0400 ReplSetTest waitForIndicator state on connection to ubuntu:20267 [js_test:auth] 
2015-10-13T18:47:31.718-0400 [ 2 ] [js_test:auth] 2015-10-13T18:47:31.719-0400 ReplSetTest waitForIndicator from node connection to ubuntu:20267 [js_test:auth] 2015-10-13T18:47:31.719-0400 ReplSetTest waitForIndicator Initial status ( timeout : 300000 ) : [js_test:auth] 2015-10-13T18:47:31.719-0400 { [js_test:auth] 2015-10-13T18:47:31.719-0400 "set" : "d1", [js_test:auth] 2015-10-13T18:47:31.719-0400 "date" : ISODate("2015-10-13T22:47:31.701Z"), [js_test:auth] 2015-10-13T18:47:31.719-0400 "myState" : 1, [js_test:auth] 2015-10-13T18:47:31.720-0400 "term" : NumberLong(1), [js_test:auth] 2015-10-13T18:47:31.720-0400 "heartbeatIntervalMillis" : NumberLong(2000), [js_test:auth] 2015-10-13T18:47:31.720-0400 "members" : [ [js_test:auth] 2015-10-13T18:47:31.720-0400 { [js_test:auth] 2015-10-13T18:47:31.720-0400 "_id" : 0, [js_test:auth] 2015-10-13T18:47:31.721-0400 "name" : "ubuntu:20265", [js_test:auth] 2015-10-13T18:47:31.721-0400 "health" : 1, [js_test:auth] 2015-10-13T18:47:31.721-0400 "state" : 1, [js_test:auth] 2015-10-13T18:47:31.722-0400 "stateStr" : "PRIMARY", [js_test:auth] 2015-10-13T18:47:31.722-0400 "uptime" : 10, [js_test:auth] 2015-10-13T18:47:31.722-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.722-0400 "ts" : Timestamp(1444776451, 2), [js_test:auth] 2015-10-13T18:47:31.723-0400 "t" : NumberLong(1) [js_test:auth] 2015-10-13T18:47:31.723-0400 }, [js_test:auth] 2015-10-13T18:47:31.723-0400 "optimeDate" : ISODate("2015-10-13T22:47:31Z"), [js_test:auth] 2015-10-13T18:47:31.724-0400 "infoMessage" : "could not find member to sync from", [js_test:auth] 2015-10-13T18:47:31.724-0400 "electionTime" : Timestamp(1444776449, 1), [js_test:auth] 2015-10-13T18:47:31.724-0400 "electionDate" : ISODate("2015-10-13T22:47:29Z"), [js_test:auth] 2015-10-13T18:47:31.725-0400 "configVersion" : 1, [js_test:auth] 2015-10-13T18:47:31.725-0400 "self" : true [js_test:auth] 2015-10-13T18:47:31.725-0400 }, [js_test:auth] 2015-10-13T18:47:31.726-0400 { [js_test:auth] 
2015-10-13T18:47:31.726-0400 "_id" : 1, [js_test:auth] 2015-10-13T18:47:31.726-0400 "name" : "ubuntu:20266", [js_test:auth] 2015-10-13T18:47:31.726-0400 "health" : 1, [js_test:auth] 2015-10-13T18:47:31.727-0400 "state" : 2, [js_test:auth] 2015-10-13T18:47:31.727-0400 "stateStr" : "SECONDARY", [js_test:auth] 2015-10-13T18:47:31.728-0400 "uptime" : 7, [js_test:auth] 2015-10-13T18:47:31.728-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.729-0400 "ts" : Timestamp(1444776444, 1), [js_test:auth] 2015-10-13T18:47:31.729-0400 "t" : NumberLong(0) [js_test:auth] 2015-10-13T18:47:31.729-0400 }, [js_test:auth] 2015-10-13T18:47:31.729-0400 "optimeDate" : ISODate("2015-10-13T22:47:24Z"), [js_test:auth] 2015-10-13T18:47:31.730-0400 "lastHeartbeat" : ISODate("2015-10-13T22:47:31.589Z"), [js_test:auth] 2015-10-13T18:47:31.730-0400 "lastHeartbeatRecv" : ISODate("2015-10-13T22:47:31.010Z"), [js_test:auth] 2015-10-13T18:47:31.730-0400 "pingMs" : NumberLong(10), [js_test:auth] 2015-10-13T18:47:31.730-0400 "configVersion" : 1 [js_test:auth] 2015-10-13T18:47:31.731-0400 }, [js_test:auth] 2015-10-13T18:47:31.731-0400 { [js_test:auth] 2015-10-13T18:47:31.731-0400 "_id" : 2, [js_test:auth] 2015-10-13T18:47:31.731-0400 "name" : "ubuntu:20267", [js_test:auth] 2015-10-13T18:47:31.731-0400 "health" : 1, [js_test:auth] 2015-10-13T18:47:31.731-0400 "state" : 2, [js_test:auth] 2015-10-13T18:47:31.731-0400 "stateStr" : "SECONDARY", [js_test:auth] 2015-10-13T18:47:31.731-0400 "uptime" : 7, [js_test:auth] 2015-10-13T18:47:31.731-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.731-0400 "ts" : Timestamp(1444776444, 1), [js_test:auth] 2015-10-13T18:47:31.731-0400 "t" : NumberLong(0) [js_test:auth] 2015-10-13T18:47:31.731-0400 }, [js_test:auth] 2015-10-13T18:47:31.731-0400 "optimeDate" : ISODate("2015-10-13T22:47:24Z"), [js_test:auth] 2015-10-13T18:47:31.731-0400 "lastHeartbeat" : ISODate("2015-10-13T22:47:31.556Z"), [js_test:auth] 2015-10-13T18:47:31.732-0400 "lastHeartbeatRecv" : 
ISODate("2015-10-13T22:47:30.012Z"), [js_test:auth] 2015-10-13T18:47:31.732-0400 "pingMs" : NumberLong(0), [js_test:auth] 2015-10-13T18:47:31.732-0400 "configVersion" : 1 [js_test:auth] 2015-10-13T18:47:31.732-0400 } [js_test:auth] 2015-10-13T18:47:31.732-0400 ], [js_test:auth] 2015-10-13T18:47:31.732-0400 "ok" : 1 [js_test:auth] 2015-10-13T18:47:31.732-0400 } [js_test:auth] 2015-10-13T18:47:31.732-0400 Status for : ubuntu:20265, checking ubuntu:20267/ubuntu:20267 [js_test:auth] 2015-10-13T18:47:31.732-0400 Status for : ubuntu:20266, checking ubuntu:20267/ubuntu:20267 [js_test:auth] 2015-10-13T18:47:31.732-0400 Status for : ubuntu:20267, checking ubuntu:20267/ubuntu:20267 [js_test:auth] 2015-10-13T18:47:31.732-0400 Status : 2 target state : 2 [js_test:auth] 2015-10-13T18:47:31.732-0400 ReplSetTest waitForIndicator final status: [js_test:auth] 2015-10-13T18:47:31.732-0400 { [js_test:auth] 2015-10-13T18:47:31.733-0400 "set" : "d1", [js_test:auth] 2015-10-13T18:47:31.733-0400 "date" : ISODate("2015-10-13T22:47:31.701Z"), [js_test:auth] 2015-10-13T18:47:31.733-0400 "myState" : 1, [js_test:auth] 2015-10-13T18:47:31.733-0400 "term" : NumberLong(1), [js_test:auth] 2015-10-13T18:47:31.733-0400 "heartbeatIntervalMillis" : NumberLong(2000), [js_test:auth] 2015-10-13T18:47:31.733-0400 "members" : [ [js_test:auth] 2015-10-13T18:47:31.733-0400 { [js_test:auth] 2015-10-13T18:47:31.733-0400 "_id" : 0, [js_test:auth] 2015-10-13T18:47:31.733-0400 "name" : "ubuntu:20265", [js_test:auth] 2015-10-13T18:47:31.733-0400 "health" : 1, [js_test:auth] 2015-10-13T18:47:31.733-0400 "state" : 1, [js_test:auth] 2015-10-13T18:47:31.733-0400 "stateStr" : "PRIMARY", [js_test:auth] 2015-10-13T18:47:31.733-0400 "uptime" : 10, [js_test:auth] 2015-10-13T18:47:31.733-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.734-0400 "ts" : Timestamp(1444776451, 2), [js_test:auth] 2015-10-13T18:47:31.734-0400 "t" : NumberLong(1) [js_test:auth] 2015-10-13T18:47:31.734-0400 }, [js_test:auth] 
2015-10-13T18:47:31.734-0400 "optimeDate" : ISODate("2015-10-13T22:47:31Z"), [js_test:auth] 2015-10-13T18:47:31.734-0400 "infoMessage" : "could not find member to sync from", [js_test:auth] 2015-10-13T18:47:31.734-0400 "electionTime" : Timestamp(1444776449, 1), [js_test:auth] 2015-10-13T18:47:31.734-0400 "electionDate" : ISODate("2015-10-13T22:47:29Z"), [js_test:auth] 2015-10-13T18:47:31.734-0400 "configVersion" : 1, [js_test:auth] 2015-10-13T18:47:31.734-0400 "self" : true [js_test:auth] 2015-10-13T18:47:31.734-0400 }, [js_test:auth] 2015-10-13T18:47:31.734-0400 { [js_test:auth] 2015-10-13T18:47:31.735-0400 "_id" : 1, [js_test:auth] 2015-10-13T18:47:31.735-0400 "name" : "ubuntu:20266", [js_test:auth] 2015-10-13T18:47:31.735-0400 "health" : 1, [js_test:auth] 2015-10-13T18:47:31.735-0400 "state" : 2, [js_test:auth] 2015-10-13T18:47:31.735-0400 "stateStr" : "SECONDARY", [js_test:auth] 2015-10-13T18:47:31.735-0400 "uptime" : 7, [js_test:auth] 2015-10-13T18:47:31.735-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.735-0400 "ts" : Timestamp(1444776444, 1), [js_test:auth] 2015-10-13T18:47:31.735-0400 "t" : NumberLong(0) [js_test:auth] 2015-10-13T18:47:31.735-0400 }, [js_test:auth] 2015-10-13T18:47:31.735-0400 "optimeDate" : ISODate("2015-10-13T22:47:24Z"), [js_test:auth] 2015-10-13T18:47:31.735-0400 "lastHeartbeat" : ISODate("2015-10-13T22:47:31.589Z"), [js_test:auth] 2015-10-13T18:47:31.735-0400 "lastHeartbeatRecv" : ISODate("2015-10-13T22:47:31.010Z"), [js_test:auth] 2015-10-13T18:47:31.736-0400 "pingMs" : NumberLong(10), [js_test:auth] 2015-10-13T18:47:31.736-0400 "configVersion" : 1 [js_test:auth] 2015-10-13T18:47:31.736-0400 }, [js_test:auth] 2015-10-13T18:47:31.736-0400 { [js_test:auth] 2015-10-13T18:47:31.736-0400 "_id" : 2, [js_test:auth] 2015-10-13T18:47:31.736-0400 "name" : "ubuntu:20267", [js_test:auth] 2015-10-13T18:47:31.736-0400 "health" : 1, [js_test:auth] 2015-10-13T18:47:31.736-0400 "state" : 2, [js_test:auth] 2015-10-13T18:47:31.736-0400 "stateStr" 
: "SECONDARY", [js_test:auth] 2015-10-13T18:47:31.736-0400 "uptime" : 7, [js_test:auth] 2015-10-13T18:47:31.736-0400 "optime" : { [js_test:auth] 2015-10-13T18:47:31.736-0400 "ts" : Timestamp(1444776444, 1), [js_test:auth] 2015-10-13T18:47:31.736-0400 "t" : NumberLong(0) [js_test:auth] 2015-10-13T18:47:31.736-0400 }, [js_test:auth] 2015-10-13T18:47:31.737-0400 "optimeDate" : ISODate("2015-10-13T22:47:24Z"), [js_test:auth] 2015-10-13T18:47:31.737-0400 "lastHeartbeat" : ISODate("2015-10-13T22:47:31.556Z"), [js_test:auth] 2015-10-13T18:47:31.737-0400 "lastHeartbeatRecv" : ISODate("2015-10-13T22:47:30.012Z"), [js_test:auth] 2015-10-13T18:47:31.737-0400 "pingMs" : NumberLong(0), [js_test:auth] 2015-10-13T18:47:31.737-0400 "configVersion" : 1 [js_test:auth] 2015-10-13T18:47:31.737-0400 } [js_test:auth] 2015-10-13T18:47:31.737-0400 ], [js_test:auth] 2015-10-13T18:47:31.737-0400 "ok" : 1 [js_test:auth] 2015-10-13T18:47:31.737-0400 } [js_test:auth] 2015-10-13T18:47:31.737-0400 s20264| 2015-10-13T18:47:31.703-0400 D SHARDING [conn1] trying to acquire new distributed lock for authorizationData ( lock timeout : 900000 ms, ping interval : 30000 ms, process : ubuntu:20264:1444776427:399327856 ) with lockSessionID: 561d8a03c06b51335e5d6899, why: createUser [js_test:auth] 2015-10-13T18:47:31.737-0400 s20264| 2015-10-13T18:47:31.703-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:01.703-0400 cmd:{ findAndModify: "locks", query: { _id: "authorizationData", state: 0 }, update: { $set: { ts: ObjectId('561d8a03c06b51335e5d6899'), state: 2, who: "ubuntu:20264:1444776427:399327856:conn1", process: "ubuntu:20264:1444776427:399327856", when: new Date(1444776451703), why: "createUser" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.738-0400 s20264| 2015-10-13T18:47:31.703-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host 
ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.738-0400 s20264| 2015-10-13T18:47:31.718-0400 I SHARDING [conn1] distributed lock 'authorizationData' acquired for 'createUser', ts : 561d8a03c06b51335e5d6899 [js_test:auth] 2015-10-13T18:47:31.738-0400 s20264| 2015-10-13T18:47:31.719-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:test expDate:2015-10-13T18:48:01.718-0400 cmd:{ createUser: "bar", pwd: "131d1786e1320446336c3943bfc7ba1c", roles: [ "dbOwner" ], digestPassword: false, writeConcern: { w: "majority", wtimeout: 30000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.738-0400 s20264| 2015-10-13T18:47:31.719-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.738-0400 s20264| 2015-10-13T18:47:31.734-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:01.734-0400 cmd:{ findAndModify: "locks", query: { ts: ObjectId('561d8a03c06b51335e5d6899') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.738-0400 s20264| 2015-10-13T18:47:31.734-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.752-0400 s20264| 2015-10-13T18:47:31.751-0400 I SHARDING [conn1] distributed lock with ts: 561d8a03c06b51335e5d6899' unlocked. 
[js_test:auth] 2015-10-13T18:47:31.752-0400 Successfully added user: { "user" : "bar", "roles" : [ "dbOwner" ] } [js_test:auth] 2015-10-13T18:47:31.752-0400 s20264| 2015-10-13T18:47:31.752-0400 D SHARDING [conn1] trying to acquire new distributed lock for authorizationData ( lock timeout : 900000 ms, ping interval : 30000 ms, process : ubuntu:20264:1444776427:399327856 ) with lockSessionID: 561d8a03c06b51335e5d689a, why: createUser [js_test:auth] 2015-10-13T18:47:31.753-0400 s20264| 2015-10-13T18:47:31.752-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:01.752-0400 cmd:{ findAndModify: "locks", query: { _id: "authorizationData", state: 0 }, update: { $set: { ts: ObjectId('561d8a03c06b51335e5d689a'), state: 2, who: "ubuntu:20264:1444776427:399327856:conn1", process: "ubuntu:20264:1444776427:399327856", when: new Date(1444776451752), why: "createUser" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.753-0400 s20264| 2015-10-13T18:47:31.752-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.781-0400 s20264| 2015-10-13T18:47:31.780-0400 I SHARDING [conn1] distributed lock 'authorizationData' acquired for 'createUser', ts : 561d8a03c06b51335e5d689a [js_test:auth] 2015-10-13T18:47:31.781-0400 s20264| 2015-10-13T18:47:31.780-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:test expDate:2015-10-13T18:48:01.780-0400 cmd:{ createUser: "sad", pwd: "b874a27b7105ec1cfd1f26a5f7d27eca", roles: [ "read" ], digestPassword: false, writeConcern: { w: "majority", wtimeout: 30000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.781-0400 s20264| 2015-10-13T18:47:31.781-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.797-0400 s20264| 2015-10-13T18:47:31.797-0400 D ASIO 
[conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:01.797-0400 cmd:{ findAndModify: "locks", query: { ts: ObjectId('561d8a03c06b51335e5d689a') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.798-0400 s20264| 2015-10-13T18:47:31.797-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.819-0400 s20264| 2015-10-13T18:47:31.819-0400 I SHARDING [conn1] distributed lock with ts: 561d8a03c06b51335e5d689a' unlocked. [js_test:auth] 2015-10-13T18:47:31.819-0400 Successfully added user: { "user" : "sad", "roles" : [ "read" ] } [js_test:auth] 2015-10-13T18:47:31.819-0400 query try [js_test:auth] 2015-10-13T18:47:31.820-0400 s20264| 2015-10-13T18:47:31.820-0400 I ACCESS [conn1] Unauthorized not authorized on foo to execute command { find: "bar", filter: {}, limit: 1.0, singleBatch: true } [js_test:auth] 2015-10-13T18:47:31.820-0400 Error: error: { [js_test:auth] 2015-10-13T18:47:31.820-0400 "ok" : 0, [js_test:auth] 2015-10-13T18:47:31.820-0400 "errmsg" : "not authorized on foo to execute command { find: \"bar\", filter: {}, limit: 1.0, singleBatch: true }", [js_test:auth] 2015-10-13T18:47:31.820-0400 "code" : 13 [js_test:auth] 2015-10-13T18:47:31.820-0400 } [js_test:auth] 2015-10-13T18:47:31.821-0400 cmd try [js_test:auth] 2015-10-13T18:47:31.821-0400 s20264| 2015-10-13T18:47:31.820-0400 I ACCESS [conn1] Unauthorized listDatabases may only be run against the admin database. 
[js_test:auth] 2015-10-13T18:47:31.821-0400 insert try 1 [js_test:auth] 2015-10-13T18:47:31.821-0400 s20264| 2015-10-13T18:47:31.821-0400 I ACCESS [conn1] Unauthorized not authorized on test to execute command { insert: "foo", documents: [ { _id: ObjectId('561d8a03153cfb9a24a60506'), x: 1.0 } ], ordered: true } [js_test:auth] 2015-10-13T18:47:31.823-0400 s20264| 2015-10-13T18:47:31.823-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:48:01.823-0400 cmd:{ usersInfo: [ { user: "bar", db: "test" } ], showPrivileges: true, showCredentials: true, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.824-0400 s20264| 2015-10-13T18:47:31.823-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.842-0400 s20264| 2015-10-13T18:47:31.842-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:48:01.842-0400 cmd:{ usersInfo: [ { user: "bar", db: "test" } ], showPrivileges: true, showCredentials: true, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:31.843-0400 s20264| 2015-10-13T18:47:31.842-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:31.843-0400 s20264| 2015-10-13T18:47:31.843-0400 I ACCESS [conn1] Successfully authenticated as principal bar on test [js_test:auth] 2015-10-13T18:47:31.844-0400 s20264| 2015-10-13T18:47:31.843-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:test cmd:{ find: "foo", limit: 1, singleBatch: true, shardVersion: [ Timestamp 1000|0, ObjectId('561d8a03c06b51335e5d6897') ] } [js_test:auth] 2015-10-13T18:47:31.844-0400 s20264| 2015-10-13T18:47:31.843-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:31.844-0400 insert try 2 [js_test:auth] 2015-10-13T18:47:31.846-0400 s20264| 2015-10-13T18:47:31.846-0400 D SHARDING [conn1] 
about to initiate autosplit: ns: test.foo, shard: d1, lastmod: 1|0||561d8a03c06b51335e5d6897, min: { x: MinKey }, max: { x: MaxKey } dataWritten: 133979 splitThreshold: 921 [js_test:auth] 2015-10-13T18:47:31.846-0400 s20264| 2015-10-13T18:47:31.846-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:01.846-0400 cmd:{ splitVector: "test.foo", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, maxChunkSizeBytes: 133979, maxSplitPoints: 0, maxChunkObjects: 250000 } [js_test:auth] 2015-10-13T18:47:31.846-0400 s20264| 2015-10-13T18:47:31.846-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:31.846-0400 s20264| 2015-10-13T18:47:31.846-0400 D SHARDING [conn1] chunk not full enough to trigger auto-split [js_test:auth] 2015-10-13T18:47:31.847-0400 s20264| 2015-10-13T18:47:31.847-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:test cmd:{ find: "foo", shardVersion: [ Timestamp 1000|0, ObjectId('561d8a03c06b51335e5d6897') ] } [js_test:auth] 2015-10-13T18:47:31.847-0400 s20264| 2015-10-13T18:47:31.847-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:31.848-0400 ReplSetTest Starting Set [js_test:auth] 2015-10-13T18:47:31.848-0400 ReplSetTest n is : 0 [js_test:auth] 2015-10-13T18:47:31.848-0400 ReplSetTest n: 0 ports: [ 20268, 20269, 20270 ] 20268 number [js_test:auth] 2015-10-13T18:47:31.848-0400 { [js_test:auth] 2015-10-13T18:47:31.848-0400 "useHostName" : true, [js_test:auth] 2015-10-13T18:47:31.848-0400 "oplogSize" : 40, [js_test:auth] 2015-10-13T18:47:31.848-0400 "keyFile" : "jstests/libs/key1", [js_test:auth] 2015-10-13T18:47:31.848-0400 "port" : 20268, [js_test:auth] 2015-10-13T18:47:31.848-0400 "noprealloc" : "", [js_test:auth] 2015-10-13T18:47:31.848-0400 "smallfiles" : "", [js_test:auth] 2015-10-13T18:47:31.848-0400 "replSet" : "d2", 
[js_test:auth] 2015-10-13T18:47:31.849-0400 "dbpath" : "$set-$node", [js_test:auth] 2015-10-13T18:47:31.849-0400 "verbose" : 0, [js_test:auth] 2015-10-13T18:47:31.849-0400 "restart" : undefined, [js_test:auth] 2015-10-13T18:47:31.849-0400 "pathOpts" : { [js_test:auth] 2015-10-13T18:47:31.849-0400 "node" : 0, [js_test:auth] 2015-10-13T18:47:31.849-0400 "set" : "d2" [js_test:auth] 2015-10-13T18:47:31.849-0400 } [js_test:auth] 2015-10-13T18:47:31.849-0400 } [js_test:auth] 2015-10-13T18:47:31.849-0400 ReplSetTest Starting.... [js_test:auth] 2015-10-13T18:47:31.849-0400 Resetting db path '/data/db/job1/mongorunner/d2-0' [js_test:auth] 2015-10-13T18:47:31.850-0400 2015-10-13T18:47:31.850-0400 I - [thread1] shell: started program (sh25910): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 20268 --noprealloc --smallfiles --replSet d2 --dbpath /data/db/job1/mongorunner/d2-0 --nopreallocj --setParameter enableTestCommands=1 [js_test:auth] 2015-10-13T18:47:31.850-0400 2015-10-13T18:47:31.850-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20268, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:31.867-0400 d20268| note: noprealloc may hurt performance in many applications [js_test:auth] 2015-10-13T18:47:31.916-0400 d20268| 2015-10-13T18:47:31.916-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0), [js_test:auth] 2015-10-13T18:47:32.010-0400 d20266| 2015-10-13T18:47:32.009-0400 I REPL [ReplicationExecutor] syncing from: ubuntu:20265 [js_test:auth] 2015-10-13T18:47:32.011-0400 d20265| 2015-10-13T18:47:32.010-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53440 #14 (9 connections now open) [js_test:auth] 2015-10-13T18:47:32.027-0400 d20265| 
2015-10-13T18:47:32.026-0400 I ACCESS [conn14] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:32.027-0400 d20266| 2015-10-13T18:47:32.027-0400 I REPL [SyncSourceFeedback] setting syncSourceFeedback to ubuntu:20265 [js_test:auth] 2015-10-13T18:47:32.027-0400 d20265| 2015-10-13T18:47:32.027-0400 I NETWORK [conn14] end connection 127.0.0.1:53440 (8 connections now open) [js_test:auth] 2015-10-13T18:47:32.027-0400 d20265| 2015-10-13T18:47:32.027-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53441 #15 (9 connections now open) [js_test:auth] 2015-10-13T18:47:32.027-0400 d20265| 2015-10-13T18:47:32.027-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53442 #16 (10 connections now open) [js_test:auth] 2015-10-13T18:47:32.048-0400 d20265| 2015-10-13T18:47:32.047-0400 I ACCESS [conn15] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:32.048-0400 d20265| 2015-10-13T18:47:32.048-0400 I ACCESS [conn16] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:32.048-0400 d20266| 2015-10-13T18:47:32.048-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20265 [js_test:auth] 2015-10-13T18:47:32.050-0400 2015-10-13T18:47:32.050-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20268, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:32.227-0400 d20268| 2015-10-13T18:47:32.227-0400 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger [js_test:auth] 2015-10-13T18:47:32.227-0400 d20268| 2015-10-13T18:47:32.227-0400 I CONTROL [initandlisten] MongoDB starting : pid=25910 port=20268 dbpath=/data/db/job1/mongorunner/d2-0 64-bit host=ubuntu [js_test:auth] 2015-10-13T18:47:32.227-0400 d20268| 2015-10-13T18:47:32.227-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.227-0400 d20268| 
2015-10-13T18:47:32.227-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB. [js_test:auth] 2015-10-13T18:47:32.228-0400 d20268| 2015-10-13T18:47:32.227-0400 I CONTROL [initandlisten] ** Not recommended for production. [js_test:auth] 2015-10-13T18:47:32.228-0400 d20268| 2015-10-13T18:47:32.227-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.228-0400 d20266| 2015-10-13T18:47:32.227-0400 I INDEX [repl writer worker 1] build index on: test.foo properties: { v: 1, key: { x: 1.0 }, name: "x_1", ns: "test.foo" } [js_test:auth] 2015-10-13T18:47:32.228-0400 d20266| 2015-10-13T18:47:32.227-0400 I INDEX [repl writer worker 1] building index using bulk method [js_test:auth] 2015-10-13T18:47:32.228-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.228-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine. [js_test:auth] 2015-10-13T18:47:32.228-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems: [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options] [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. 
[js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.228-0400 I CONTROL [initandlisten] db version v3.1.10-pre- [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.229-0400 I CONTROL [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1 [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.229-0400 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014 [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.229-0400 I CONTROL [initandlisten] allocator: tcmalloc [js_test:auth] 2015-10-13T18:47:32.229-0400 d20268| 2015-10-13T18:47:32.229-0400 I CONTROL [initandlisten] modules: subscription [js_test:auth] 2015-10-13T18:47:32.230-0400 d20268| 2015-10-13T18:47:32.229-0400 I CONTROL [initandlisten] build environment: [js_test:auth] 2015-10-13T18:47:32.230-0400 d20268| 2015-10-13T18:47:32.229-0400 I CONTROL [initandlisten] distarch: x86_64 [js_test:auth] 2015-10-13T18:47:32.230-0400 d20268| 2015-10-13T18:47:32.229-0400 I CONTROL [initandlisten] target_arch: x86_64 [js_test:auth] 2015-10-13T18:47:32.230-0400 d20268| 2015-10-13T18:47:32.229-0400 I CONTROL [initandlisten] options: { net: { port: 20268 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "d2" }, security: { keyFile: "jstests/libs/key1" 
}, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/job1/mongorunner/d2-0", mmapv1: { preallocDataFiles: false, smallFiles: true } } } [js_test:auth] 2015-10-13T18:47:32.233-0400 d20266| 2015-10-13T18:47:32.232-0400 I INDEX [repl writer worker 1] build index done. scanned 0 total records. 0 secs [js_test:auth] 2015-10-13T18:47:32.251-0400 2015-10-13T18:47:32.251-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20268, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:32.327-0400 d20268| 2015-10-13T18:47:32.326-0400 I REPL [initandlisten] Did not find local voted for document at startup; NoMatchingDocument Did not find replica set lastVote document in local.replset.election [js_test:auth] 2015-10-13T18:47:32.327-0400 d20268| 2015-10-13T18:47:32.326-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset [js_test:auth] 2015-10-13T18:47:32.328-0400 d20268| 2015-10-13T18:47:32.328-0400 I FTDC [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/d2-0/diagnostic.data' [js_test:auth] 2015-10-13T18:47:32.432-0400 d20268| 2015-10-13T18:47:32.432-0400 I NETWORK [initandlisten] waiting for connections on port 20268 [js_test:auth] 2015-10-13T18:47:32.452-0400 d20268| 2015-10-13T18:47:32.451-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:36786 #1 (1 connection now open) [js_test:auth] 2015-10-13T18:47:32.452-0400 d20268| 2015-10-13T18:47:32.452-0400 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access [js_test:auth] 2015-10-13T18:47:32.452-0400 [ connection to ubuntu:20268 ] [js_test:auth] 2015-10-13T18:47:32.452-0400 ReplSetTest n is : 1 [js_test:auth] 2015-10-13T18:47:32.452-0400 ReplSetTest n: 1 ports: [ 20268, 20269, 20270 ] 20269 number [js_test:auth] 2015-10-13T18:47:32.453-0400 { 
[js_test:auth] 2015-10-13T18:47:32.453-0400 "useHostName" : true, [js_test:auth] 2015-10-13T18:47:32.453-0400 "oplogSize" : 40, [js_test:auth] 2015-10-13T18:47:32.453-0400 "keyFile" : "jstests/libs/key1", [js_test:auth] 2015-10-13T18:47:32.453-0400 "port" : 20269, [js_test:auth] 2015-10-13T18:47:32.453-0400 "noprealloc" : "", [js_test:auth] 2015-10-13T18:47:32.453-0400 "smallfiles" : "", [js_test:auth] 2015-10-13T18:47:32.453-0400 "replSet" : "d2", [js_test:auth] 2015-10-13T18:47:32.453-0400 "dbpath" : "$set-$node", [js_test:auth] 2015-10-13T18:47:32.453-0400 "verbose" : 0, [js_test:auth] 2015-10-13T18:47:32.453-0400 "restart" : undefined, [js_test:auth] 2015-10-13T18:47:32.453-0400 "pathOpts" : { [js_test:auth] 2015-10-13T18:47:32.453-0400 "node" : 1, [js_test:auth] 2015-10-13T18:47:32.453-0400 "set" : "d2" [js_test:auth] 2015-10-13T18:47:32.453-0400 } [js_test:auth] 2015-10-13T18:47:32.453-0400 } [js_test:auth] 2015-10-13T18:47:32.453-0400 ReplSetTest Starting.... [js_test:auth] 2015-10-13T18:47:32.454-0400 Resetting db path '/data/db/job1/mongorunner/d2-1' [js_test:auth] 2015-10-13T18:47:32.454-0400 2015-10-13T18:47:32.454-0400 I - [thread1] shell: started program (sh26103): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 20269 --noprealloc --smallfiles --replSet d2 --dbpath /data/db/job1/mongorunner/d2-1 --nopreallocj --setParameter enableTestCommands=1 [js_test:auth] 2015-10-13T18:47:32.455-0400 2015-10-13T18:47:32.455-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20269, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:32.468-0400 d20269| note: noprealloc may hurt performance in many applications [js_test:auth] 2015-10-13T18:47:32.516-0400 d20269| 2015-10-13T18:47:32.515-0400 I STORAGE [initandlisten] wiredtiger_open config: 
create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0), [js_test:auth] 2015-10-13T18:47:32.655-0400 2015-10-13T18:47:32.655-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20269, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:32.828-0400 d20269| 2015-10-13T18:47:32.828-0400 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger [js_test:auth] 2015-10-13T18:47:32.828-0400 d20269| 2015-10-13T18:47:32.828-0400 I CONTROL [initandlisten] MongoDB starting : pid=26103 port=20269 dbpath=/data/db/job1/mongorunner/d2-1 64-bit host=ubuntu [js_test:auth] 2015-10-13T18:47:32.828-0400 d20269| 2015-10-13T18:47:32.828-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.828-0400 d20269| 2015-10-13T18:47:32.828-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB. [js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.828-0400 I CONTROL [initandlisten] ** Not recommended for production. [js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.828-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine. 
[js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems: [js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options] [js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. [js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. 
[js_test:auth] 2015-10-13T18:47:32.829-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:32.830-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:32.830-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] db version v3.1.10-pre- [js_test:auth] 2015-10-13T18:47:32.830-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1 [js_test:auth] 2015-10-13T18:47:32.830-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014 [js_test:auth] 2015-10-13T18:47:32.830-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] allocator: tcmalloc [js_test:auth] 2015-10-13T18:47:32.830-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] modules: subscription [js_test:auth] 2015-10-13T18:47:32.830-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] build environment: [js_test:auth] 2015-10-13T18:47:32.830-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] distarch: x86_64 [js_test:auth] 2015-10-13T18:47:32.830-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] target_arch: x86_64 [js_test:auth] 2015-10-13T18:47:32.830-0400 d20269| 2015-10-13T18:47:32.829-0400 I CONTROL [initandlisten] options: { net: { port: 20269 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "d2" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/job1/mongorunner/d2-1", mmapv1: { preallocDataFiles: false, smallFiles: true } } } [js_test:auth] 2015-10-13T18:47:32.856-0400 2015-10-13T18:47:32.855-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20269, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:32.955-0400 d20269| 2015-10-13T18:47:32.955-0400 I REPL 
[initandlisten] Did not find local voted for document at startup; NoMatchingDocument Did not find replica set lastVote document in local.replset.election [js_test:auth] 2015-10-13T18:47:32.955-0400 d20269| 2015-10-13T18:47:32.955-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset [js_test:auth] 2015-10-13T18:47:32.955-0400 d20269| 2015-10-13T18:47:32.955-0400 I FTDC [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/d2-1/diagnostic.data' [js_test:auth] 2015-10-13T18:47:33.012-0400 d20267| 2015-10-13T18:47:33.012-0400 I REPL [ReplicationExecutor] syncing from: ubuntu:20265 [js_test:auth] 2015-10-13T18:47:33.012-0400 d20265| 2015-10-13T18:47:33.012-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53502 #17 (11 connections now open) [js_test:auth] 2015-10-13T18:47:33.028-0400 d20265| 2015-10-13T18:47:33.028-0400 I ACCESS [conn17] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:33.028-0400 d20265| 2015-10-13T18:47:33.028-0400 I NETWORK [conn17] end connection 127.0.0.1:53502 (10 connections now open) [js_test:auth] 2015-10-13T18:47:33.029-0400 d20267| 2015-10-13T18:47:33.028-0400 I REPL [SyncSourceFeedback] setting syncSourceFeedback to ubuntu:20265 [js_test:auth] 2015-10-13T18:47:33.029-0400 d20265| 2015-10-13T18:47:33.029-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53503 #18 (11 connections now open) [js_test:auth] 2015-10-13T18:47:33.029-0400 d20265| 2015-10-13T18:47:33.029-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53504 #19 (12 connections now open) [js_test:auth] 2015-10-13T18:47:33.047-0400 d20265| 2015-10-13T18:47:33.047-0400 I ACCESS [conn18] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:33.047-0400 d20265| 
2015-10-13T18:47:33.047-0400 I ACCESS [conn19] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:33.047-0400 d20267| 2015-10-13T18:47:33.047-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20265 [js_test:auth] 2015-10-13T18:47:33.056-0400 2015-10-13T18:47:33.056-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20269, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:33.091-0400 d20269| 2015-10-13T18:47:33.091-0400 I NETWORK [initandlisten] waiting for connections on port 20269 [js_test:auth] 2015-10-13T18:47:33.257-0400 d20269| 2015-10-13T18:47:33.256-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:57617 #1 (1 connection now open) [js_test:auth] 2015-10-13T18:47:33.257-0400 d20269| 2015-10-13T18:47:33.257-0400 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access [js_test:auth] 2015-10-13T18:47:33.257-0400 [ connection to ubuntu:20268, connection to ubuntu:20269 ] [js_test:auth] 2015-10-13T18:47:33.257-0400 ReplSetTest n is : 2 [js_test:auth] 2015-10-13T18:47:33.258-0400 ReplSetTest n: 2 ports: [ 20268, 20269, 20270 ] 20270 number [js_test:auth] 2015-10-13T18:47:33.258-0400 { [js_test:auth] 2015-10-13T18:47:33.258-0400 "useHostName" : true, [js_test:auth] 2015-10-13T18:47:33.258-0400 "oplogSize" : 40, [js_test:auth] 2015-10-13T18:47:33.258-0400 "keyFile" : "jstests/libs/key1", [js_test:auth] 2015-10-13T18:47:33.258-0400 "port" : 20270, [js_test:auth] 2015-10-13T18:47:33.258-0400 "noprealloc" : "", [js_test:auth] 2015-10-13T18:47:33.258-0400 "smallfiles" : "", [js_test:auth] 2015-10-13T18:47:33.258-0400 "replSet" : "d2", [js_test:auth] 2015-10-13T18:47:33.258-0400 "dbpath" : "$set-$node", [js_test:auth] 2015-10-13T18:47:33.258-0400 "verbose" : 0, [js_test:auth] 2015-10-13T18:47:33.258-0400 "restart" : undefined, [js_test:auth] 2015-10-13T18:47:33.258-0400 "pathOpts" : { [js_test:auth] 2015-10-13T18:47:33.259-0400 "node" : 2, 
[js_test:auth] 2015-10-13T18:47:33.259-0400 "set" : "d2" [js_test:auth] 2015-10-13T18:47:33.259-0400 } [js_test:auth] 2015-10-13T18:47:33.259-0400 } [js_test:auth] 2015-10-13T18:47:33.259-0400 ReplSetTest Starting.... [js_test:auth] 2015-10-13T18:47:33.259-0400 Resetting db path '/data/db/job1/mongorunner/d2-2' [js_test:auth] 2015-10-13T18:47:33.260-0400 2015-10-13T18:47:33.259-0400 I - [thread1] shell: started program (sh26286): /media/ssd/mongo1/mongod --oplogSize 40 --keyFile jstests/libs/key1 --port 20270 --noprealloc --smallfiles --replSet d2 --dbpath /data/db/job1/mongorunner/d2-2 --nopreallocj --setParameter enableTestCommands=1 [js_test:auth] 2015-10-13T18:47:33.260-0400 2015-10-13T18:47:33.260-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20270, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:33.276-0400 d20270| note: noprealloc may hurt performance in many applications [js_test:auth] 2015-10-13T18:47:33.293-0400 d20267| 2015-10-13T18:47:33.292-0400 I INDEX [repl writer worker 1] build index on: test.foo properties: { v: 1, key: { x: 1.0 }, name: "x_1", ns: "test.foo" } [js_test:auth] 2015-10-13T18:47:33.293-0400 d20267| 2015-10-13T18:47:33.292-0400 I INDEX [repl writer worker 1] building index using bulk method [js_test:auth] 2015-10-13T18:47:33.302-0400 d20267| 2015-10-13T18:47:33.302-0400 I INDEX [repl writer worker 1] build index done. scanned 0 total records. 
0 secs [js_test:auth] 2015-10-13T18:47:33.324-0400 d20270| 2015-10-13T18:47:33.323-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=18G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0), [js_test:auth] 2015-10-13T18:47:33.460-0400 2015-10-13T18:47:33.460-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20270, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:33.659-0400 d20270| 2015-10-13T18:47:33.659-0400 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger [js_test:auth] 2015-10-13T18:47:33.659-0400 d20270| 2015-10-13T18:47:33.659-0400 I CONTROL [initandlisten] MongoDB starting : pid=26286 port=20270 dbpath=/data/db/job1/mongorunner/d2-2 64-bit host=ubuntu [js_test:auth] 2015-10-13T18:47:33.659-0400 d20270| 2015-10-13T18:47:33.659-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:33.659-0400 d20270| 2015-10-13T18:47:33.659-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (3.1.10-pre-) of MongoDB. [js_test:auth] 2015-10-13T18:47:33.660-0400 d20270| 2015-10-13T18:47:33.659-0400 I CONTROL [initandlisten] ** Not recommended for production. [js_test:auth] 2015-10-13T18:47:33.660-0400 d20270| 2015-10-13T18:47:33.659-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:33.660-0400 d20270| 2015-10-13T18:47:33.659-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:33.660-0400 d20270| 2015-10-13T18:47:33.659-0400 I CONTROL [initandlisten] ** WARNING: You are running on a NUMA machine. 
[js_test:auth] 2015-10-13T18:47:33.660-0400 d20270| 2015-10-13T18:47:33.659-0400 I CONTROL [initandlisten] ** We suggest launching mongod like this to avoid performance problems: [js_test:auth] 2015-10-13T18:47:33.660-0400 d20270| 2015-10-13T18:47:33.659-0400 I CONTROL [initandlisten] ** numactl --interleave=all mongod [other options] [js_test:auth] 2015-10-13T18:47:33.660-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:33.661-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. [js_test:auth] 2015-10-13T18:47:33.661-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:33.661-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:33.661-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. 
[js_test:auth] 2015-10-13T18:47:33.661-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] ** We suggest setting it to 'never' [js_test:auth] 2015-10-13T18:47:33.661-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] [js_test:auth] 2015-10-13T18:47:33.661-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] db version v3.1.10-pre- [js_test:auth] 2015-10-13T18:47:33.661-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] git version: 9c9100212f7f8f3afb5f240d405f853894c376f1 [js_test:auth] 2015-10-13T18:47:33.661-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014 [js_test:auth] 2015-10-13T18:47:33.662-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] allocator: tcmalloc [js_test:auth] 2015-10-13T18:47:33.662-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] modules: subscription [js_test:auth] 2015-10-13T18:47:33.662-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] build environment: [js_test:auth] 2015-10-13T18:47:33.662-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] distarch: x86_64 [js_test:auth] 2015-10-13T18:47:33.662-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] target_arch: x86_64 [js_test:auth] 2015-10-13T18:47:33.662-0400 d20270| 2015-10-13T18:47:33.660-0400 I CONTROL [initandlisten] options: { net: { port: 20270 }, nopreallocj: true, replication: { oplogSizeMB: 40, replSet: "d2" }, security: { keyFile: "jstests/libs/key1" }, setParameter: { enableTestCommands: "1" }, storage: { dbPath: "/data/db/job1/mongorunner/d2-2", mmapv1: { preallocDataFiles: false, smallFiles: true } } } [js_test:auth] 2015-10-13T18:47:33.662-0400 2015-10-13T18:47:33.661-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20270, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:33.763-0400 d20270| 2015-10-13T18:47:33.762-0400 I REPL 
[initandlisten] Did not find local voted for document at startup; NoMatchingDocument Did not find replica set lastVote document in local.replset.election [js_test:auth] 2015-10-13T18:47:33.763-0400 d20270| 2015-10-13T18:47:33.762-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument Did not find replica set configuration document in local.system.replset [js_test:auth] 2015-10-13T18:47:33.763-0400 d20270| 2015-10-13T18:47:33.763-0400 I FTDC [initandlisten] Starting full-time diagnostic data capture with directory '/data/db/job1/mongorunner/d2-2/diagnostic.data' [js_test:auth] 2015-10-13T18:47:33.861-0400 2015-10-13T18:47:33.861-0400 W NETWORK [thread1] Failed to connect to 127.0.0.1:20270, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:47:33.875-0400 d20270| 2015-10-13T18:47:33.875-0400 I NETWORK [initandlisten] waiting for connections on port 20270 [js_test:auth] 2015-10-13T18:47:34.062-0400 d20270| 2015-10-13T18:47:34.061-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:60988 #1 (1 connection now open) [js_test:auth] 2015-10-13T18:47:34.062-0400 d20270| 2015-10-13T18:47:34.062-0400 I ACCESS [conn1] note: no users configured in admin.system.users, allowing localhost access [js_test:auth] 2015-10-13T18:47:34.062-0400 [ [js_test:auth] 2015-10-13T18:47:34.062-0400 connection to ubuntu:20268, [js_test:auth] 2015-10-13T18:47:34.063-0400 connection to ubuntu:20269, [js_test:auth] 2015-10-13T18:47:34.063-0400 connection to ubuntu:20270 [js_test:auth] 2015-10-13T18:47:34.063-0400 ] [js_test:auth] 2015-10-13T18:47:34.063-0400 { [js_test:auth] 2015-10-13T18:47:34.063-0400 "replSetInitiate" : { [js_test:auth] 2015-10-13T18:47:34.063-0400 "_id" : "d2", [js_test:auth] 2015-10-13T18:47:34.063-0400 "members" : [ [js_test:auth] 2015-10-13T18:47:34.063-0400 { [js_test:auth] 2015-10-13T18:47:34.064-0400 "_id" : 0, [js_test:auth] 2015-10-13T18:47:34.064-0400 "host" : "ubuntu:20268" 
[js_test:auth] 2015-10-13T18:47:34.064-0400 }, [js_test:auth] 2015-10-13T18:47:34.064-0400 { [js_test:auth] 2015-10-13T18:47:34.064-0400 "_id" : 1, [js_test:auth] 2015-10-13T18:47:34.064-0400 "host" : "ubuntu:20269" [js_test:auth] 2015-10-13T18:47:34.064-0400 }, [js_test:auth] 2015-10-13T18:47:34.064-0400 { [js_test:auth] 2015-10-13T18:47:34.064-0400 "_id" : 2, [js_test:auth] 2015-10-13T18:47:34.064-0400 "host" : "ubuntu:20270" [js_test:auth] 2015-10-13T18:47:34.064-0400 } [js_test:auth] 2015-10-13T18:47:34.064-0400 ] [js_test:auth] 2015-10-13T18:47:34.064-0400 } [js_test:auth] 2015-10-13T18:47:34.064-0400 } [js_test:auth] 2015-10-13T18:47:34.065-0400 d20268| 2015-10-13T18:47:34.063-0400 I REPL [conn1] replSetInitiate admin command received from client [js_test:auth] 2015-10-13T18:47:34.065-0400 d20268| 2015-10-13T18:47:34.064-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53787 #2 (2 connections now open) [js_test:auth] 2015-10-13T18:47:34.082-0400 d20268| 2015-10-13T18:47:34.082-0400 I ACCESS [conn2] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:34.083-0400 d20268| 2015-10-13T18:47:34.082-0400 I NETWORK [conn2] end connection 127.0.0.1:53787 (1 connection now open) [js_test:auth] 2015-10-13T18:47:34.083-0400 d20269| 2015-10-13T18:47:34.083-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47747 #2 (2 connections now open) [js_test:auth] 2015-10-13T18:47:34.101-0400 d20269| 2015-10-13T18:47:34.101-0400 I ACCESS [conn2] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:34.101-0400 d20269| 2015-10-13T18:47:34.101-0400 I NETWORK [conn2] end connection 127.0.0.1:47747 (1 connection now open) [js_test:auth] 2015-10-13T18:47:34.101-0400 d20270| 2015-10-13T18:47:34.101-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50124 #2 (2 connections now open) [js_test:auth] 2015-10-13T18:47:34.117-0400 d20270| 2015-10-13T18:47:34.117-0400 I 
ACCESS [conn2] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:34.117-0400 d20268| 2015-10-13T18:47:34.117-0400 I REPL [conn1] replSetInitiate config object with 3 members parses ok [js_test:auth] 2015-10-13T18:47:34.117-0400 d20270| 2015-10-13T18:47:34.117-0400 I NETWORK [conn2] end connection 127.0.0.1:50124 (1 connection now open) [js_test:auth] 2015-10-13T18:47:34.118-0400 d20269| 2015-10-13T18:47:34.117-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47756 #3 (2 connections now open) [js_test:auth] 2015-10-13T18:47:34.118-0400 d20270| 2015-10-13T18:47:34.117-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50127 #3 (2 connections now open) [js_test:auth] 2015-10-13T18:47:34.150-0400 d20270| 2015-10-13T18:47:34.149-0400 I ACCESS [conn3] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:34.150-0400 d20268| 2015-10-13T18:47:34.150-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20270 [js_test:auth] 2015-10-13T18:47:34.150-0400 d20269| 2015-10-13T18:47:34.150-0400 I ACCESS [conn3] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:34.150-0400 d20268| 2015-10-13T18:47:34.150-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20269 [js_test:auth] 2015-10-13T18:47:34.150-0400 d20268| 2015-10-13T18:47:34.150-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53807 #3 (2 connections now open) [js_test:auth] 2015-10-13T18:47:34.151-0400 d20268| 2015-10-13T18:47:34.150-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53808 #4 (3 connections now open) [js_test:auth] 2015-10-13T18:47:34.167-0400 d20268| 2015-10-13T18:47:34.167-0400 I ACCESS [conn3] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:34.167-0400 d20269| 2015-10-13T18:47:34.167-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20268 
[js_test:auth] 2015-10-13T18:47:34.168-0400 d20268| 2015-10-13T18:47:34.168-0400 I ACCESS [conn4] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:34.168-0400 d20270| 2015-10-13T18:47:34.168-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20268 [js_test:auth] 2015-10-13T18:47:34.267-0400 d20268| 2015-10-13T18:47:34.266-0400 I REPL [ReplicationExecutor] New replica set config in use: { _id: "d2", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20268", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20269", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20270", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } } [js_test:auth] 2015-10-13T18:47:34.267-0400 d20268| 2015-10-13T18:47:34.266-0400 I REPL [ReplicationExecutor] This node is ubuntu:20268 in the config [js_test:auth] 2015-10-13T18:47:34.267-0400 d20268| 2015-10-13T18:47:34.266-0400 I REPL [ReplicationExecutor] transition to STARTUP2 [js_test:auth] 2015-10-13T18:47:34.267-0400 d20268| 2015-10-13T18:47:34.266-0400 I REPL [conn1] ****** [js_test:auth] 2015-10-13T18:47:34.268-0400 d20268| 2015-10-13T18:47:34.266-0400 I REPL [conn1] creating replication oplog of size: 40MB... 
[js_test:auth] 2015-10-13T18:47:34.268-0400 d20268| 2015-10-13T18:47:34.267-0400 I REPL [ReplicationExecutor] Member ubuntu:20269 is now in state STARTUP [js_test:auth] 2015-10-13T18:47:34.268-0400 d20268| 2015-10-13T18:47:34.267-0400 I REPL [ReplicationExecutor] Member ubuntu:20270 is now in state STARTUP [js_test:auth] 2015-10-13T18:47:34.315-0400 d20268| 2015-10-13T18:47:34.315-0400 I STORAGE [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs [js_test:auth] 2015-10-13T18:47:34.315-0400 d20268| 2015-10-13T18:47:34.315-0400 I STORAGE [conn1] Scanning the oplog to determine where to place markers for when to truncate [js_test:auth] 2015-10-13T18:47:34.607-0400 d20268| 2015-10-13T18:47:34.607-0400 I REPL [conn1] ****** [js_test:auth] 2015-10-13T18:47:34.607-0400 d20268| 2015-10-13T18:47:34.607-0400 I REPL [conn1] Starting replication applier threads [js_test:auth] 2015-10-13T18:47:34.607-0400 d20268| 2015-10-13T18:47:34.607-0400 I REPL [ReplicationExecutor] transition to RECOVERING [js_test:auth] 2015-10-13T18:47:34.608-0400 d20268| 2015-10-13T18:47:34.607-0400 I COMMAND [conn1] command local.oplog.rs command: replSetInitiate { replSetInitiate: { _id: "d2", members: [ { _id: 0.0, host: "ubuntu:20268" }, { _id: 1.0, host: "ubuntu:20269" }, { _id: 2.0, host: "ubuntu:20270" } ] } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{ Global: { acquireCount: { r: 8, w: 4, W: 2 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 79 } }, Database: { acquireCount: { r: 1, w: 2, W: 2 } }, Collection: { acquireCount: { r: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 544ms [js_test:auth] 2015-10-13T18:47:34.608-0400 d20268| 2015-10-13T18:47:34.608-0400 I REPL [ReplicationExecutor] transition to SECONDARY [js_test:auth] 2015-10-13T18:47:36.168-0400 d20268| 2015-10-13T18:47:36.168-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53919 #5 (4 connections now open) 
[js_test:auth] 2015-10-13T18:47:36.169-0400 d20268| 2015-10-13T18:47:36.169-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53920 #6 (5 connections now open) [js_test:auth] 2015-10-13T18:47:36.186-0400 d20268| 2015-10-13T18:47:36.186-0400 I ACCESS [conn6] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:36.186-0400 d20268| 2015-10-13T18:47:36.186-0400 I NETWORK [conn6] end connection 127.0.0.1:53920 (4 connections now open) [js_test:auth] 2015-10-13T18:47:36.186-0400 d20269| 2015-10-13T18:47:36.186-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47878 #4 (3 connections now open) [js_test:auth] 2015-10-13T18:47:36.186-0400 d20268| 2015-10-13T18:47:36.186-0400 I ACCESS [conn5] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:36.186-0400 d20268| 2015-10-13T18:47:36.186-0400 I NETWORK [conn5] end connection 127.0.0.1:53919 (3 connections now open) [js_test:auth] 2015-10-13T18:47:36.187-0400 d20269| 2015-10-13T18:47:36.187-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47879 #5 (4 connections now open) [js_test:auth] 2015-10-13T18:47:36.203-0400 d20269| 2015-10-13T18:47:36.203-0400 I ACCESS [conn4] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:36.203-0400 d20269| 2015-10-13T18:47:36.203-0400 I NETWORK [conn4] end connection 127.0.0.1:47878 (3 connections now open) [js_test:auth] 2015-10-13T18:47:36.203-0400 d20270| 2015-10-13T18:47:36.203-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50250 #4 (3 connections now open) [js_test:auth] 2015-10-13T18:47:36.204-0400 d20269| 2015-10-13T18:47:36.204-0400 I ACCESS [conn5] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:36.204-0400 d20269| 2015-10-13T18:47:36.204-0400 I NETWORK [conn5] end connection 127.0.0.1:47879 (2 connections now open) [js_test:auth] 2015-10-13T18:47:36.204-0400 d20270| 
2015-10-13T18:47:36.204-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50251 #5 (4 connections now open) [js_test:auth] 2015-10-13T18:47:36.220-0400 d20270| 2015-10-13T18:47:36.220-0400 I ACCESS [conn4] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:36.220-0400 d20270| 2015-10-13T18:47:36.220-0400 I NETWORK [conn4] end connection 127.0.0.1:50250 (3 connections now open) [js_test:auth] 2015-10-13T18:47:36.221-0400 d20270| 2015-10-13T18:47:36.221-0400 I ACCESS [conn5] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:36.221-0400 d20270| 2015-10-13T18:47:36.221-0400 I NETWORK [conn5] end connection 127.0.0.1:50251 (2 connections now open) [js_test:auth] 2015-10-13T18:47:36.328-0400 d20270| 2015-10-13T18:47:36.328-0400 I REPL [replExecDBWorker-2] Starting replication applier threads [js_test:auth] 2015-10-13T18:47:36.328-0400 d20269| 2015-10-13T18:47:36.328-0400 I REPL [replExecDBWorker-0] Starting replication applier threads [js_test:auth] 2015-10-13T18:47:36.328-0400 d20269| 2015-10-13T18:47:36.328-0400 W REPL [rsSync] did not receive a valid config yet [js_test:auth] 2015-10-13T18:47:36.328-0400 d20270| 2015-10-13T18:47:36.328-0400 W REPL [rsSync] did not receive a valid config yet [js_test:auth] 2015-10-13T18:47:36.329-0400 d20269| 2015-10-13T18:47:36.329-0400 I REPL [ReplicationExecutor] New replica set config in use: { _id: "d2", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20268", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20269", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20270", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, 
heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } } [js_test:auth] 2015-10-13T18:47:36.329-0400 d20269| 2015-10-13T18:47:36.329-0400 I REPL [ReplicationExecutor] This node is ubuntu:20269 in the config [js_test:auth] 2015-10-13T18:47:36.330-0400 d20269| 2015-10-13T18:47:36.329-0400 I REPL [ReplicationExecutor] transition to STARTUP2 [js_test:auth] 2015-10-13T18:47:36.330-0400 d20270| 2015-10-13T18:47:36.329-0400 I REPL [ReplicationExecutor] New replica set config in use: { _id: "d2", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "ubuntu:20268", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "ubuntu:20269", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "ubuntu:20270", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 5000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 } } } [js_test:auth] 2015-10-13T18:47:36.330-0400 d20270| 2015-10-13T18:47:36.329-0400 I REPL [ReplicationExecutor] This node is ubuntu:20270 in the config [js_test:auth] 2015-10-13T18:47:36.330-0400 d20270| 2015-10-13T18:47:36.329-0400 I REPL [ReplicationExecutor] transition to STARTUP2 [js_test:auth] 2015-10-13T18:47:36.330-0400 d20269| 2015-10-13T18:47:36.329-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:47896 #6 (3 connections now open) [js_test:auth] 2015-10-13T18:47:36.330-0400 d20270| 2015-10-13T18:47:36.329-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50266 #6 (3 connections now open) [js_test:auth] 2015-10-13T18:47:36.330-0400 d20269| 2015-10-13T18:47:36.329-0400 I REPL [ReplicationExecutor] Member ubuntu:20268 is now in state 
SECONDARY [js_test:auth] 2015-10-13T18:47:36.330-0400 d20270| 2015-10-13T18:47:36.329-0400 I REPL [ReplicationExecutor] Member ubuntu:20268 is now in state SECONDARY [js_test:auth] 2015-10-13T18:47:36.346-0400 d20270| 2015-10-13T18:47:36.346-0400 I ACCESS [conn6] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:36.347-0400 d20269| 2015-10-13T18:47:36.346-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20270 [js_test:auth] 2015-10-13T18:47:36.347-0400 d20269| 2015-10-13T18:47:36.347-0400 I REPL [ReplicationExecutor] Member ubuntu:20270 is now in state STARTUP2 [js_test:auth] 2015-10-13T18:47:36.354-0400 d20269| 2015-10-13T18:47:36.354-0400 I ACCESS [conn6] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:36.354-0400 d20270| 2015-10-13T18:47:36.354-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20269 [js_test:auth] 2015-10-13T18:47:36.354-0400 d20270| 2015-10-13T18:47:36.354-0400 I REPL [ReplicationExecutor] Member ubuntu:20269 is now in state STARTUP2 [js_test:auth] 2015-10-13T18:47:36.608-0400 d20268| 2015-10-13T18:47:36.608-0400 I REPL [ReplicationExecutor] Member ubuntu:20269 is now in state STARTUP2 [js_test:auth] 2015-10-13T18:47:36.608-0400 d20268| 2015-10-13T18:47:36.608-0400 I REPL [ReplicationExecutor] Member ubuntu:20270 is now in state STARTUP2 [js_test:auth] 2015-10-13T18:47:37.328-0400 d20270| 2015-10-13T18:47:37.328-0400 I REPL [rsSync] ****** [js_test:auth] 2015-10-13T18:47:37.329-0400 d20269| 2015-10-13T18:47:37.328-0400 I REPL [rsSync] ****** [js_test:auth] 2015-10-13T18:47:37.329-0400 d20270| 2015-10-13T18:47:37.328-0400 I REPL [rsSync] creating replication oplog of size: 40MB... [js_test:auth] 2015-10-13T18:47:37.329-0400 d20269| 2015-10-13T18:47:37.328-0400 I REPL [rsSync] creating replication oplog of size: 40MB... 
[js_test:auth] 2015-10-13T18:47:37.373-0400 d20270| 2015-10-13T18:47:37.373-0400 I STORAGE [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs [js_test:auth] 2015-10-13T18:47:37.374-0400 d20269| 2015-10-13T18:47:37.373-0400 I STORAGE [rsSync] Starting WiredTigerRecordStoreThread local.oplog.rs [js_test:auth] 2015-10-13T18:47:37.374-0400 d20269| 2015-10-13T18:47:37.373-0400 I STORAGE [rsSync] Scanning the oplog to determine where to place markers for when to truncate [js_test:auth] 2015-10-13T18:47:37.374-0400 d20270| 2015-10-13T18:47:37.373-0400 I STORAGE [rsSync] Scanning the oplog to determine where to place markers for when to truncate [js_test:auth] 2015-10-13T18:47:37.699-0400 d20270| 2015-10-13T18:47:37.698-0400 I REPL [rsSync] ****** [js_test:auth] 2015-10-13T18:47:37.699-0400 d20269| 2015-10-13T18:47:37.698-0400 I REPL [rsSync] ****** [js_test:auth] 2015-10-13T18:47:37.699-0400 d20270| 2015-10-13T18:47:37.698-0400 I REPL [rsSync] initial sync pending [js_test:auth] 2015-10-13T18:47:37.699-0400 d20269| 2015-10-13T18:47:37.698-0400 I REPL [rsSync] initial sync pending [js_test:auth] 2015-10-13T18:47:37.781-0400 s20264| 2015-10-13T18:47:37.781-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d1 [js_test:auth] 2015-10-13T18:47:37.782-0400 s20264| 2015-10-13T18:47:37.781-0400 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:ubuntu:20266 [js_test:auth] 2015-10-13T18:47:37.782-0400 s20264| 2015-10-13T18:47:37.781-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:47:37.782-0400 s20264| 2015-10-13T18:47:37.782-0400 D NETWORK [ReplicaSetMonitorWatcher] connected to server ubuntu:20266 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:37.782-0400 d20266| 2015-10-13T18:47:37.782-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35722 #9 (5 connections now open) [js_test:auth] 2015-10-13T18:47:37.782-0400 s20264| 2015-10-13T18:47:37.782-0400 D NETWORK [ReplicaSetMonitorWatcher] 
connected connection! [js_test:auth] 2015-10-13T18:47:37.782-0400 s20264| 2015-10-13T18:47:37.782-0400 D SHARDING [ReplicaSetMonitorWatcher] calling onCreate auth for ubuntu:20266 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:37.796-0400 d20270| 2015-10-13T18:47:37.795-0400 I REPL [ReplicationExecutor] syncing from: ubuntu:20268 [js_test:auth] 2015-10-13T18:47:37.796-0400 d20268| 2015-10-13T18:47:37.796-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:54030 #7 (4 connections now open) [js_test:auth] 2015-10-13T18:47:37.798-0400 d20266| 2015-10-13T18:47:37.798-0400 I ACCESS [conn9] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:37.798-0400 s20264| 2015-10-13T18:47:37.798-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: auth-configRS [js_test:auth] 2015-10-13T18:47:37.798-0400 s20264| 2015-10-13T18:47:37.798-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set auth-configRS [js_test:auth] 2015-10-13T18:47:37.798-0400 s20264| 2015-10-13T18:47:37.798-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:47:37.798-0400 s20264| 2015-10-13T18:47:37.798-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20262, no events [js_test:auth] 2015-10-13T18:47:37.799-0400 s20264| 2015-10-13T18:47:37.798-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20261, no events [js_test:auth] 2015-10-13T18:47:37.804-0400 d20269| 2015-10-13T18:47:37.804-0400 I REPL [ReplicationExecutor] syncing from: ubuntu:20268 [js_test:auth] 2015-10-13T18:47:37.805-0400 d20268| 2015-10-13T18:47:37.805-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:54031 #8 (5 connections now open) [js_test:auth] 2015-10-13T18:47:37.811-0400 d20268| 2015-10-13T18:47:37.811-0400 I ACCESS [conn7] Successfully authenticated as principal __system on local 
[js_test:auth] 2015-10-13T18:47:37.822-0400 d20268| 2015-10-13T18:47:37.822-0400 I ACCESS [conn8] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:37.828-0400 d20270| 2015-10-13T18:47:37.827-0400 I REPL [rsSync] initial sync drop all databases [js_test:auth] 2015-10-13T18:47:37.828-0400 d20270| 2015-10-13T18:47:37.827-0400 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1 [js_test:auth] 2015-10-13T18:47:37.828-0400 d20270| 2015-10-13T18:47:37.827-0400 I REPL [rsSync] initial sync clone all databases [js_test:auth] 2015-10-13T18:47:37.828-0400 d20270| 2015-10-13T18:47:37.828-0400 I REPL [rsSync] initial sync data copy, starting syncup [js_test:auth] 2015-10-13T18:47:37.828-0400 d20270| 2015-10-13T18:47:37.828-0400 I REPL [rsSync] oplog sync 1 of 3 [js_test:auth] 2015-10-13T18:47:37.828-0400 d20270| 2015-10-13T18:47:37.828-0400 I REPL [rsSync] oplog sync 2 of 3 [js_test:auth] 2015-10-13T18:47:37.829-0400 d20270| 2015-10-13T18:47:37.828-0400 I REPL [rsSync] initial sync building indexes [js_test:auth] 2015-10-13T18:47:37.829-0400 d20270| 2015-10-13T18:47:37.828-0400 I REPL [rsSync] oplog sync 3 of 3 [js_test:auth] 2015-10-13T18:47:37.829-0400 d20270| 2015-10-13T18:47:37.829-0400 I REPL [rsSync] initial sync finishing up [js_test:auth] 2015-10-13T18:47:37.829-0400 d20270| 2015-10-13T18:47:37.829-0400 I REPL [rsSync] set minValid=(term: 0, timestamp: Oct 13 18:47:34:1) [js_test:auth] 2015-10-13T18:47:37.834-0400 d20269| 2015-10-13T18:47:37.834-0400 I REPL [rsSync] initial sync drop all databases [js_test:auth] 2015-10-13T18:47:37.834-0400 d20269| 2015-10-13T18:47:37.834-0400 I STORAGE [rsSync] dropAllDatabasesExceptLocal 1 [js_test:auth] 2015-10-13T18:47:37.835-0400 d20269| 2015-10-13T18:47:37.834-0400 I REPL [rsSync] initial sync clone all databases [js_test:auth] 2015-10-13T18:47:37.835-0400 d20269| 2015-10-13T18:47:37.834-0400 I REPL [rsSync] initial sync data copy, starting syncup [js_test:auth] 2015-10-13T18:47:37.835-0400 
d20269| 2015-10-13T18:47:37.834-0400 I REPL [rsSync] oplog sync 1 of 3 [js_test:auth] 2015-10-13T18:47:37.835-0400 d20269| 2015-10-13T18:47:37.834-0400 I REPL [rsSync] oplog sync 2 of 3 [js_test:auth] 2015-10-13T18:47:37.835-0400 d20269| 2015-10-13T18:47:37.834-0400 I REPL [rsSync] initial sync building indexes [js_test:auth] 2015-10-13T18:47:37.835-0400 d20269| 2015-10-13T18:47:37.834-0400 I REPL [rsSync] oplog sync 3 of 3 [js_test:auth] 2015-10-13T18:47:37.835-0400 d20269| 2015-10-13T18:47:37.835-0400 I REPL [rsSync] initial sync finishing up [js_test:auth] 2015-10-13T18:47:37.835-0400 d20269| 2015-10-13T18:47:37.835-0400 I REPL [rsSync] set minValid=(term: 0, timestamp: Oct 13 18:47:34:1) [js_test:auth] 2015-10-13T18:47:37.848-0400 d20270| 2015-10-13T18:47:37.848-0400 I REPL [rsSync] initial sync done [js_test:auth] 2015-10-13T18:47:37.848-0400 d20269| 2015-10-13T18:47:37.848-0400 I REPL [rsSync] initial sync done [js_test:auth] 2015-10-13T18:47:37.849-0400 d20268| 2015-10-13T18:47:37.849-0400 I NETWORK [conn7] end connection 127.0.0.1:54030 (4 connections now open) [js_test:auth] 2015-10-13T18:47:37.849-0400 d20270| 2015-10-13T18:47:37.849-0400 I REPL [ReplicationExecutor] transition to RECOVERING [js_test:auth] 2015-10-13T18:47:37.849-0400 d20268| 2015-10-13T18:47:37.849-0400 I NETWORK [conn8] end connection 127.0.0.1:54031 (3 connections now open) [js_test:auth] 2015-10-13T18:47:37.849-0400 d20269| 2015-10-13T18:47:37.849-0400 I REPL [ReplicationExecutor] transition to RECOVERING [js_test:auth] 2015-10-13T18:47:37.850-0400 d20270| 2015-10-13T18:47:37.850-0400 I REPL [ReplicationExecutor] transition to SECONDARY [js_test:auth] 2015-10-13T18:47:37.850-0400 d20269| 2015-10-13T18:47:37.850-0400 I REPL [ReplicationExecutor] transition to SECONDARY [js_test:auth] 2015-10-13T18:47:37.878-0400 s20264| 2015-10-13T18:47:37.878-0400 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:07.877-0400 cmd:{ 
findAndModify: "lockpings", query: { _id: "ubuntu:20264:1444776427:399327856" }, update: { $set: { ping: new Date(1444776457877) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:37.878-0400 s20264| 2015-10-13T18:47:37.878-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:37.880-0400 s20264| 2015-10-13T18:47:37.880-0400 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:48:07.880-0400 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:37.880-0400 s20264| 2015-10-13T18:47:37.880-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:37.880-0400 s20264| 2015-10-13T18:47:37.880-0400 I ACCESS [UserCacheInvalidator] User cache generation changed from 561d89eac4c5976d049255ce to 561d8a03c4c5976d049255d8; invalidating user cache [js_test:auth] 2015-10-13T18:47:38.024-0400 s20264| 2015-10-13T18:47:38.023-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:08.023-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776458023), up: 31, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:38.024-0400 s20264| 2015-10-13T18:47:38.023-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:38.040-0400 s20264| 2015-10-13T18:47:38.040-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776458000|1, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:38.040-0400 s20264| 
2015-10-13T18:47:38.040-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:38.041-0400 s20264| 2015-10-13T18:47:38.040-0400 D SHARDING [Balancer] found 1 shards listed on config server(s) [js_test:auth] 2015-10-13T18:47:38.041-0400 s20264| 2015-10-13T18:47:38.040-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776458000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:38.041-0400 s20264| 2015-10-13T18:47:38.040-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:38.041-0400 s20264| 2015-10-13T18:47:38.041-0400 D SHARDING [Balancer] Refreshing MaxChunkSize: 1MB [js_test:auth] 2015-10-13T18:47:38.042-0400 s20264| 2015-10-13T18:47:38.041-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776458000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:38.042-0400 s20264| 2015-10-13T18:47:38.041-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:38.042-0400 s20264| 2015-10-13T18:47:38.041-0400 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:auth] 2015-10-13T18:47:38.043-0400 s20264| 2015-10-13T18:47:38.041-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:08.041-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776458041), up: 31, waiting: true, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 
} [js_test:auth] 2015-10-13T18:47:38.043-0400 s20264| 2015-10-13T18:47:38.041-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:38.330-0400 d20270| 2015-10-13T18:47:38.329-0400 I REPL [ReplicationExecutor] could not find member to sync from [js_test:auth] 2015-10-13T18:47:38.331-0400 d20269| 2015-10-13T18:47:38.329-0400 I REPL [ReplicationExecutor] could not find member to sync from [js_test:auth] 2015-10-13T18:47:38.331-0400 d20270| 2015-10-13T18:47:38.330-0400 I REPL [ReplicationExecutor] Member ubuntu:20269 is now in state SECONDARY [js_test:auth] 2015-10-13T18:47:38.331-0400 d20269| 2015-10-13T18:47:38.330-0400 I REPL [ReplicationExecutor] Member ubuntu:20270 is now in state SECONDARY [js_test:auth] 2015-10-13T18:47:38.609-0400 d20268| 2015-10-13T18:47:38.608-0400 I REPL [ReplicationExecutor] Member ubuntu:20269 is now in state SECONDARY [js_test:auth] 2015-10-13T18:47:38.609-0400 d20268| 2015-10-13T18:47:38.608-0400 I REPL [ReplicationExecutor] Member ubuntu:20270 is now in state SECONDARY [js_test:auth] 2015-10-13T18:47:40.243-0400 d20268| 2015-10-13T18:47:40.243-0400 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected [js_test:auth] 2015-10-13T18:47:40.388-0400 d20270| 2015-10-13T18:47:40.387-0400 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: "d2", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1444776454000|1, t: 0 } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, W: 2 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 143ms [js_test:auth] 2015-10-13T18:47:40.389-0400 d20269| 2015-10-13T18:47:40.387-0400 I COMMAND [conn3] command local.replset.election command: replSetRequestVotes { replSetRequestVotes: 1, setName: 
"d2", dryRun: true, term: 0, candidateIndex: 0, configVersion: 1, lastCommittedOp: { ts: Timestamp 1444776454000|1, t: 0 } } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 4, w: 2 } }, Database: { acquireCount: { r: 1, W: 2 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 143ms [js_test:auth] 2015-10-13T18:47:40.389-0400 d20268| 2015-10-13T18:47:40.387-0400 I REPL [ReplicationExecutor] dry election run succeeded, running for election [js_test:auth] 2015-10-13T18:47:40.517-0400 d20268| 2015-10-13T18:47:40.517-0400 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 1 [js_test:auth] 2015-10-13T18:47:40.517-0400 d20268| 2015-10-13T18:47:40.517-0400 I REPL [ReplicationExecutor] transition to PRIMARY [js_test:auth] 2015-10-13T18:47:40.608-0400 d20268| 2015-10-13T18:47:40.608-0400 I REPL [rsSync] transition to primary complete; database writes are now permitted [js_test:auth] 2015-10-13T18:47:40.738-0400 d20268| 2015-10-13T18:47:40.738-0400 I ACCESS [conn1] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:40.755-0400 d20269| 2015-10-13T18:47:40.755-0400 I ACCESS [conn1] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:40.771-0400 d20270| 2015-10-13T18:47:40.771-0400 I ACCESS [conn1] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:40.773-0400 adding shard d2/ubuntu:20268,ubuntu:20269,ubuntu:20270 [js_test:auth] 2015-10-13T18:47:40.775-0400 s20264| 2015-10-13T18:47:40.775-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:48:10.775-0400 cmd:{ getParameter: 1, authSchemaVersion: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:40.775-0400 s20264| 2015-10-13T18:47:40.775-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 
2015-10-13T18:47:40.775-0400 s20264| 2015-10-13T18:47:40.775-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:48:10.775-0400 cmd:{ usersInfo: [ { user: "foo", db: "admin" } ], showPrivileges: true, showCredentials: true, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:40.775-0400 s20264| 2015-10-13T18:47:40.775-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:40.797-0400 s20264| 2015-10-13T18:47:40.797-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:48:10.797-0400 cmd:{ usersInfo: [ { user: "foo", db: "admin" } ], showPrivileges: true, showCredentials: true, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:40.797-0400 s20264| 2015-10-13T18:47:40.797-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:40.798-0400 s20264| 2015-10-13T18:47:40.798-0400 I ACCESS [conn1] Successfully authenticated as principal foo on admin [js_test:auth] 2015-10-13T18:47:40.799-0400 logged in [js_test:auth] 2015-10-13T18:47:40.799-0400 s20264| 2015-10-13T18:47:40.799-0400 I NETWORK [conn1] Starting new replica set monitor for d2/ubuntu:20268,ubuntu:20269,ubuntu:20270 [js_test:auth] 2015-10-13T18:47:40.799-0400 s20264| 2015-10-13T18:47:40.799-0400 D NETWORK [conn1] Starting new refresh of replica set d2 [js_test:auth] 2015-10-13T18:47:40.799-0400 s20264| 2015-10-13T18:47:40.799-0400 D NETWORK [conn1] creating new connection to:ubuntu:20268 [js_test:auth] 2015-10-13T18:47:40.799-0400 s20264| 2015-10-13T18:47:40.799-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:47:40.799-0400 s20264| 2015-10-13T18:47:40.799-0400 D NETWORK [conn1] connected to server ubuntu:20268 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:40.799-0400 d20268| 2015-10-13T18:47:40.799-0400 I NETWORK [initandlisten] connection accepted from 
127.0.0.1:54209 #9 (4 connections now open) [js_test:auth] 2015-10-13T18:47:40.799-0400 s20264| 2015-10-13T18:47:40.799-0400 D NETWORK [conn1] connected connection! [js_test:auth] 2015-10-13T18:47:40.799-0400 s20264| 2015-10-13T18:47:40.799-0400 D SHARDING [conn1] calling onCreate auth for ubuntu:20268 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:40.821-0400 d20268| 2015-10-13T18:47:40.821-0400 I ACCESS [conn9] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:40.821-0400 s20264| 2015-10-13T18:47:40.821-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:48:10.821-0400 cmd:{ isdbgrid: 1 } [js_test:auth] 2015-10-13T18:47:40.821-0400 s20264| 2015-10-13T18:47:40.821-0400 D ASIO [NetworkInterfaceASIO] Connecting to ubuntu:20268 [js_test:auth] 2015-10-13T18:47:40.821-0400 s20264| 2015-10-13T18:47:40.821-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:40.821-0400 d20268| 2015-10-13T18:47:40.821-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:54210 #10 (5 connections now open) [js_test:auth] 2015-10-13T18:47:40.822-0400 s20264| 2015-10-13T18:47:40.822-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:40.825-0400 d20267| 2015-10-13T18:47:40.825-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:48118 #8 (5 connections now open) [js_test:auth] 2015-10-13T18:47:40.839-0400 s20264| 2015-10-13T18:47:40.839-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:40.839-0400 s20264| 2015-10-13T18:47:40.839-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:40.840-0400 d20268| 2015-10-13T18:47:40.839-0400 I ACCESS [conn10] Successfully authenticated as principal __system on local [js_test:auth] 
2015-10-13T18:47:40.840-0400 s20264| 2015-10-13T18:47:40.840-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20268 [js_test:auth] 2015-10-13T18:47:40.840-0400 s20264| 2015-10-13T18:47:40.840-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:40.840-0400 s20264| 2015-10-13T18:47:40.840-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:48:10.840-0400 cmd:{ isMaster: 1 } [js_test:auth] 2015-10-13T18:47:40.840-0400 s20264| 2015-10-13T18:47:40.840-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:40.841-0400 s20264| 2015-10-13T18:47:40.840-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:48:10.840-0400 cmd:{ replSetGetStatus: 1 } [js_test:auth] 2015-10-13T18:47:40.841-0400 s20264| 2015-10-13T18:47:40.840-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:40.841-0400 s20264| 2015-10-13T18:47:40.840-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:48:10.840-0400 cmd:{ listDatabases: 1 } [js_test:auth] 2015-10-13T18:47:40.841-0400 s20264| 2015-10-13T18:47:40.840-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:40.841-0400 s20264| 2015-10-13T18:47:40.841-0400 I SHARDING [conn1] going to add shard: { _id: "d2", host: "d2/ubuntu:20268,ubuntu:20269,ubuntu:20270" } [js_test:auth] 2015-10-13T18:47:40.841-0400 s20264| 2015-10-13T18:47:40.841-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:10.841-0400 cmd:{ insert: "shards", documents: [ { _id: "d2", host: "d2/ubuntu:20268,ubuntu:20269,ubuntu:20270" } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 
2015-10-13T18:47:40.842-0400 s20264| 2015-10-13T18:47:40.841-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:40.843-0400 d20267| 2015-10-13T18:47:40.843-0400 I ACCESS [conn8] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:40.844-0400 d20265| 2015-10-13T18:47:40.844-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53991 #20 (13 connections now open) [js_test:auth] 2015-10-13T18:47:40.861-0400 s20264| 2015-10-13T18:47:40.861-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776460000|1, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:40.861-0400 s20264| 2015-10-13T18:47:40.861-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:40.862-0400 s20264| 2015-10-13T18:47:40.861-0400 D SHARDING [conn1] found 2 shards listed on config server(s) [js_test:auth] 2015-10-13T18:47:40.862-0400 s20264| 2015-10-13T18:47:40.862-0400 I SHARDING [conn1] about to log metadata event: { _id: "ubuntu-2015-10-13T18:47:40.862-0400-561d8a0cc06b51335e5d689c", server: "ubuntu", clientAddr: "127.0.0.1:54935", time: new Date(1444776460862), what: "addShard", ns: "", details: { name: "d2", host: "d2/ubuntu:20268,ubuntu:20269,ubuntu:20270" } } [js_test:auth] 2015-10-13T18:47:40.862-0400 s20264| 2015-10-13T18:47:40.862-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:10.862-0400 cmd:{ insert: "changelog", documents: [ { _id: "ubuntu-2015-10-13T18:47:40.862-0400-561d8a0cc06b51335e5d689c", server: "ubuntu", clientAddr: "127.0.0.1:54935", time: new Date(1444776460862), what: "addShard", ns: "", details: { name: "d2", host: "d2/ubuntu:20268,ubuntu:20269,ubuntu:20270" } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } 
[js_test:auth] 2015-10-13T18:47:40.862-0400 s20264| 2015-10-13T18:47:40.862-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:40.866-0400 d20265| 2015-10-13T18:47:40.866-0400 I ACCESS [conn20] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:40.866-0400 d20266| 2015-10-13T18:47:40.866-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:35907 #10 (6 connections now open) [js_test:auth] 2015-10-13T18:47:40.885-0400 d20266| 2015-10-13T18:47:40.885-0400 I ACCESS [conn10] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:40.886-0400 Awaiting ubuntu:20265 to be { "ok" : true } for connection to localhost:20264 (rs: undefined) [js_test:auth] 2015-10-13T18:47:40.888-0400 { [js_test:auth] 2015-10-13T18:47:40.888-0400 "d1" : { [js_test:auth] 2015-10-13T18:47:40.888-0400 "hosts" : [ [js_test:auth] 2015-10-13T18:47:40.888-0400 { [js_test:auth] 2015-10-13T18:47:40.888-0400 "addr" : "ubuntu:20265", [js_test:auth] 2015-10-13T18:47:40.888-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.888-0400 "ismaster" : true, [js_test:auth] 2015-10-13T18:47:40.888-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.888-0400 "secondary" : false, [js_test:auth] 2015-10-13T18:47:40.889-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.889-0400 }, [js_test:auth] 2015-10-13T18:47:40.889-0400 { [js_test:auth] 2015-10-13T18:47:40.889-0400 "addr" : "ubuntu:20266", [js_test:auth] 2015-10-13T18:47:40.889-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.889-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:40.889-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.889-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:40.889-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.890-0400 }, [js_test:auth] 2015-10-13T18:47:40.890-0400 { [js_test:auth] 2015-10-13T18:47:40.890-0400 "addr" : 
"ubuntu:20267", [js_test:auth] 2015-10-13T18:47:40.890-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.890-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:40.890-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.890-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:40.890-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.890-0400 } [js_test:auth] 2015-10-13T18:47:40.890-0400 ] [js_test:auth] 2015-10-13T18:47:40.891-0400 }, [js_test:auth] 2015-10-13T18:47:40.891-0400 "auth-configRS" : { [js_test:auth] 2015-10-13T18:47:40.891-0400 "hosts" : [ [js_test:auth] 2015-10-13T18:47:40.891-0400 { [js_test:auth] 2015-10-13T18:47:40.891-0400 "addr" : "ubuntu:20260", [js_test:auth] 2015-10-13T18:47:40.891-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.891-0400 "ismaster" : true, [js_test:auth] 2015-10-13T18:47:40.891-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.891-0400 "secondary" : false, [js_test:auth] 2015-10-13T18:47:40.891-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.892-0400 }, [js_test:auth] 2015-10-13T18:47:40.892-0400 { [js_test:auth] 2015-10-13T18:47:40.892-0400 "addr" : "ubuntu:20261", [js_test:auth] 2015-10-13T18:47:40.892-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.892-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:40.892-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.892-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:40.892-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.892-0400 }, [js_test:auth] 2015-10-13T18:47:40.892-0400 { [js_test:auth] 2015-10-13T18:47:40.893-0400 "addr" : "ubuntu:20262", [js_test:auth] 2015-10-13T18:47:40.893-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.893-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:40.893-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.893-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:40.893-0400 "pingTimeMillis" : 0 [js_test:auth] 
2015-10-13T18:47:40.893-0400 } [js_test:auth] 2015-10-13T18:47:40.893-0400 ] [js_test:auth] 2015-10-13T18:47:40.893-0400 }, [js_test:auth] 2015-10-13T18:47:40.894-0400 "d2" : { [js_test:auth] 2015-10-13T18:47:40.894-0400 "hosts" : [ [js_test:auth] 2015-10-13T18:47:40.894-0400 { [js_test:auth] 2015-10-13T18:47:40.894-0400 "addr" : "ubuntu:20268", [js_test:auth] 2015-10-13T18:47:40.894-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.894-0400 "ismaster" : true, [js_test:auth] 2015-10-13T18:47:40.894-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.894-0400 "secondary" : false, [js_test:auth] 2015-10-13T18:47:40.894-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.894-0400 }, [js_test:auth] 2015-10-13T18:47:40.895-0400 { [js_test:auth] 2015-10-13T18:47:40.895-0400 "addr" : "ubuntu:20269", [js_test:auth] 2015-10-13T18:47:40.895-0400 "ok" : false, [js_test:auth] 2015-10-13T18:47:40.895-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:40.895-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.895-0400 "secondary" : false, [js_test:auth] 2015-10-13T18:47:40.895-0400 "pingTimeMillis" : 2147483647 [js_test:auth] 2015-10-13T18:47:40.895-0400 }, [js_test:auth] 2015-10-13T18:47:40.895-0400 { [js_test:auth] 2015-10-13T18:47:40.895-0400 "addr" : "ubuntu:20270", [js_test:auth] 2015-10-13T18:47:40.896-0400 "ok" : false, [js_test:auth] 2015-10-13T18:47:40.896-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:40.896-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.896-0400 "secondary" : false, [js_test:auth] 2015-10-13T18:47:40.896-0400 "pingTimeMillis" : 2147483647 [js_test:auth] 2015-10-13T18:47:40.896-0400 } [js_test:auth] 2015-10-13T18:47:40.896-0400 ] [js_test:auth] 2015-10-13T18:47:40.896-0400 } [js_test:auth] 2015-10-13T18:47:40.896-0400 } [js_test:auth] 2015-10-13T18:47:40.897-0400 Awaiting ubuntu:20266 to be { "ok" : true } for connection to localhost:20264 (rs: undefined) [js_test:auth] 2015-10-13T18:47:40.897-0400 { 
[js_test:auth] 2015-10-13T18:47:40.897-0400 "d1" : { [js_test:auth] 2015-10-13T18:47:40.897-0400 "hosts" : [ [js_test:auth] 2015-10-13T18:47:40.897-0400 { [js_test:auth] 2015-10-13T18:47:40.897-0400 "addr" : "ubuntu:20265", [js_test:auth] 2015-10-13T18:47:40.897-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.897-0400 "ismaster" : true, [js_test:auth] 2015-10-13T18:47:40.897-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.897-0400 "secondary" : false, [js_test:auth] 2015-10-13T18:47:40.897-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.897-0400 }, [js_test:auth] 2015-10-13T18:47:40.897-0400 { [js_test:auth] 2015-10-13T18:47:40.897-0400 "addr" : "ubuntu:20266", [js_test:auth] 2015-10-13T18:47:40.897-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.897-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:40.897-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.897-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:40.898-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.898-0400 }, [js_test:auth] 2015-10-13T18:47:40.898-0400 { [js_test:auth] 2015-10-13T18:47:40.898-0400 "addr" : "ubuntu:20267", [js_test:auth] 2015-10-13T18:47:40.898-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.898-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:40.898-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.898-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:40.898-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.898-0400 } [js_test:auth] 2015-10-13T18:47:40.898-0400 ] [js_test:auth] 2015-10-13T18:47:40.898-0400 }, [js_test:auth] 2015-10-13T18:47:40.898-0400 "auth-configRS" : { [js_test:auth] 2015-10-13T18:47:40.898-0400 "hosts" : [ [js_test:auth] 2015-10-13T18:47:40.898-0400 { [js_test:auth] 2015-10-13T18:47:40.898-0400 "addr" : "ubuntu:20260", [js_test:auth] 2015-10-13T18:47:40.898-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.898-0400 "ismaster" : true, [js_test:auth] 
2015-10-13T18:47:40.899-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.899-0400 "secondary" : false, [js_test:auth] 2015-10-13T18:47:40.899-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.899-0400 }, [js_test:auth] 2015-10-13T18:47:40.899-0400 { [js_test:auth] 2015-10-13T18:47:40.899-0400 "addr" : "ubuntu:20261", [js_test:auth] 2015-10-13T18:47:40.899-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.899-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:40.899-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.899-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:40.899-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.899-0400 }, [js_test:auth] 2015-10-13T18:47:40.899-0400 { [js_test:auth] 2015-10-13T18:47:40.899-0400 "addr" : "ubuntu:20262", [js_test:auth] 2015-10-13T18:47:40.900-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.900-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:40.900-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.900-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:40.900-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.900-0400 } [js_test:auth] 2015-10-13T18:47:40.900-0400 ] [js_test:auth] 2015-10-13T18:47:40.900-0400 }, [js_test:auth] 2015-10-13T18:47:40.900-0400 "d2" : { [js_test:auth] 2015-10-13T18:47:40.900-0400 "hosts" : [ [js_test:auth] 2015-10-13T18:47:40.900-0400 { [js_test:auth] 2015-10-13T18:47:40.900-0400 "addr" : "ubuntu:20268", [js_test:auth] 2015-10-13T18:47:40.900-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:40.901-0400 "ismaster" : true, [js_test:auth] 2015-10-13T18:47:40.901-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:40.901-0400 "secondary" : false, [js_test:auth] 2015-10-13T18:47:40.901-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:40.901-0400 }, [js_test:auth] 2015-10-13T18:47:40.901-0400 { [js_test:auth] 2015-10-13T18:47:40.901-0400 "addr" : "ubuntu:20269", [js_test:auth] 
2015-10-13T18:47:40.901-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:40.901-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.901-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.901-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.901-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:40.901-0400             },
[js_test:auth] 2015-10-13T18:47:40.901-0400             {
[js_test:auth] 2015-10-13T18:47:40.901-0400                 "addr" : "ubuntu:20270",
[js_test:auth] 2015-10-13T18:47:40.901-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:40.901-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.901-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.902-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.902-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:40.902-0400             }
[js_test:auth] 2015-10-13T18:47:40.902-0400         ]
[js_test:auth] 2015-10-13T18:47:40.902-0400     }
[js_test:auth] 2015-10-13T18:47:40.902-0400 }
[js_test:auth] 2015-10-13T18:47:40.902-0400 Awaiting ubuntu:20267 to be { "ok" : true } for connection to localhost:20264 (rs: undefined)
[js_test:auth] 2015-10-13T18:47:40.902-0400 {
[js_test:auth] 2015-10-13T18:47:40.902-0400     "d1" : {
[js_test:auth] 2015-10-13T18:47:40.902-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:40.902-0400             {
[js_test:auth] 2015-10-13T18:47:40.902-0400                 "addr" : "ubuntu:20265",
[js_test:auth] 2015-10-13T18:47:40.902-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.902-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:40.902-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.902-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.902-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.902-0400             },
[js_test:auth] 2015-10-13T18:47:40.903-0400             {
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "addr" : "ubuntu:20266",
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.903-0400             },
[js_test:auth] 2015-10-13T18:47:40.903-0400             {
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "addr" : "ubuntu:20267",
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.903-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.904-0400             }
[js_test:auth] 2015-10-13T18:47:40.904-0400         ]
[js_test:auth] 2015-10-13T18:47:40.904-0400     },
[js_test:auth] 2015-10-13T18:47:40.904-0400     "auth-configRS" : {
[js_test:auth] 2015-10-13T18:47:40.904-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:40.904-0400             {
[js_test:auth] 2015-10-13T18:47:40.904-0400                 "addr" : "ubuntu:20260",
[js_test:auth] 2015-10-13T18:47:40.904-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.904-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:40.904-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.904-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.904-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.904-0400             },
[js_test:auth] 2015-10-13T18:47:40.904-0400             {
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "addr" : "ubuntu:20261",
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.905-0400             },
[js_test:auth] 2015-10-13T18:47:40.905-0400             {
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "addr" : "ubuntu:20262",
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.905-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.905-0400             }
[js_test:auth] 2015-10-13T18:47:40.905-0400         ]
[js_test:auth] 2015-10-13T18:47:40.905-0400     },
[js_test:auth] 2015-10-13T18:47:40.905-0400     "d2" : {
[js_test:auth] 2015-10-13T18:47:40.906-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:40.906-0400             {
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "addr" : "ubuntu:20268",
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.906-0400             },
[js_test:auth] 2015-10-13T18:47:40.906-0400             {
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "addr" : "ubuntu:20269",
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.906-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:40.906-0400             },
[js_test:auth] 2015-10-13T18:47:40.906-0400             {
[js_test:auth] 2015-10-13T18:47:40.907-0400                 "addr" : "ubuntu:20270",
[js_test:auth] 2015-10-13T18:47:40.907-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:40.907-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.907-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.907-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.907-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:40.907-0400             }
[js_test:auth] 2015-10-13T18:47:40.907-0400         ]
[js_test:auth] 2015-10-13T18:47:40.907-0400     }
[js_test:auth] 2015-10-13T18:47:40.907-0400 }
[js_test:auth] 2015-10-13T18:47:40.907-0400 Awaiting ubuntu:20268 to be { "ok" : true } for connection to localhost:20264 (rs: undefined)
[js_test:auth] 2015-10-13T18:47:40.907-0400 {
[js_test:auth] 2015-10-13T18:47:40.907-0400     "d1" : {
[js_test:auth] 2015-10-13T18:47:40.907-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:40.907-0400             {
[js_test:auth] 2015-10-13T18:47:40.907-0400                 "addr" : "ubuntu:20265",
[js_test:auth] 2015-10-13T18:47:40.907-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.907-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.908-0400             },
[js_test:auth] 2015-10-13T18:47:40.908-0400             {
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "addr" : "ubuntu:20266",
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.908-0400             },
[js_test:auth] 2015-10-13T18:47:40.908-0400             {
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "addr" : "ubuntu:20267",
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.908-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.909-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.909-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.909-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.909-0400             }
[js_test:auth] 2015-10-13T18:47:40.909-0400         ]
[js_test:auth] 2015-10-13T18:47:40.909-0400     },
[js_test:auth] 2015-10-13T18:47:40.909-0400     "auth-configRS" : {
[js_test:auth] 2015-10-13T18:47:40.909-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:40.910-0400             {
[js_test:auth] 2015-10-13T18:47:40.910-0400                 "addr" : "ubuntu:20260",
[js_test:auth] 2015-10-13T18:47:40.910-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.910-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:40.910-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.910-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.910-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.910-0400             },
[js_test:auth] 2015-10-13T18:47:40.910-0400             {
[js_test:auth] 2015-10-13T18:47:40.910-0400                 "addr" : "ubuntu:20261",
[js_test:auth] 2015-10-13T18:47:40.911-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.911-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.911-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.911-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.911-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.911-0400             },
[js_test:auth] 2015-10-13T18:47:40.911-0400             {
[js_test:auth] 2015-10-13T18:47:40.911-0400                 "addr" : "ubuntu:20262",
[js_test:auth] 2015-10-13T18:47:40.912-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.912-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.912-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.912-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.912-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.912-0400             }
[js_test:auth] 2015-10-13T18:47:40.912-0400         ]
[js_test:auth] 2015-10-13T18:47:40.912-0400     },
[js_test:auth] 2015-10-13T18:47:40.912-0400     "d2" : {
[js_test:auth] 2015-10-13T18:47:40.913-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:40.913-0400             {
[js_test:auth] 2015-10-13T18:47:40.913-0400                 "addr" : "ubuntu:20268",
[js_test:auth] 2015-10-13T18:47:40.913-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.913-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:40.913-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.913-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.914-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.914-0400             },
[js_test:auth] 2015-10-13T18:47:40.914-0400             {
[js_test:auth] 2015-10-13T18:47:40.914-0400                 "addr" : "ubuntu:20269",
[js_test:auth] 2015-10-13T18:47:40.914-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:40.914-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.914-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.914-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.914-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:40.914-0400             },
[js_test:auth] 2015-10-13T18:47:40.914-0400             {
[js_test:auth] 2015-10-13T18:47:40.914-0400                 "addr" : "ubuntu:20270",
[js_test:auth] 2015-10-13T18:47:40.915-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:40.915-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.915-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.915-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.915-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:40.915-0400             }
[js_test:auth] 2015-10-13T18:47:40.915-0400         ]
[js_test:auth] 2015-10-13T18:47:40.915-0400     }
[js_test:auth] 2015-10-13T18:47:40.915-0400 }
[js_test:auth] 2015-10-13T18:47:40.915-0400 Awaiting ubuntu:20269 to be { "ok" : true } for connection to localhost:20264 (rs: undefined)
[js_test:auth] 2015-10-13T18:47:40.915-0400 {
[js_test:auth] 2015-10-13T18:47:40.916-0400     "d1" : {
[js_test:auth] 2015-10-13T18:47:40.916-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:40.916-0400             {
[js_test:auth] 2015-10-13T18:47:40.916-0400                 "addr" : "ubuntu:20265",
[js_test:auth] 2015-10-13T18:47:40.916-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.916-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:40.916-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.916-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.916-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.916-0400             },
[js_test:auth] 2015-10-13T18:47:40.916-0400             {
[js_test:auth] 2015-10-13T18:47:40.917-0400                 "addr" : "ubuntu:20266",
[js_test:auth] 2015-10-13T18:47:40.917-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.917-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.917-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.917-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.917-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.917-0400             },
[js_test:auth] 2015-10-13T18:47:40.917-0400             {
[js_test:auth] 2015-10-13T18:47:40.917-0400                 "addr" : "ubuntu:20267",
[js_test:auth] 2015-10-13T18:47:40.917-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.917-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.918-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.918-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.918-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.918-0400             }
[js_test:auth] 2015-10-13T18:47:40.918-0400         ]
[js_test:auth] 2015-10-13T18:47:40.918-0400     },
[js_test:auth] 2015-10-13T18:47:40.918-0400     "auth-configRS" : {
[js_test:auth] 2015-10-13T18:47:40.918-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:40.918-0400             {
[js_test:auth] 2015-10-13T18:47:40.918-0400                 "addr" : "ubuntu:20260",
[js_test:auth] 2015-10-13T18:47:40.918-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.919-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:40.919-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.919-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.919-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.919-0400             },
[js_test:auth] 2015-10-13T18:47:40.919-0400             {
[js_test:auth] 2015-10-13T18:47:40.920-0400                 "addr" : "ubuntu:20261",
[js_test:auth] 2015-10-13T18:47:40.920-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.920-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.920-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.921-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.921-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.921-0400             },
[js_test:auth] 2015-10-13T18:47:40.921-0400             {
[js_test:auth] 2015-10-13T18:47:40.921-0400                 "addr" : "ubuntu:20262",
[js_test:auth] 2015-10-13T18:47:40.921-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.921-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.921-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.921-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:40.922-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.922-0400             }
[js_test:auth] 2015-10-13T18:47:40.922-0400         ]
[js_test:auth] 2015-10-13T18:47:40.922-0400     },
[js_test:auth] 2015-10-13T18:47:40.922-0400     "d2" : {
[js_test:auth] 2015-10-13T18:47:40.922-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:40.922-0400             {
[js_test:auth] 2015-10-13T18:47:40.922-0400                 "addr" : "ubuntu:20268",
[js_test:auth] 2015-10-13T18:47:40.922-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:40.922-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:40.922-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.923-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.923-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:40.923-0400             },
[js_test:auth] 2015-10-13T18:47:40.923-0400             {
[js_test:auth] 2015-10-13T18:47:40.923-0400                 "addr" : "ubuntu:20269",
[js_test:auth] 2015-10-13T18:47:40.923-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:40.923-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.923-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.923-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.923-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:40.923-0400             },
[js_test:auth] 2015-10-13T18:47:40.924-0400             {
[js_test:auth] 2015-10-13T18:47:40.924-0400                 "addr" : "ubuntu:20270",
[js_test:auth] 2015-10-13T18:47:40.924-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:40.924-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:40.924-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:40.924-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:40.924-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:40.924-0400             }
[js_test:auth] 2015-10-13T18:47:40.924-0400         ]
[js_test:auth] 2015-10-13T18:47:40.924-0400     }
[js_test:auth] 2015-10-13T18:47:40.924-0400 }
[js_test:auth] 2015-10-13T18:47:42.330-0400 d20270| 2015-10-13T18:47:42.330-0400 I REPL     [ReplicationExecutor] Member ubuntu:20268 is now in state PRIMARY
[js_test:auth] 2015-10-13T18:47:42.331-0400 d20269| 2015-10-13T18:47:42.330-0400 I REPL     [ReplicationExecutor] Member ubuntu:20268 is now in state PRIMARY
[js_test:auth] 2015-10-13T18:47:42.903-0400 {
[js_test:auth] 2015-10-13T18:47:42.904-0400     "d1" : {
[js_test:auth] 2015-10-13T18:47:42.904-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:42.904-0400             {
[js_test:auth] 2015-10-13T18:47:42.904-0400                 "addr" : "ubuntu:20265",
[js_test:auth] 2015-10-13T18:47:42.904-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:42.904-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:42.904-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:42.905-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:42.905-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:42.905-0400             },
[js_test:auth] 2015-10-13T18:47:42.905-0400             {
[js_test:auth] 2015-10-13T18:47:42.905-0400                 "addr" : "ubuntu:20266",
[js_test:auth] 2015-10-13T18:47:42.905-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:42.905-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:42.905-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:42.905-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:42.905-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:42.906-0400             },
[js_test:auth] 2015-10-13T18:47:42.906-0400             {
[js_test:auth] 2015-10-13T18:47:42.906-0400                 "addr" : "ubuntu:20267",
[js_test:auth] 2015-10-13T18:47:42.906-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:42.906-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:42.906-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:42.906-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:42.906-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:42.906-0400             }
[js_test:auth] 2015-10-13T18:47:42.906-0400         ]
[js_test:auth] 2015-10-13T18:47:42.906-0400     },
[js_test:auth] 2015-10-13T18:47:42.907-0400     "auth-configRS" : {
[js_test:auth] 2015-10-13T18:47:42.907-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:42.907-0400             {
[js_test:auth] 2015-10-13T18:47:42.907-0400                 "addr" : "ubuntu:20260",
[js_test:auth] 2015-10-13T18:47:42.907-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:42.907-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:42.907-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:42.907-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:42.907-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:42.907-0400             },
[js_test:auth] 2015-10-13T18:47:42.908-0400             {
[js_test:auth] 2015-10-13T18:47:42.908-0400                 "addr" : "ubuntu:20261",
[js_test:auth] 2015-10-13T18:47:42.908-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:42.908-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:42.908-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:42.908-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:42.908-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:42.908-0400             },
[js_test:auth] 2015-10-13T18:47:42.908-0400             {
[js_test:auth] 2015-10-13T18:47:42.908-0400                 "addr" : "ubuntu:20262",
[js_test:auth] 2015-10-13T18:47:42.908-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:42.908-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:42.908-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:42.908-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:42.909-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:42.909-0400             }
[js_test:auth] 2015-10-13T18:47:42.909-0400         ]
[js_test:auth] 2015-10-13T18:47:42.909-0400     },
[js_test:auth] 2015-10-13T18:47:42.909-0400     "d2" : {
[js_test:auth] 2015-10-13T18:47:42.909-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:42.909-0400             {
[js_test:auth] 2015-10-13T18:47:42.909-0400                 "addr" : "ubuntu:20268",
[js_test:auth] 2015-10-13T18:47:42.909-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:42.909-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:42.909-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:42.909-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:42.909-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:42.909-0400             },
[js_test:auth] 2015-10-13T18:47:42.909-0400             {
[js_test:auth] 2015-10-13T18:47:42.909-0400                 "addr" : "ubuntu:20269",
[js_test:auth] 2015-10-13T18:47:42.909-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:42.909-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:42.910-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:42.910-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:42.910-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:42.910-0400             },
[js_test:auth] 2015-10-13T18:47:42.910-0400             {
[js_test:auth] 2015-10-13T18:47:42.910-0400                 "addr" : "ubuntu:20270",
[js_test:auth] 2015-10-13T18:47:42.910-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:42.910-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:42.910-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:42.910-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:42.910-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:42.910-0400             }
[js_test:auth] 2015-10-13T18:47:42.910-0400         ]
[js_test:auth] 2015-10-13T18:47:42.910-0400     }
[js_test:auth] 2015-10-13T18:47:42.910-0400 }
[js_test:auth] 2015-10-13T18:47:43.330-0400 d20269| 2015-10-13T18:47:43.330-0400 I REPL     [ReplicationExecutor] syncing from: ubuntu:20268
[js_test:auth] 2015-10-13T18:47:43.331-0400 d20270| 2015-10-13T18:47:43.330-0400 I REPL     [ReplicationExecutor] syncing from: ubuntu:20268
[js_test:auth] 2015-10-13T18:47:43.331-0400 d20268| 2015-10-13T18:47:43.330-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:54371 #11 (6 connections now open)
[js_test:auth] 2015-10-13T18:47:43.331-0400 d20268| 2015-10-13T18:47:43.331-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:54372 #12 (7 connections now open)
[js_test:auth] 2015-10-13T18:47:43.355-0400 d20268| 2015-10-13T18:47:43.355-0400 I ACCESS   [conn12] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:43.356-0400 d20270| 2015-10-13T18:47:43.355-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to ubuntu:20268
[js_test:auth] 2015-10-13T18:47:43.356-0400 d20268| 2015-10-13T18:47:43.355-0400 I NETWORK  [conn12] end connection 127.0.0.1:54372 (6 connections now open)
[js_test:auth] 2015-10-13T18:47:43.356-0400 d20268| 2015-10-13T18:47:43.356-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:54375 #13 (7 connections now open)
[js_test:auth] 2015-10-13T18:47:43.356-0400 d20268| 2015-10-13T18:47:43.356-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:54376 #14 (8 connections now open)
[js_test:auth] 2015-10-13T18:47:43.356-0400 d20268| 2015-10-13T18:47:43.356-0400 I ACCESS   [conn11] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:43.356-0400 d20269| 2015-10-13T18:47:43.356-0400 I REPL     [SyncSourceFeedback] setting syncSourceFeedback to ubuntu:20268
[js_test:auth] 2015-10-13T18:47:43.356-0400 d20268| 2015-10-13T18:47:43.356-0400 I NETWORK  [conn11] end connection 127.0.0.1:54371 (7 connections now open)
[js_test:auth] 2015-10-13T18:47:43.356-0400 d20268| 2015-10-13T18:47:43.356-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:54377 #15 (8 connections now open)
[js_test:auth] 2015-10-13T18:47:43.357-0400 d20268| 2015-10-13T18:47:43.356-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:54378 #16 (9 connections now open)
[js_test:auth] 2015-10-13T18:47:43.377-0400 d20268| 2015-10-13T18:47:43.377-0400 I ACCESS   [conn13] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:43.377-0400 d20268| 2015-10-13T18:47:43.377-0400 I ACCESS   [conn14] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:43.377-0400 d20270| 2015-10-13T18:47:43.377-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20268
[js_test:auth] 2015-10-13T18:47:43.378-0400 d20268| 2015-10-13T18:47:43.378-0400 I ACCESS   [conn15] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:43.379-0400 d20268| 2015-10-13T18:47:43.379-0400 I ACCESS   [conn16] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:43.379-0400 d20269| 2015-10-13T18:47:43.379-0400 I ASIO     [NetworkInterfaceASIO] Successfully connected to ubuntu:20268
[js_test:auth] 2015-10-13T18:47:44.910-0400 {
[js_test:auth] 2015-10-13T18:47:44.910-0400     "d1" : {
[js_test:auth] 2015-10-13T18:47:44.910-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:44.910-0400             {
[js_test:auth] 2015-10-13T18:47:44.910-0400                 "addr" : "ubuntu:20265",
[js_test:auth] 2015-10-13T18:47:44.910-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:44.910-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:44.910-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:44.910-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:44.910-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:44.911-0400             },
[js_test:auth] 2015-10-13T18:47:44.911-0400             {
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "addr" : "ubuntu:20266",
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:44.911-0400             },
[js_test:auth] 2015-10-13T18:47:44.911-0400             {
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "addr" : "ubuntu:20267",
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:44.911-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:44.911-0400             }
[js_test:auth] 2015-10-13T18:47:44.911-0400         ]
[js_test:auth] 2015-10-13T18:47:44.912-0400     },
[js_test:auth] 2015-10-13T18:47:44.912-0400     "auth-configRS" : {
[js_test:auth] 2015-10-13T18:47:44.912-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:44.912-0400             {
[js_test:auth] 2015-10-13T18:47:44.912-0400                 "addr" : "ubuntu:20260",
[js_test:auth] 2015-10-13T18:47:44.912-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:44.912-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:44.912-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:44.912-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:44.912-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:44.912-0400             },
[js_test:auth] 2015-10-13T18:47:44.912-0400             {
[js_test:auth] 2015-10-13T18:47:44.912-0400                 "addr" : "ubuntu:20261",
[js_test:auth] 2015-10-13T18:47:44.912-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:44.912-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:44.912-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:44.912-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:44.913-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:44.913-0400             },
[js_test:auth] 2015-10-13T18:47:44.913-0400             {
[js_test:auth] 2015-10-13T18:47:44.913-0400                 "addr" : "ubuntu:20262",
[js_test:auth] 2015-10-13T18:47:44.913-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:44.913-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:44.913-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:44.913-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:44.913-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:44.913-0400             }
[js_test:auth] 2015-10-13T18:47:44.913-0400         ]
[js_test:auth] 2015-10-13T18:47:44.913-0400     },
[js_test:auth] 2015-10-13T18:47:44.913-0400     "d2" : {
[js_test:auth] 2015-10-13T18:47:44.913-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:44.913-0400             {
[js_test:auth] 2015-10-13T18:47:44.913-0400                 "addr" : "ubuntu:20268",
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:44.914-0400             },
[js_test:auth] 2015-10-13T18:47:44.914-0400             {
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "addr" : "ubuntu:20269",
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:44.914-0400             },
[js_test:auth] 2015-10-13T18:47:44.914-0400             {
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "addr" : "ubuntu:20270",
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:44.914-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:44.915-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:44.915-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:44.915-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:44.915-0400             }
[js_test:auth] 2015-10-13T18:47:44.915-0400         ]
[js_test:auth] 2015-10-13T18:47:44.915-0400     }
[js_test:auth] 2015-10-13T18:47:44.915-0400 }
[js_test:auth] 2015-10-13T18:47:46.917-0400 {
[js_test:auth] 2015-10-13T18:47:46.917-0400     "d1" : {
[js_test:auth] 2015-10-13T18:47:46.917-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:46.917-0400             {
[js_test:auth] 2015-10-13T18:47:46.917-0400                 "addr" : "ubuntu:20265",
[js_test:auth] 2015-10-13T18:47:46.917-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:46.917-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:46.917-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:46.918-0400             },
[js_test:auth] 2015-10-13T18:47:46.918-0400             {
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "addr" : "ubuntu:20266",
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:46.918-0400             },
[js_test:auth] 2015-10-13T18:47:46.918-0400             {
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "addr" : "ubuntu:20267",
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:46.918-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:46.919-0400             }
[js_test:auth] 2015-10-13T18:47:46.919-0400         ]
[js_test:auth] 2015-10-13T18:47:46.919-0400     },
[js_test:auth] 2015-10-13T18:47:46.919-0400     "auth-configRS" : {
[js_test:auth] 2015-10-13T18:47:46.919-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:46.919-0400             {
[js_test:auth] 2015-10-13T18:47:46.919-0400                 "addr" : "ubuntu:20260",
[js_test:auth] 2015-10-13T18:47:46.919-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:46.919-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:46.919-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:46.919-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:46.919-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:46.919-0400             },
[js_test:auth] 2015-10-13T18:47:46.919-0400             {
[js_test:auth] 2015-10-13T18:47:46.919-0400                 "addr" : "ubuntu:20261",
[js_test:auth] 2015-10-13T18:47:46.919-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:46.919-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:46.920-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:46.920-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:46.920-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:46.920-0400             },
[js_test:auth] 2015-10-13T18:47:46.920-0400             {
[js_test:auth] 2015-10-13T18:47:46.920-0400                 "addr" : "ubuntu:20262",
[js_test:auth] 2015-10-13T18:47:46.920-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:46.920-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:46.920-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:46.920-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:46.920-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:46.920-0400             }
[js_test:auth] 2015-10-13T18:47:46.920-0400         ]
[js_test:auth] 2015-10-13T18:47:46.920-0400     },
[js_test:auth] 2015-10-13T18:47:46.920-0400     "d2" : {
[js_test:auth] 2015-10-13T18:47:46.920-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:46.920-0400             {
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "addr" : "ubuntu:20268",
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:46.921-0400             },
[js_test:auth] 2015-10-13T18:47:46.921-0400             {
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "addr" : "ubuntu:20269",
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:46.921-0400             },
[js_test:auth] 2015-10-13T18:47:46.921-0400             {
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "addr" : "ubuntu:20270",
[js_test:auth] 2015-10-13T18:47:46.921-0400                 "ok" : false,
[js_test:auth] 2015-10-13T18:47:46.922-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:46.922-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:46.922-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:46.922-0400                 "pingTimeMillis" : 2147483647
[js_test:auth] 2015-10-13T18:47:46.922-0400             }
[js_test:auth] 2015-10-13T18:47:46.922-0400         ]
[js_test:auth] 2015-10-13T18:47:46.922-0400     }
[js_test:auth] 2015-10-13T18:47:46.922-0400 }
[js_test:auth] 2015-10-13T18:47:47.799-0400 s20264| 2015-10-13T18:47:47.799-0400 D NETWORK  [ReplicaSetMonitorWatcher] checking replica set: d1
[js_test:auth] 2015-10-13T18:47:47.799-0400 s20264| 2015-10-13T18:47:47.799-0400 D NETWORK  [ReplicaSetMonitorWatcher] Starting new refresh of replica set d1
[js_test:auth] 2015-10-13T18:47:47.799-0400 s20264| 2015-10-13T18:47:47.799-0400 D NETWORK  [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20265, no events
[js_test:auth] 2015-10-13T18:47:47.799-0400 s20264| 2015-10-13T18:47:47.799-0400 D NETWORK  [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20266, no events
[js_test:auth] 2015-10-13T18:47:47.800-0400 s20264| 2015-10-13T18:47:47.799-0400 D NETWORK  [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20267, no events
[js_test:auth] 2015-10-13T18:47:47.800-0400 s20264| 2015-10-13T18:47:47.800-0400 D NETWORK  [ReplicaSetMonitorWatcher] checking replica set: auth-configRS
[js_test:auth] 2015-10-13T18:47:47.800-0400 s20264| 2015-10-13T18:47:47.800-0400 D NETWORK  [ReplicaSetMonitorWatcher] Starting new refresh of replica set auth-configRS
[js_test:auth] 2015-10-13T18:47:47.800-0400 s20264| 2015-10-13T18:47:47.800-0400 D NETWORK  [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20260, no events
[js_test:auth] 2015-10-13T18:47:47.800-0400 s20264| 2015-10-13T18:47:47.800-0400 D NETWORK  [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20262, no events
[js_test:auth] 2015-10-13T18:47:47.800-0400 s20264| 2015-10-13T18:47:47.800-0400 D NETWORK  [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20261, no events
[js_test:auth] 2015-10-13T18:47:47.800-0400 s20264| 2015-10-13T18:47:47.800-0400 D NETWORK  [ReplicaSetMonitorWatcher] checking replica set: d2
[js_test:auth] 2015-10-13T18:47:47.800-0400 s20264| 2015-10-13T18:47:47.800-0400 D NETWORK  [ReplicaSetMonitorWatcher] creating new connection to:ubuntu:20270
[js_test:auth] 2015-10-13T18:47:47.801-0400 s20264| 2015-10-13T18:47:47.801-0400 D COMMAND  [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:auth] 2015-10-13T18:47:47.801-0400 s20264| 2015-10-13T18:47:47.801-0400 D NETWORK  [ReplicaSetMonitorWatcher] connected to server ubuntu:20270 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:47.801-0400 d20270| 2015-10-13T18:47:47.801-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:50963 #7 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:47.801-0400 s20264| 2015-10-13T18:47:47.801-0400 D NETWORK  [ReplicaSetMonitorWatcher] connected connection!
[js_test:auth] 2015-10-13T18:47:47.801-0400 s20264| 2015-10-13T18:47:47.801-0400 D SHARDING [ReplicaSetMonitorWatcher] calling onCreate auth for ubuntu:20270 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:47.816-0400 d20270| 2015-10-13T18:47:47.816-0400 I ACCESS   [conn7] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:47.817-0400 s20264| 2015-10-13T18:47:47.817-0400 D NETWORK  [ReplicaSetMonitorWatcher] creating new connection to:ubuntu:20269
[js_test:auth] 2015-10-13T18:47:47.817-0400 s20264| 2015-10-13T18:47:47.817-0400 D COMMAND  [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:auth] 2015-10-13T18:47:47.817-0400 s20264| 2015-10-13T18:47:47.817-0400 D NETWORK  [ReplicaSetMonitorWatcher] connected to server ubuntu:20269 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:47.817-0400 d20269| 2015-10-13T18:47:47.817-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:48594 #7 (4 connections now open)
[js_test:auth] 2015-10-13T18:47:47.817-0400 s20264| 2015-10-13T18:47:47.817-0400 D NETWORK  [ReplicaSetMonitorWatcher] connected connection!
[js_test:auth] 2015-10-13T18:47:47.817-0400 s20264| 2015-10-13T18:47:47.817-0400 D SHARDING [ReplicaSetMonitorWatcher] calling onCreate auth for ubuntu:20269 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:47.832-0400 d20269| 2015-10-13T18:47:47.832-0400 I ACCESS   [conn7] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:47.920-0400 Awaiting ubuntu:20270 to be { "ok" : true } for connection to localhost:20264 (rs: undefined)
[js_test:auth] 2015-10-13T18:47:47.922-0400 {
[js_test:auth] 2015-10-13T18:47:47.922-0400     "d1" : {
[js_test:auth] 2015-10-13T18:47:47.922-0400         "hosts" : [
[js_test:auth] 2015-10-13T18:47:47.922-0400             {
[js_test:auth] 2015-10-13T18:47:47.922-0400                 "addr" : "ubuntu:20265",
[js_test:auth] 2015-10-13T18:47:47.922-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:47.922-0400                 "ismaster" : true,
[js_test:auth] 2015-10-13T18:47:47.922-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:47.922-0400                 "secondary" : false,
[js_test:auth] 2015-10-13T18:47:47.922-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:47.922-0400             },
[js_test:auth] 2015-10-13T18:47:47.922-0400             {
[js_test:auth] 2015-10-13T18:47:47.923-0400                 "addr" : "ubuntu:20266",
[js_test:auth] 2015-10-13T18:47:47.923-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:47.923-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:47.923-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:47.923-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:47.923-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:47.923-0400             },
[js_test:auth] 2015-10-13T18:47:47.923-0400             {
[js_test:auth] 2015-10-13T18:47:47.923-0400                 "addr" : "ubuntu:20267",
[js_test:auth] 2015-10-13T18:47:47.923-0400                 "ok" : true,
[js_test:auth] 2015-10-13T18:47:47.923-0400                 "ismaster" : false,
[js_test:auth] 2015-10-13T18:47:47.923-0400                 "hidden" : false,
[js_test:auth] 2015-10-13T18:47:47.924-0400                 "secondary" : true,
[js_test:auth] 2015-10-13T18:47:47.924-0400                 "pingTimeMillis" : 0
[js_test:auth] 2015-10-13T18:47:47.924-0400 } [js_test:auth] 2015-10-13T18:47:47.924-0400 ] [js_test:auth] 2015-10-13T18:47:47.924-0400 }, [js_test:auth] 2015-10-13T18:47:47.924-0400 "auth-configRS" : { [js_test:auth] 2015-10-13T18:47:47.924-0400 "hosts" : [ [js_test:auth] 2015-10-13T18:47:47.924-0400 { [js_test:auth] 2015-10-13T18:47:47.924-0400 "addr" : "ubuntu:20260", [js_test:auth] 2015-10-13T18:47:47.924-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:47.924-0400 "ismaster" : true, [js_test:auth] 2015-10-13T18:47:47.924-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:47.924-0400 "secondary" : false, [js_test:auth] 2015-10-13T18:47:47.925-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:47.925-0400 }, [js_test:auth] 2015-10-13T18:47:47.925-0400 { [js_test:auth] 2015-10-13T18:47:47.925-0400 "addr" : "ubuntu:20261", [js_test:auth] 2015-10-13T18:47:47.925-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:47.925-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:47.925-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:47.925-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:47.925-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:47.925-0400 }, [js_test:auth] 2015-10-13T18:47:47.925-0400 { [js_test:auth] 2015-10-13T18:47:47.925-0400 "addr" : "ubuntu:20262", [js_test:auth] 2015-10-13T18:47:47.925-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:47.926-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:47.926-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:47.926-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:47.926-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:47.926-0400 } [js_test:auth] 2015-10-13T18:47:47.926-0400 ] [js_test:auth] 2015-10-13T18:47:47.926-0400 }, [js_test:auth] 2015-10-13T18:47:47.926-0400 "d2" : { [js_test:auth] 2015-10-13T18:47:47.927-0400 "hosts" : [ [js_test:auth] 2015-10-13T18:47:47.927-0400 { [js_test:auth] 2015-10-13T18:47:47.927-0400 "addr" : "ubuntu:20268", 
[js_test:auth] 2015-10-13T18:47:47.927-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:47.927-0400 "ismaster" : true, [js_test:auth] 2015-10-13T18:47:47.927-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:47.927-0400 "secondary" : false, [js_test:auth] 2015-10-13T18:47:47.927-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:47.927-0400 }, [js_test:auth] 2015-10-13T18:47:47.927-0400 { [js_test:auth] 2015-10-13T18:47:47.927-0400 "addr" : "ubuntu:20269", [js_test:auth] 2015-10-13T18:47:47.927-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:47.927-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:47.928-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:47.928-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:47.928-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:47.928-0400 }, [js_test:auth] 2015-10-13T18:47:47.928-0400 { [js_test:auth] 2015-10-13T18:47:47.928-0400 "addr" : "ubuntu:20270", [js_test:auth] 2015-10-13T18:47:47.928-0400 "ok" : true, [js_test:auth] 2015-10-13T18:47:47.928-0400 "ismaster" : false, [js_test:auth] 2015-10-13T18:47:47.928-0400 "hidden" : false, [js_test:auth] 2015-10-13T18:47:47.928-0400 "secondary" : true, [js_test:auth] 2015-10-13T18:47:47.928-0400 "pingTimeMillis" : 0 [js_test:auth] 2015-10-13T18:47:47.928-0400 } [js_test:auth] 2015-10-13T18:47:47.929-0400 ] [js_test:auth] 2015-10-13T18:47:47.929-0400 } [js_test:auth] 2015-10-13T18:47:47.929-0400 } [js_test:auth] 2015-10-13T18:47:47.929-0400 s20264| 2015-10-13T18:47:47.922-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20265, no events [js_test:auth] 2015-10-13T18:47:48.063-0400 s20264| 2015-10-13T18:47:48.063-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:18.063-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776468063), up: 41, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: 
false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:48.063-0400 s20264| 2015-10-13T18:47:48.063-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:48.078-0400 s20264| 2015-10-13T18:47:48.078-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776468000|1, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:48.079-0400 s20264| 2015-10-13T18:47:48.078-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:47:48.079-0400 s20264| 2015-10-13T18:47:48.078-0400 D SHARDING [Balancer] found 2 shards listed on config server(s) [js_test:auth] 2015-10-13T18:47:48.079-0400 s20264| 2015-10-13T18:47:48.079-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776468000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:48.080-0400 s20264| 2015-10-13T18:47:48.079-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:48.080-0400 s20264| 2015-10-13T18:47:48.079-0400 D SHARDING [Balancer] Refreshing MaxChunkSize: 1MB [js_test:auth] 2015-10-13T18:47:48.080-0400 s20264| 2015-10-13T18:47:48.079-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776468000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:48.080-0400 s20264| 2015-10-13T18:47:48.079-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 
2015-10-13T18:47:48.080-0400 s20264| 2015-10-13T18:47:48.079-0400 D SHARDING [Balancer] skipping balancing round because balancing is disabled [js_test:auth] 2015-10-13T18:47:48.080-0400 s20264| 2015-10-13T18:47:48.079-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:18.079-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776468079), up: 41, waiting: true, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:48.081-0400 s20264| 2015-10-13T18:47:48.079-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:48.268-0400 s20264| 2015-10-13T18:47:48.268-0400 D SHARDING [conn1] about to initiate autosplit: ns: test.foo, shard: d1, lastmod: 1|0||561d8a03c06b51335e5d6897, min: { x: MinKey }, max: { x: MaxKey } dataWritten: 93000 splitThreshold: 921 [js_test:auth] 2015-10-13T18:47:48.268-0400 s20264| 2015-10-13T18:47:48.268-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:18.268-0400 cmd:{ splitVector: "test.foo", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, maxChunkSizeBytes: 93000, maxSplitPoints: 0, maxChunkObjects: 250000 } [js_test:auth] 2015-10-13T18:47:48.270-0400 s20264| 2015-10-13T18:47:48.270-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:48.271-0400 d20265| 2015-10-13T18:47:48.270-0400 I SHARDING [conn11] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey } [js_test:auth] 2015-10-13T18:47:48.272-0400 s20264| 2015-10-13T18:47:48.272-0400 D SHARDING [conn1] chunk not full enough to trigger auto-split [js_test:auth] 2015-10-13T18:47:48.307-0400 s20264| 2015-10-13T18:47:48.306-0400 D SHARDING [conn1] about to 
initiate autosplit: ns: test.foo, shard: d1, lastmod: 1|0||561d8a03c06b51335e5d6897, min: { x: MinKey }, max: { x: MaxKey } dataWritten: 93000 splitThreshold: 921 [js_test:auth] 2015-10-13T18:47:48.307-0400 s20264| 2015-10-13T18:47:48.306-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:18.306-0400 cmd:{ splitVector: "test.foo", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, maxChunkSizeBytes: 93000, maxSplitPoints: 0, maxChunkObjects: 250000 } [js_test:auth] 2015-10-13T18:47:48.307-0400 s20264| 2015-10-13T18:47:48.307-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:48.308-0400 d20265| 2015-10-13T18:47:48.307-0400 I SHARDING [conn11] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey } [js_test:auth] 2015-10-13T18:47:48.309-0400 s20264| 2015-10-13T18:47:48.309-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20265, no events [js_test:auth] 2015-10-13T18:47:48.309-0400 s20264| 2015-10-13T18:47:48.309-0400 D SHARDING [conn1] calling onCreate auth for d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 [js_test:auth] 2015-10-13T18:47:48.309-0400 s20264| 2015-10-13T18:47:48.309-0400 D NETWORK [conn1] creating new connection to:ubuntu:20265 [js_test:auth] 2015-10-13T18:47:48.309-0400 s20264| 2015-10-13T18:47:48.309-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:47:48.309-0400 d20265| 2015-10-13T18:47:48.309-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:54469 #21 (14 connections now open) [js_test:auth] 2015-10-13T18:47:48.309-0400 s20264| 2015-10-13T18:47:48.309-0400 D NETWORK [conn1] connected to server ubuntu:20265 (127.0.1.1) [js_test:auth] 2015-10-13T18:47:48.310-0400 s20264| 2015-10-13T18:47:48.309-0400 D NETWORK [conn1] connected connection! 
[js_test:auth] 2015-10-13T18:47:48.325-0400 d20265| 2015-10-13T18:47:48.325-0400 I ACCESS [conn21] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:48.325-0400 s20264| 2015-10-13T18:47:48.325-0400 D SHARDING [conn1] initializing shard connection to d1:d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 [js_test:auth] 2015-10-13T18:47:48.325-0400 s20264| 2015-10-13T18:47:48.325-0400 D SHARDING [conn1] setShardVersion d1 ubuntu:20265 { setShardVersion: "", init: true, authoritative: true, configdb: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", shard: "d1", shardHost: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267" } [js_test:auth] 2015-10-13T18:47:48.326-0400 d20265| 2015-10-13T18:47:48.325-0400 I SHARDING [conn21] received splitChunk request: { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 1.0 }, { x: 1001.0 }, { x: 1502.0 } ], configdb: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", shardVersion: [ Timestamp 1000|0, ObjectId('561d8a03c06b51335e5d6897') ], epoch: ObjectId('561d8a03c06b51335e5d6897') } [js_test:auth] 2015-10-13T18:47:48.349-0400 d20265| 2015-10-13T18:47:48.349-0400 I SHARDING [conn21] distributed lock 'test.foo' acquired for 'splitting chunk [{ x: MinKey }, { x: MaxKey }) in test.foo', ts : 561d8a14cf305caadba71ab0 [js_test:auth] 2015-10-13T18:47:48.349-0400 d20265| 2015-10-13T18:47:48.349-0400 I SHARDING [conn21] remotely refreshing metadata for test.foo based on current shard version 1|0||561d8a03c06b51335e5d6897, current metadata version is 1|0||561d8a03c06b51335e5d6897 [js_test:auth] 2015-10-13T18:47:48.350-0400 c20262| 2015-10-13T18:47:48.349-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:49477 #13 (8 connections now open) [js_test:auth] 2015-10-13T18:47:48.367-0400 c20262| 2015-10-13T18:47:48.367-0400 I ACCESS [conn13] Successfully authenticated as principal __system on local [js_test:auth] 
2015-10-13T18:47:48.367-0400 d20265| 2015-10-13T18:47:48.367-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20262 [js_test:auth] 2015-10-13T18:47:48.369-0400 d20265| 2015-10-13T18:47:48.369-0400 I SHARDING [conn21] metadata of collection test.foo already up to date (shard version : 1|0||561d8a03c06b51335e5d6897, took 19ms) [js_test:auth] 2015-10-13T18:47:48.369-0400 d20265| 2015-10-13T18:47:48.369-0400 I SHARDING [conn21] splitChunk accepted at version 1|0||561d8a03c06b51335e5d6897 [js_test:auth] 2015-10-13T18:47:48.370-0400 d20265| 2015-10-13T18:47:48.370-0400 I SHARDING [conn21] about to log metadata event: { _id: "ubuntu-2015-10-13T18:47:48.370-0400-561d8a14cf305caadba71ab1", server: "ubuntu", clientAddr: "127.0.0.1:54469", time: new Date(1444776468370), what: "multi-split", ns: "test.foo", details: { before: { min: { x: MinKey }, max: { x: MaxKey } }, number: 1, of: 4, chunk: { min: { x: MinKey }, max: { x: 1.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897') } } } [js_test:auth] 2015-10-13T18:47:48.385-0400 d20265| 2015-10-13T18:47:48.384-0400 I SHARDING [conn21] about to log metadata event: { _id: "ubuntu-2015-10-13T18:47:48.384-0400-561d8a14cf305caadba71ab2", server: "ubuntu", clientAddr: "127.0.0.1:54469", time: new Date(1444776468384), what: "multi-split", ns: "test.foo", details: { before: { min: { x: MinKey }, max: { x: MaxKey } }, number: 2, of: 4, chunk: { min: { x: 1.0 }, max: { x: 1001.0 }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897') } } } [js_test:auth] 2015-10-13T18:47:48.423-0400 d20265| 2015-10-13T18:47:48.422-0400 I SHARDING [conn21] about to log metadata event: { _id: "ubuntu-2015-10-13T18:47:48.422-0400-561d8a14cf305caadba71ab3", server: "ubuntu", clientAddr: "127.0.0.1:54469", time: new Date(1444776468422), what: "multi-split", ns: "test.foo", details: { before: { min: { x: MinKey }, max: { x: MaxKey } }, number: 3, of: 4, chunk: { min: { x: 1001.0 }, max: 
{ x: 1502.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897') } } } [js_test:auth] 2015-10-13T18:47:48.455-0400 d20265| 2015-10-13T18:47:48.455-0400 I SHARDING [conn21] about to log metadata event: { _id: "ubuntu-2015-10-13T18:47:48.455-0400-561d8a14cf305caadba71ab4", server: "ubuntu", clientAddr: "127.0.0.1:54469", time: new Date(1444776468455), what: "multi-split", ns: "test.foo", details: { before: { min: { x: MinKey }, max: { x: MaxKey } }, number: 4, of: 4, chunk: { min: { x: 1502.0 }, max: { x: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897') } } } [js_test:auth] 2015-10-13T18:47:48.511-0400 d20265| 2015-10-13T18:47:48.511-0400 I SHARDING [conn21] distributed lock with ts: 561d8a14cf305caadba71ab0' unlocked. [js_test:auth] 2015-10-13T18:47:48.512-0400 d20265| 2015-10-13T18:47:48.511-0400 I COMMAND [conn21] command admin.$cmd command: splitChunk { splitChunk: "test.foo", keyPattern: { x: 1.0 }, min: { x: MinKey }, max: { x: MaxKey }, from: "d1", splitKeys: [ { x: 1.0 }, { x: 1001.0 }, { x: 1502.0 } ], configdb: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", shardVersion: [ Timestamp 1000|0, ObjectId('561d8a03c06b51335e5d6897') ], epoch: ObjectId('561d8a03c06b51335e5d6897') } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:188 locks:{ Global: { acquireCount: { r: 6, w: 2 } }, Database: { acquireCount: { r: 2, w: 2 } }, Collection: { acquireCount: { r: 2, W: 2 } } } protocol:op_command 186ms [js_test:auth] 2015-10-13T18:47:48.512-0400 s20264| 2015-10-13T18:47:48.512-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776468000|9, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:48.512-0400 s20264| 2015-10-13T18:47:48.512-0400 D ASIO [NetworkInterfaceASIO] Starting 
asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:48.513-0400 s20264| 2015-10-13T18:47:48.512-0400 D SHARDING [conn1] loading chunk manager for collection test.foo using old chunk manager w/ version 1|0||561d8a03c06b51335e5d6897 and 1 chunks [js_test:auth] 2015-10-13T18:47:48.513-0400 s20264| 2015-10-13T18:47:48.513-0400 D SHARDING [conn1] major version query from 1|0||561d8a03c06b51335e5d6897 and over 1 shards is query: { ns: "test.foo", lastmod: { $gte: Timestamp 1000|0 } }, sort: { lastmod: 1 } [js_test:auth] 2015-10-13T18:47:48.513-0400 s20264| 2015-10-13T18:47:48.513-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "chunks", filter: { ns: "test.foo", lastmod: { $gte: Timestamp 1000|0 } }, sort: { lastmod: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776468000|9, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:48.513-0400 s20264| 2015-10-13T18:47:48.513-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:48.513-0400 s20264| 2015-10-13T18:47:48.513-0400 D SHARDING [conn1] loaded 4 chunks into new chunk manager for test.foo with version 1|4||561d8a03c06b51335e5d6897 [js_test:auth] 2015-10-13T18:47:48.514-0400 s20264| 2015-10-13T18:47:48.513-0400 I SHARDING [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||561d8a03c06b51335e5d6897 based on: 1|0||561d8a03c06b51335e5d6897 [js_test:auth] 2015-10-13T18:47:48.514-0400 s20264| 2015-10-13T18:47:48.513-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776468000|9, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:48.514-0400 s20264| 2015-10-13T18:47:48.513-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 
[js_test:auth] 2015-10-13T18:47:48.514-0400 s20264| 2015-10-13T18:47:48.514-0400 I SHARDING [conn1] autosplitted test.foo shard: ns: test.foo, shard: d1, lastmod: 1|0||561d8a03c06b51335e5d6897, min: { x: MinKey }, max: { x: MaxKey } into 4 (splitThreshold 921) (migrate suggested, but no migrations allowed) [js_test:auth] 2015-10-13T18:47:48.551-0400 s20264| 2015-10-13T18:47:48.551-0400 D SHARDING [conn1] about to initiate autosplit: ns: test.foo, shard: d1, lastmod: 1|4||561d8a03c06b51335e5d6897, min: { x: 1502.0 }, max: { x: MaxKey } dataWritten: 204571 splitThreshold: 943718 [js_test:auth] 2015-10-13T18:47:48.551-0400 s20264| 2015-10-13T18:47:48.551-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:18.551-0400 cmd:{ splitVector: "test.foo", keyPattern: { x: 1.0 }, min: { x: 1502.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, maxSplitPoints: 0, maxChunkObjects: 250000 } [js_test:auth] 2015-10-13T18:47:48.552-0400 s20264| 2015-10-13T18:47:48.551-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:48.552-0400 s20264| 2015-10-13T18:47:48.552-0400 D SHARDING [conn1] chunk not full enough to trigger auto-split [js_test:auth] 2015-10-13T18:47:48.662-0400 s20264| 2015-10-13T18:47:48.662-0400 D SHARDING [conn1] about to initiate autosplit: ns: test.foo, shard: d1, lastmod: 1|4||561d8a03c06b51335e5d6897, min: { x: 1502.0 }, max: { x: MaxKey } dataWritten: 279000 splitThreshold: 943718 [js_test:auth] 2015-10-13T18:47:48.663-0400 s20264| 2015-10-13T18:47:48.662-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:18.662-0400 cmd:{ splitVector: "test.foo", keyPattern: { x: 1.0 }, min: { x: 1502.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, maxSplitPoints: 0, maxChunkObjects: 250000 } [js_test:auth] 2015-10-13T18:47:48.663-0400 s20264| 2015-10-13T18:47:48.662-0400 D ASIO [NetworkInterfaceASIO] 
Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:48.663-0400 s20264| 2015-10-13T18:47:48.662-0400 D SHARDING [conn1] chunk not full enough to trigger auto-split [js_test:auth] 2015-10-13T18:47:48.770-0400 s20264| 2015-10-13T18:47:48.770-0400 D SHARDING [conn1] about to initiate autosplit: ns: test.foo, shard: d1, lastmod: 1|4||561d8a03c06b51335e5d6897, min: { x: 1502.0 }, max: { x: MaxKey } dataWritten: 279000 splitThreshold: 943718 [js_test:auth] 2015-10-13T18:47:48.770-0400 s20264| 2015-10-13T18:47:48.770-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:18.770-0400 cmd:{ splitVector: "test.foo", keyPattern: { x: 1.0 }, min: { x: 1502.0 }, max: { x: MaxKey }, maxChunkSizeBytes: 1048576, maxSplitPoints: 0, maxChunkObjects: 250000 } [js_test:auth] 2015-10-13T18:47:48.771-0400 s20264| 2015-10-13T18:47:48.770-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:48.771-0400 s20264| 2015-10-13T18:47:48.770-0400 D SHARDING [conn1] chunk not full enough to trigger auto-split [js_test:auth] 2015-10-13T18:47:48.823-0400 s20264| 2015-10-13T18:47:48.822-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:18.822-0400 cmd:{ update: "settings", updates: [ { q: { _id: "balancer" }, u: { $set: { stopped: false } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 0 }, ordered: true, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:48.823-0400 s20264| 2015-10-13T18:47:48.822-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:48.843-0400 s20264| 2015-10-13T18:47:48.842-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "collections", filter: { _id: /^config\./ }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776468000|10, t: 
1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:48.843-0400 s20264| 2015-10-13T18:47:48.842-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:48.843-0400 s20264| 2015-10-13T18:47:48.843-0400 D SHARDING [conn1] found 0 collections left and 0 collections dropped for database config [js_test:auth] 2015-10-13T18:47:48.843-0400 s20264| 2015-10-13T18:47:48.843-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:48.843-0400 s20264| 2015-10-13T18:47:48.843-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:48.844-0400 s20264| 2015-10-13T18:47:48.844-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:48.844-0400 s20264| 2015-10-13T18:47:48.844-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:49.045-0400 s20264| 2015-10-13T18:47:49.044-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:49.045-0400 s20264| 2015-10-13T18:47:49.044-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:49.246-0400 s20264| 2015-10-13T18:47:49.245-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:49.246-0400 s20264| 2015-10-13T18:47:49.246-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:49.447-0400 
s20264| 2015-10-13T18:47:49.447-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:49.447-0400 s20264| 2015-10-13T18:47:49.447-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:49.648-0400 s20264| 2015-10-13T18:47:49.648-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:49.648-0400 s20264| 2015-10-13T18:47:49.648-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:49.849-0400 s20264| 2015-10-13T18:47:49.849-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:49.849-0400 s20264| 2015-10-13T18:47:49.849-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:50.050-0400 s20264| 2015-10-13T18:47:50.050-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:50.050-0400 s20264| 2015-10-13T18:47:50.050-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:50.251-0400 s20264| 2015-10-13T18:47:50.251-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:50.251-0400 s20264| 2015-10-13T18:47:50.251-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:50.453-0400 
s20264| 2015-10-13T18:47:50.452-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:50.453-0400 s20264| 2015-10-13T18:47:50.452-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:50.654-0400 s20264| 2015-10-13T18:47:50.654-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:50.654-0400 s20264| 2015-10-13T18:47:50.654-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:50.855-0400 s20264| 2015-10-13T18:47:50.855-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:50.855-0400 s20264| 2015-10-13T18:47:50.855-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:51.056-0400 s20264| 2015-10-13T18:47:51.056-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:51.056-0400 s20264| 2015-10-13T18:47:51.056-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:51.257-0400 s20264| 2015-10-13T18:47:51.257-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:51.257-0400 s20264| 2015-10-13T18:47:51.257-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:51.458-0400 
s20264| 2015-10-13T18:47:51.458-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:51.458-0400 s20264| 2015-10-13T18:47:51.458-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:51.659-0400 s20264| 2015-10-13T18:47:51.659-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:51.660-0400 s20264| 2015-10-13T18:47:51.659-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:51.861-0400 s20264| 2015-10-13T18:47:51.860-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:51.861-0400 s20264| 2015-10-13T18:47:51.861-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:52.062-0400 s20264| 2015-10-13T18:47:52.062-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:52.062-0400 s20264| 2015-10-13T18:47:52.062-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:52.263-0400 s20264| 2015-10-13T18:47:52.263-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:52.263-0400 s20264| 2015-10-13T18:47:52.263-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:52.464-0400 
s20264| 2015-10-13T18:47:52.464-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:52.464-0400 s20264| 2015-10-13T18:47:52.464-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:52.665-0400 s20264| 2015-10-13T18:47:52.665-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:52.666-0400 s20264| 2015-10-13T18:47:52.665-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:52.867-0400 s20264| 2015-10-13T18:47:52.867-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:52.867-0400 s20264| 2015-10-13T18:47:52.867-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:53.068-0400 s20264| 2015-10-13T18:47:53.068-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:53.068-0400 s20264| 2015-10-13T18:47:53.068-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:53.269-0400 s20264| 2015-10-13T18:47:53.269-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:53.269-0400 s20264| 2015-10-13T18:47:53.269-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:53.470-0400 
s20264| 2015-10-13T18:47:53.470-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:53.470-0400 s20264| 2015-10-13T18:47:53.470-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:53.671-0400 s20264| 2015-10-13T18:47:53.671-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:53.671-0400 s20264| 2015-10-13T18:47:53.671-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:53.872-0400 s20264| 2015-10-13T18:47:53.872-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:53.872-0400 s20264| 2015-10-13T18:47:53.872-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:54.073-0400 s20264| 2015-10-13T18:47:54.073-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:54.074-0400 s20264| 2015-10-13T18:47:54.073-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:54.275-0400 s20264| 2015-10-13T18:47:54.275-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:54.275-0400 s20264| 2015-10-13T18:47:54.275-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:54.476-0400 
s20264| 2015-10-13T18:47:54.476-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:54.476-0400 s20264| 2015-10-13T18:47:54.476-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:54.677-0400 s20264| 2015-10-13T18:47:54.677-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:54.677-0400 s20264| 2015-10-13T18:47:54.677-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:54.878-0400 s20264| 2015-10-13T18:47:54.878-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:54.879-0400 s20264| 2015-10-13T18:47:54.878-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:55.080-0400 s20264| 2015-10-13T18:47:55.079-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:55.080-0400 s20264| 2015-10-13T18:47:55.080-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:55.281-0400 s20264| 2015-10-13T18:47:55.281-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:55.282-0400 s20264| 2015-10-13T18:47:55.281-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:55.482-0400 
s20264| 2015-10-13T18:47:55.482-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:55.483-0400 s20264| 2015-10-13T18:47:55.482-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:55.684-0400 s20264| 2015-10-13T18:47:55.684-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:55.684-0400 s20264| 2015-10-13T18:47:55.684-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:55.885-0400 s20264| 2015-10-13T18:47:55.885-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:55.886-0400 s20264| 2015-10-13T18:47:55.885-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:56.087-0400 s20264| 2015-10-13T18:47:56.086-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:56.087-0400 s20264| 2015-10-13T18:47:56.087-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:56.288-0400 s20264| 2015-10-13T18:47:56.288-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:56.288-0400 s20264| 2015-10-13T18:47:56.288-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:56.489-0400 
s20264| 2015-10-13T18:47:56.489-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:56.490-0400 s20264| 2015-10-13T18:47:56.489-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:56.690-0400 s20264| 2015-10-13T18:47:56.690-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:56.691-0400 s20264| 2015-10-13T18:47:56.690-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:56.891-0400 s20264| 2015-10-13T18:47:56.891-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:56.892-0400 s20264| 2015-10-13T18:47:56.891-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:57.093-0400 s20264| 2015-10-13T18:47:57.092-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:57.093-0400 s20264| 2015-10-13T18:47:57.093-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:57.294-0400 s20264| 2015-10-13T18:47:57.294-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:57.295-0400 s20264| 2015-10-13T18:47:57.294-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:57.496-0400 
s20264| 2015-10-13T18:47:57.496-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:57.496-0400 s20264| 2015-10-13T18:47:57.496-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:57.697-0400 s20264| 2015-10-13T18:47:57.697-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:57.697-0400 s20264| 2015-10-13T18:47:57.697-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:57.833-0400 s20264| 2015-10-13T18:47:57.833-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d1 [js_test:auth] 2015-10-13T18:47:57.833-0400 s20264| 2015-10-13T18:47:57.833-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d1 [js_test:auth] 2015-10-13T18:47:57.834-0400 s20264| 2015-10-13T18:47:57.833-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20265, no events [js_test:auth] 2015-10-13T18:47:57.834-0400 s20264| 2015-10-13T18:47:57.833-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20267, no events [js_test:auth] 2015-10-13T18:47:57.834-0400 s20264| 2015-10-13T18:47:57.833-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20266, no events [js_test:auth] 2015-10-13T18:47:57.835-0400 s20264| 2015-10-13T18:47:57.834-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: auth-configRS [js_test:auth] 2015-10-13T18:47:57.835-0400 s20264| 2015-10-13T18:47:57.834-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set auth-configRS [js_test:auth] 2015-10-13T18:47:57.835-0400 s20264| 2015-10-13T18:47:57.834-0400 
D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:47:57.836-0400 s20264| 2015-10-13T18:47:57.834-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20261, no events [js_test:auth] 2015-10-13T18:47:57.836-0400 s20264| 2015-10-13T18:47:57.834-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20262, no events [js_test:auth] 2015-10-13T18:47:57.836-0400 s20264| 2015-10-13T18:47:57.834-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d2 [js_test:auth] 2015-10-13T18:47:57.836-0400 s20264| 2015-10-13T18:47:57.834-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d2 [js_test:auth] 2015-10-13T18:47:57.836-0400 s20264| 2015-10-13T18:47:57.834-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20268, no events [js_test:auth] 2015-10-13T18:47:57.836-0400 s20264| 2015-10-13T18:47:57.834-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20270, no events [js_test:auth] 2015-10-13T18:47:57.837-0400 s20264| 2015-10-13T18:47:57.835-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20269, no events [js_test:auth] 2015-10-13T18:47:57.899-0400 s20264| 2015-10-13T18:47:57.899-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:57.899-0400 s20264| 2015-10-13T18:47:57.899-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:58.099-0400 s20264| 2015-10-13T18:47:58.099-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:28.099-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", 
ping: new Date(1444776478099), up: 51, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:58.099-0400 s20264| 2015-10-13T18:47:58.099-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:58.100-0400 s20264| 2015-10-13T18:47:58.100-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true } [js_test:auth] 2015-10-13T18:47:58.100-0400 s20264| 2015-10-13T18:47:58.100-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:58.113-0400 s20264| 2015-10-13T18:47:58.113-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776478000|1, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:58.113-0400 s20264| 2015-10-13T18:47:58.113-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:58.114-0400 s20264| 2015-10-13T18:47:58.113-0400 D SHARDING [Balancer] found 2 shards listed on config server(s) [js_test:auth] 2015-10-13T18:47:58.114-0400 s20264| 2015-10-13T18:47:58.114-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776478000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:58.114-0400 s20264| 2015-10-13T18:47:58.114-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:47:58.115-0400 s20264| 2015-10-13T18:47:58.114-0400 D SHARDING [Balancer] Refreshing MaxChunkSize: 1MB [js_test:auth] 
2015-10-13T18:47:58.115-0400 s20264| 2015-10-13T18:47:58.114-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776478000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:58.115-0400 s20264| 2015-10-13T18:47:58.114-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:58.115-0400 s20264| 2015-10-13T18:47:58.115-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:28.115-0400 cmd:{ features: 1 } [js_test:auth] 2015-10-13T18:47:58.116-0400 s20264| 2015-10-13T18:47:58.115-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:58.116-0400 s20264| 2015-10-13T18:47:58.115-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:48:28.115-0400 cmd:{ features: 1 } [js_test:auth] 2015-10-13T18:47:58.116-0400 s20264| 2015-10-13T18:47:58.115-0400 D ASIO [NetworkInterfaceASIO] Connecting to ubuntu:20268 [js_test:auth] 2015-10-13T18:47:58.117-0400 s20264| 2015-10-13T18:47:58.115-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:58.117-0400 d20268| 2015-10-13T18:47:58.115-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55268 #17 (10 connections now open) [js_test:auth] 2015-10-13T18:47:58.118-0400 s20264| 2015-10-13T18:47:58.118-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:58.137-0400 s20264| 2015-10-13T18:47:58.137-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:58.137-0400 s20264| 2015-10-13T18:47:58.137-0400 D ASIO [NetworkInterfaceASIO] Starting 
asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:58.138-0400 d20268| 2015-10-13T18:47:58.137-0400 I ACCESS [conn17] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:58.138-0400 s20264| 2015-10-13T18:47:58.138-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:58.138-0400 d20268| 2015-10-13T18:47:58.138-0400 I SHARDING [conn17] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers [js_test:auth] 2015-10-13T18:47:58.138-0400 d20268| 2015-10-13T18:47:58.138-0400 I SHARDING [conn17] Updating config server connection string to: auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262 [js_test:auth] 2015-10-13T18:47:58.138-0400 d20268| 2015-10-13T18:47:58.138-0400 I NETWORK [conn17] Starting new replica set monitor for auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262 [js_test:auth] 2015-10-13T18:47:58.138-0400 d20268| 2015-10-13T18:47:58.138-0400 I NETWORK [ReplicaSetMonitorWatcher] starting [js_test:auth] 2015-10-13T18:47:58.140-0400 d20268| 2015-10-13T18:47:58.140-0400 I SHARDING [thread1] creating distributed lock ping thread for process ubuntu:20268:1444776478:-243159719 (sleeping for 30000ms) [js_test:auth] 2015-10-13T18:47:58.141-0400 c20262| 2015-10-13T18:47:58.141-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50052 #14 (9 connections now open) [js_test:auth] 2015-10-13T18:47:58.141-0400 c20261| 2015-10-13T18:47:58.141-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50061 #14 (9 connections now open) [js_test:auth] 2015-10-13T18:47:58.161-0400 c20262| 2015-10-13T18:47:58.161-0400 I ACCESS [conn14] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:58.162-0400 c20260| 2015-10-13T18:47:58.161-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52936 #24 (14 connections now open) 
[js_test:auth] 2015-10-13T18:47:58.163-0400 c20261| 2015-10-13T18:47:58.163-0400 I ACCESS [conn14] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:58.180-0400 c20260| 2015-10-13T18:47:58.180-0400 I ACCESS [conn24] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:58.181-0400 c20260| 2015-10-13T18:47:58.181-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52938 #25 (15 connections now open) [js_test:auth] 2015-10-13T18:47:58.181-0400 c20262| 2015-10-13T18:47:58.180-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50056 #15 (10 connections now open) [js_test:auth] 2015-10-13T18:47:58.213-0400 c20260| 2015-10-13T18:47:58.213-0400 I ACCESS [conn25] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:58.213-0400 d20268| 2015-10-13T18:47:58.213-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20260 [js_test:auth] 2015-10-13T18:47:58.214-0400 c20262| 2015-10-13T18:47:58.213-0400 I ACCESS [conn15] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:47:58.214-0400 d20268| 2015-10-13T18:47:58.213-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20262 [js_test:auth] 2015-10-13T18:47:58.214-0400 d20268| 2015-10-13T18:47:58.214-0400 I NETWORK [conn17] Starting new replica set monitor for d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 [js_test:auth] 2015-10-13T18:47:58.214-0400 d20268| 2015-10-13T18:47:58.214-0400 I NETWORK [conn17] Starting new replica set monitor for d2/ubuntu:20268,ubuntu:20269,ubuntu:20270 [js_test:auth] 2015-10-13T18:47:58.214-0400 d20268| 2015-10-13T18:47:58.214-0400 I SHARDING [conn17] remote client 127.0.0.1:55268 initialized this host as shard d2 [js_test:auth] 2015-10-13T18:47:58.214-0400 s20264| 2015-10-13T18:47:58.214-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20268 [js_test:auth] 
2015-10-13T18:47:58.214-0400 s20264| 2015-10-13T18:47:58.214-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:58.215-0400 s20264| 2015-10-13T18:47:58.214-0400 D SHARDING [Balancer] trying to acquire new distributed lock for balancer ( lock timeout : 900000 ms, ping interval : 30000 ms, process : ubuntu:20264:1444776427:399327856 ) with lockSessionID: 561d8a1ec06b51335e5d689d, why: doing balance round [js_test:auth] 2015-10-13T18:47:58.215-0400 s20264| 2015-10-13T18:47:58.214-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:28.214-0400 cmd:{ findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('561d8a1ec06b51335e5d689d'), state: 2, who: "ubuntu:20264:1444776427:399327856:Balancer", process: "ubuntu:20264:1444776427:399327856", when: new Date(1444776478214), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:58.215-0400 s20264| 2015-10-13T18:47:58.214-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:58.231-0400 d20268| 2015-10-13T18:47:58.231-0400 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: findAndModify query predicate didn't match any lock document [js_test:auth] 2015-10-13T18:47:58.253-0400 s20264| 2015-10-13T18:47:58.253-0400 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 561d8a1ec06b51335e5d689d [js_test:auth] 2015-10-13T18:47:58.253-0400 s20264| 2015-10-13T18:47:58.253-0400 D SHARDING [Balancer] *** start balancing round. 
waitForDelete: 1, secondaryThrottle: { w: 1, wtimeout: 0 } [js_test:auth] 2015-10-13T18:47:58.253-0400 s20264| 2015-10-13T18:47:58.253-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776478000|3, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:58.253-0400 s20264| 2015-10-13T18:47:58.253-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:47:58.254-0400 s20264| 2015-10-13T18:47:58.253-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776478000|3, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:58.254-0400 s20264| 2015-10-13T18:47:58.253-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:58.254-0400 s20264| 2015-10-13T18:47:58.254-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:28.254-0400 cmd:{ listDatabases: 1 } [js_test:auth] 2015-10-13T18:47:58.254-0400 s20264| 2015-10-13T18:47:58.254-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:58.255-0400 s20264| 2015-10-13T18:47:58.255-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20265 db:admin expDate:2015-10-13T18:48:28.255-0400 cmd:{ serverStatus: 1 } [js_test:auth] 2015-10-13T18:47:58.255-0400 s20264| 2015-10-13T18:47:58.255-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20265 [js_test:auth] 2015-10-13T18:47:58.256-0400 s20264| 2015-10-13T18:47:58.256-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:48:28.256-0400 cmd:{ listDatabases: 1 } [js_test:auth] 2015-10-13T18:47:58.256-0400 
s20264| 2015-10-13T18:47:58.256-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:58.256-0400 s20264| 2015-10-13T18:47:58.256-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:48:28.256-0400 cmd:{ serverStatus: 1 } [js_test:auth] 2015-10-13T18:47:58.257-0400 s20264| 2015-10-13T18:47:58.256-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:47:58.257-0400 s20264| 2015-10-13T18:47:58.257-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776478000|3, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:58.257-0400 s20264| 2015-10-13T18:47:58.257-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:47:58.258-0400 s20264| 2015-10-13T18:47:58.258-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "tags", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776478000|3, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:58.258-0400 s20264| 2015-10-13T18:47:58.258-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:47:58.259-0400 s20264| 2015-10-13T18:47:58.259-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776478000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:58.259-0400 s20264| 2015-10-13T18:47:58.259-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host 
ubuntu:20261 [js_test:auth] 2015-10-13T18:47:58.259-0400 s20264| 2015-10-13T18:47:58.259-0400 D SHARDING [Balancer] collection : test.foo [js_test:auth] 2015-10-13T18:47:58.259-0400 s20264| 2015-10-13T18:47:58.259-0400 D SHARDING [Balancer] donor : d1 chunks on 4 [js_test:auth] 2015-10-13T18:47:58.260-0400 s20264| 2015-10-13T18:47:58.259-0400 D SHARDING [Balancer] receiver : d2 chunks on 0 [js_test:auth] 2015-10-13T18:47:58.260-0400 s20264| 2015-10-13T18:47:58.259-0400 D SHARDING [Balancer] threshold : 2 [js_test:auth] 2015-10-13T18:47:58.260-0400 s20264| 2015-10-13T18:47:58.259-0400 I SHARDING [Balancer] ns: test.foo going to move { _id: "test.foo-x_MinKey", ns: "test.foo", min: { x: MinKey }, max: { x: 1.0 }, shard: "d1", version: Timestamp 1000|1, versionEpoch: ObjectId('561d8a03c06b51335e5d6897'), lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897') } from: d1 to: d2 tag [] [js_test:auth] 2015-10-13T18:47:58.260-0400 s20264| 2015-10-13T18:47:58.259-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776478000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:47:58.260-0400 s20264| 2015-10-13T18:47:58.259-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:47:58.260-0400 s20264| 2015-10-13T18:47:58.260-0400 I SHARDING [Balancer] moving chunk ns: test.foo moving ( ns: test.foo, shard: d1, lastmod: 1|1||561d8a03c06b51335e5d6897, min: { x: MinKey }, max: { x: 1.0 }) d1 -> d2 [js_test:auth] 2015-10-13T18:47:58.260-0400 s20264| 2015-10-13T18:47:58.260-0400 D NETWORK [Balancer] polling for status of connection to 127.0.1.1:20265, no events [js_test:auth] 2015-10-13T18:47:58.261-0400 d20265| 2015-10-13T18:47:58.261-0400 I NETWORK [conn21] Starting new replica set monitor for 
d2/ubuntu:20268,ubuntu:20269,ubuntu:20270
[js_test:auth] 2015-10-13T18:47:58.261-0400 d20265| 2015-10-13T18:47:58.261-0400 I SHARDING [conn21] moveChunk waiting for full cleanup after move
[js_test:auth] 2015-10-13T18:47:58.261-0400 d20265| 2015-10-13T18:47:58.261-0400 I SHARDING [conn21] received moveChunk request: { moveChunk: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", to: "d2/ubuntu:20268,ubuntu:20269,ubuntu:20270", fromShard: "d1", toShard: "d2", min: { x: MinKey }, max: { x: 1.0 }, maxChunkSizeBytes: 1048576, configdb: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", secondaryThrottle: false, waitForDelete: true, maxTimeMS: 0, shardVersion: [ Timestamp 1000|4, ObjectId('561d8a03c06b51335e5d6897') ], epoch: ObjectId('561d8a03c06b51335e5d6897') }
[js_test:auth] 2015-10-13T18:47:58.277-0400 d20265| 2015-10-13T18:47:58.277-0400 I SHARDING [conn21] distributed lock 'test.foo' acquired for 'migrating chunk [{ x: MinKey }, { x: 1.0 }) in test.foo', ts : 561d8a1ecf305caadba71ab5
[js_test:auth] 2015-10-13T18:47:58.277-0400 d20265| 2015-10-13T18:47:58.277-0400 I SHARDING [conn21] remotely refreshing metadata for test.foo based on current shard version 1|4||561d8a03c06b51335e5d6897, current metadata version is 1|4||561d8a03c06b51335e5d6897
[js_test:auth] 2015-10-13T18:47:58.278-0400 d20265| 2015-10-13T18:47:58.278-0400 I SHARDING [conn21] metadata of collection test.foo already up to date (shard version : 1|4||561d8a03c06b51335e5d6897, took 1ms)
[js_test:auth] 2015-10-13T18:47:58.278-0400 d20265| 2015-10-13T18:47:58.278-0400 I SHARDING [conn21] about to log metadata event: { _id: "ubuntu-2015-10-13T18:47:58.278-0400-561d8a1ecf305caadba71ab6", server: "ubuntu", clientAddr: "127.0.0.1:54469", time: new Date(1444776478278), what: "moveChunk.start", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 1.0 }, from: "d1", to: "d2" } }
[js_test:auth] 2015-10-13T18:47:58.300-0400 d20265| 2015-10-13T18:47:58.299-0400 I SHARDING [conn21] moveChunk request accepted at version 1|4||561d8a03c06b51335e5d6897
[js_test:auth] 2015-10-13T18:47:58.300-0400 d20265| 2015-10-13T18:47:58.300-0400 I SHARDING [conn21] moveChunk number of documents: 1
[js_test:auth] 2015-10-13T18:47:58.300-0400 d20270| 2015-10-13T18:47:58.300-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51600 #8 (5 connections now open)
[js_test:auth] 2015-10-13T18:47:58.301-0400 s20264| 2015-10-13T18:47:58.301-0400 D ASIO [conn1] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "locks", filter: { _id: "balancer" }, limit: 1, singleBatch: true }
[js_test:auth] 2015-10-13T18:47:58.301-0400 s20264| 2015-10-13T18:47:58.301-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:47:58.303-0400 s20264| 2015-10-13T18:47:58.302-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } }
[js_test:auth] 2015-10-13T18:47:58.303-0400 s20264| 2015-10-13T18:47:58.302-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262]
[js_test:auth] 2015-10-13T18:47:58.303-0400 s20264| 2015-10-13T18:47:58.302-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:47:58.303-0400 s20264| 2015-10-13T18:47:58.302-0400 D SHARDING [conn1] calling onCreate auth for auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262
[js_test:auth] 2015-10-13T18:47:58.303-0400 s20264| 2015-10-13T18:47:58.303-0400 D NETWORK [conn1] creating new connection to:ubuntu:20260
[js_test:auth] 2015-10-13T18:47:58.303-0400 s20264| 2015-10-13T18:47:58.303-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:auth] 2015-10-13T18:47:58.304-0400 s20264| 2015-10-13T18:47:58.303-0400 D NETWORK [conn1] connected to server ubuntu:20260 (127.0.1.1)
[js_test:auth] 2015-10-13T18:47:58.304-0400 c20260| 2015-10-13T18:47:58.303-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52944 #26 (16 connections now open)
[js_test:auth] 2015-10-13T18:47:58.304-0400 s20264| 2015-10-13T18:47:58.303-0400 D NETWORK [conn1] connected connection!
[js_test:auth] 2015-10-13T18:47:58.316-0400 d20270| 2015-10-13T18:47:58.316-0400 I ACCESS [conn8] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:58.316-0400 d20268| 2015-10-13T18:47:58.316-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55281 #18 (11 connections now open)
[js_test:auth] 2015-10-13T18:47:58.322-0400 c20260| 2015-10-13T18:47:58.321-0400 I ACCESS [conn26] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:58.322-0400 s20264| 2015-10-13T18:47:58.322-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:47:58.322-0400 s20264| 2015-10-13T18:47:58.322-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:47:58.323-0400 s20264| 2015-10-13T18:47:58.322-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:47:58.323-0400 s20264| 2015-10-13T18:47:58.322-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[js_test:auth] 2015-10-13T18:47:58.323-0400 s20264| 2015-10-13T18:47:58.322-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } }
[js_test:auth] 2015-10-13T18:47:58.323-0400 s20264| 2015-10-13T18:47:58.322-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262]
[js_test:auth] 2015-10-13T18:47:58.324-0400 s20264| 2015-10-13T18:47:58.322-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:47:58.324-0400 s20264| 2015-10-13T18:47:58.322-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:47:58.324-0400 s20264| 2015-10-13T18:47:58.322-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:47:58.324-0400 s20264| 2015-10-13T18:47:58.322-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:47:58.325-0400 s20264| 2015-10-13T18:47:58.323-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[js_test:auth] 2015-10-13T18:47:58.325-0400 s20264| 2015-10-13T18:47:58.323-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.foo" } }
[js_test:auth] 2015-10-13T18:47:58.325-0400 s20264| 2015-10-13T18:47:58.323-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262]
[js_test:auth] 2015-10-13T18:47:58.325-0400 s20264| 2015-10-13T18:47:58.323-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:47:58.325-0400 s20264| 2015-10-13T18:47:58.323-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:47:58.326-0400 s20264| 2015-10-13T18:47:58.323-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:47:58.326-0400 s20264| 2015-10-13T18:47:58.323-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:47:58.326-0400 s20264| 2015-10-13T18:47:58.323-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[js_test:auth] 2015-10-13T18:47:58.326-0400 chunks: 4 0 4
[js_test:auth] 2015-10-13T18:47:58.332-0400 d20268| 2015-10-13T18:47:58.332-0400 I ACCESS [conn18] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:58.333-0400 d20268| 2015-10-13T18:47:58.333-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55288 #19 (12 connections now open)
[js_test:auth] 2015-10-13T18:47:58.348-0400 d20268| 2015-10-13T18:47:58.348-0400 I ACCESS [conn19] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:58.349-0400 d20268| 2015-10-13T18:47:58.348-0400 I SHARDING [conn19] remotely refreshing metadata for test.foo, current shard version is 0|0||000000000000000000000000, current metadata version is 0|0||000000000000000000000000
[js_test:auth] 2015-10-13T18:47:58.350-0400 c20261| 2015-10-13T18:47:58.349-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50081 #15 (10 connections now open)
[js_test:auth] 2015-10-13T18:47:58.370-0400 c20261| 2015-10-13T18:47:58.370-0400 I ACCESS [conn15] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:58.370-0400 d20268| 2015-10-13T18:47:58.370-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20261
[js_test:auth] 2015-10-13T18:47:58.371-0400 d20268| 2015-10-13T18:47:58.370-0400 I SHARDING [conn19] collection test.foo was previously unsharded, new metadata loaded with shard version 0|0||561d8a03c06b51335e5d6897
[js_test:auth] 2015-10-13T18:47:58.371-0400 d20268| 2015-10-13T18:47:58.370-0400 I SHARDING [conn19] collection version was loaded at version 1|4||561d8a03c06b51335e5d6897, took 21ms
[js_test:auth] 2015-10-13T18:47:58.371-0400 d20268| 2015-10-13T18:47:58.370-0400 I SHARDING [migrateThread] starting receiving-end of migration of chunk { x: MinKey } -> { x: 1.0 } for collection test.foo from d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 at epoch 561d8a03c06b51335e5d6897
[js_test:auth] 2015-10-13T18:47:58.372-0400 d20267| 2015-10-13T18:47:58.371-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:49205 #9 (6 connections now open)
[js_test:auth] 2015-10-13T18:47:58.372-0400 d20268| 2015-10-13T18:47:58.372-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55299 #20 (13 connections now open)
[js_test:auth] 2015-10-13T18:47:58.398-0400 d20267| 2015-10-13T18:47:58.398-0400 I ACCESS [conn9] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:58.398-0400 d20265| 2015-10-13T18:47:58.398-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55079 #22 (15 connections now open)
[js_test:auth] 2015-10-13T18:47:58.401-0400 d20268| 2015-10-13T18:47:58.401-0400 I ACCESS [conn20] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:58.401-0400 d20265| 2015-10-13T18:47:58.401-0400 I SHARDING [conn21] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", min: { x: MinKey }, max: { x: 1.0 }, shardKeyPattern: { x: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[js_test:auth] 2015-10-13T18:47:58.403-0400 d20265| 2015-10-13T18:47:58.403-0400 I SHARDING [conn21] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", min: { x: MinKey }, max: { x: 1.0 }, shardKeyPattern: { x: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[js_test:auth] 2015-10-13T18:47:58.408-0400 d20265| 2015-10-13T18:47:58.407-0400 I SHARDING [conn21] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", min: { x: MinKey }, max: { x: 1.0 }, shardKeyPattern: { x: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[js_test:auth] 2015-10-13T18:47:58.414-0400 d20265| 2015-10-13T18:47:58.414-0400 I ACCESS [conn22] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:58.414-0400 d20265| 2015-10-13T18:47:58.414-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55080 #23 (16 connections now open)
[js_test:auth] 2015-10-13T18:47:58.416-0400 d20265| 2015-10-13T18:47:58.416-0400 I SHARDING [conn21] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", min: { x: MinKey }, max: { x: 1.0 }, shardKeyPattern: { x: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[js_test:auth] 2015-10-13T18:47:58.431-0400 d20265| 2015-10-13T18:47:58.431-0400 I ACCESS [conn23] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:47:58.432-0400 d20265| 2015-10-13T18:47:58.432-0400 I SHARDING [conn21] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", min: { x: MinKey }, max: { x: 1.0 }, shardKeyPattern: { x: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[js_test:auth] 2015-10-13T18:47:58.464-0400 d20265| 2015-10-13T18:47:58.464-0400 I SHARDING [conn21] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", min: { x: MinKey }, max: { x: 1.0 }, shardKeyPattern: { x: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[js_test:auth] 2015-10-13T18:47:58.529-0400 d20265| 2015-10-13T18:47:58.528-0400 I SHARDING [conn21] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", min: { x: MinKey }, max: { x: 1.0 }, shardKeyPattern: { x: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[js_test:auth] 2015-10-13T18:47:58.548-0400 d20268| 2015-10-13T18:47:58.548-0400 I INDEX [migrateThread] build index on: test.foo properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "test.foo" }
[js_test:auth] 2015-10-13T18:47:58.548-0400 d20268| 2015-10-13T18:47:58.548-0400 I INDEX [migrateThread] building index using bulk method
[js_test:auth] 2015-10-13T18:47:58.611-0400 d20268| 2015-10-13T18:47:58.611-0400 I INDEX [migrateThread] build index on: test.foo properties: { v: 1, key: { x: 1.0 }, name: "x_1", ns: "test.foo" }
[js_test:auth] 2015-10-13T18:47:58.611-0400 d20268| 2015-10-13T18:47:58.611-0400 I INDEX [migrateThread] building index using bulk method
[js_test:auth] 2015-10-13T18:47:58.626-0400 d20268| 2015-10-13T18:47:58.625-0400 I INDEX [migrateThread] build index done. scanned 0 total records. 0 secs
[js_test:auth] 2015-10-13T18:47:58.626-0400 d20268| 2015-10-13T18:47:58.626-0400 I SHARDING [migrateThread] Deleter starting delete for: test.foo from { x: MinKey } -> { x: 1.0 }, with opId: 362
[js_test:auth] 2015-10-13T18:47:58.627-0400 d20268| 2015-10-13T18:47:58.626-0400 I SHARDING [migrateThread] rangeDeleter deleted 0 documents for test.foo from { x: MinKey } -> { x: 1.0 }
[js_test:auth] 2015-10-13T18:47:58.657-0400 d20265| 2015-10-13T18:47:58.657-0400 I SHARDING [conn21] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", min: { x: MinKey }, max: { x: 1.0 }, shardKeyPattern: { x: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[js_test:auth] 2015-10-13T18:47:58.721-0400 d20269| 2015-10-13T18:47:58.721-0400 I INDEX [repl writer worker 3] build index on: test.foo properties: { v: 1, key: { x: 1.0 }, name: "x_1", ns: "test.foo" }
[js_test:auth] 2015-10-13T18:47:58.721-0400 d20270| 2015-10-13T18:47:58.721-0400 I INDEX [repl writer worker 3] build index on: test.foo properties: { v: 1, key: { x: 1.0 }, name: "x_1", ns: "test.foo" }
[js_test:auth] 2015-10-13T18:47:58.721-0400 d20269| 2015-10-13T18:47:58.721-0400 I INDEX [repl writer worker 3] building index using bulk method
[js_test:auth] 2015-10-13T18:47:58.721-0400 d20270| 2015-10-13T18:47:58.721-0400 I INDEX [repl writer worker 3] building index using bulk method
[js_test:auth] 2015-10-13T18:47:58.734-0400 d20270| 2015-10-13T18:47:58.734-0400 I INDEX [repl writer worker 3] build index done. scanned 0 total records. 0 secs
[js_test:auth] 2015-10-13T18:47:58.734-0400 d20269| 2015-10-13T18:47:58.734-0400 I INDEX [repl writer worker 3] build index done. scanned 0 total records. 0 secs
[js_test:auth] 2015-10-13T18:47:58.735-0400 d20268| 2015-10-13T18:47:58.735-0400 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section
[js_test:auth] 2015-10-13T18:47:58.913-0400 d20265| 2015-10-13T18:47:58.913-0400 I SHARDING [conn21] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", min: { x: MinKey }, max: { x: 1.0 }, shardKeyPattern: { x: 1.0 }, state: "catchup", counts: { cloned: 1, clonedBytes: 93, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[js_test:auth] 2015-10-13T18:47:59.426-0400 d20265| 2015-10-13T18:47:59.425-0400 I SHARDING [conn21] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", min: { x: MinKey }, max: { x: 1.0 }, shardKeyPattern: { x: 1.0 }, state: "catchup", counts: { cloned: 1, clonedBytes: 93, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[js_test:auth] 2015-10-13T18:47:59.735-0400 d20268| 2015-10-13T18:47:59.735-0400 I SHARDING [migrateThread] Waiting for replication to catch up before entering critical section
[js_test:auth] 2015-10-13T18:47:59.735-0400 d20268| 2015-10-13T18:47:59.735-0400 I SHARDING [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { x: MinKey } -> { x: 1.0 }
[js_test:auth] 2015-10-13T18:48:00.450-0400 d20265| 2015-10-13T18:48:00.450-0400 I SHARDING [conn21] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", min: { x: MinKey }, max: { x: 1.0 }, shardKeyPattern: { x: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 93, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[js_test:auth] 2015-10-13T18:48:00.450-0400 d20265| 2015-10-13T18:48:00.450-0400 I SHARDING [conn21] About to check if it is safe to enter critical section
[js_test:auth] 2015-10-13T18:48:00.450-0400 d20265| 2015-10-13T18:48:00.450-0400 I SHARDING [conn21] About to enter migrate critical section
[js_test:auth] 2015-10-13T18:48:03.324-0400 s20264| 2015-10-13T18:48:03.324-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } }
[js_test:auth] 2015-10-13T18:48:03.324-0400 s20264| 2015-10-13T18:48:03.324-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262]
[js_test:auth] 2015-10-13T18:48:03.325-0400 s20264| 2015-10-13T18:48:03.324-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:03.325-0400 s20264| 2015-10-13T18:48:03.324-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20260, no events
[js_test:auth] 2015-10-13T18:48:03.325-0400 s20264| 2015-10-13T18:48:03.324-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:03.325-0400 s20264| 2015-10-13T18:48:03.324-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:48:03.325-0400 s20264| 2015-10-13T18:48:03.324-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:03.325-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[js_test:auth] 2015-10-13T18:48:03.325-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } }
[js_test:auth] 2015-10-13T18:48:03.325-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262]
[js_test:auth] 2015-10-13T18:48:03.326-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:03.326-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:03.326-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:48:03.326-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:03.326-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[js_test:auth] 2015-10-13T18:48:03.326-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.foo" } }
[js_test:auth] 2015-10-13T18:48:03.326-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262]
[js_test:auth] 2015-10-13T18:48:03.327-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:03.327-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:03.327-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:48:03.327-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:03.327-0400 s20264| 2015-10-13T18:48:03.325-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[js_test:auth] 2015-10-13T18:48:03.327-0400 chunks: 4 0 4
[js_test:auth] 2015-10-13T18:48:05.570-0400 d20267| 2015-10-13T18:48:05.570-0400 I REPL [ReplicationExecutor] could not find member to sync from
[js_test:auth] 2015-10-13T18:48:05.570-0400 d20266| 2015-10-13T18:48:05.570-0400 I REPL [ReplicationExecutor] could not find member to sync from
[js_test:auth] 2015-10-13T18:48:05.570-0400 d20265| 2015-10-13T18:48:05.570-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55645 #24 (17 connections now open)
[js_test:auth] 2015-10-13T18:48:05.570-0400 d20265| 2015-10-13T18:48:05.570-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55644 #25 (18 connections now open)
[js_test:auth] 2015-10-13T18:48:05.884-0400 d20265| 2015-10-13T18:48:05.884-0400 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() timeout 127.0.1.1:20265
[js_test:auth] 2015-10-13T18:48:05.884-0400 d20265| 2015-10-13T18:48:05.884-0400 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 127.0.1.1:20265 error: 9001 socket exception [RECV_TIMEOUT] server [127.0.1.1:20265]
[js_test:auth] 2015-10-13T18:48:05.885-0400 d20265| 2015-10-13T18:48:05.884-0400 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1444776460844102 microSec, clearing pool for ubuntu:20265 of 0 connections
[js_test:auth] 2015-10-13T18:48:05.885-0400 d20265| 2015-10-13T18:48:05.885-0400 W NETWORK [ReplicaSetMonitorWatcher] No primary detected for set d1
[js_test:auth] 2015-10-13T18:48:05.886-0400 d20269| 2015-10-13T18:48:05.886-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:49849 #8 (5 connections now open)
[js_test:auth] 2015-10-13T18:48:05.902-0400 d20269| 2015-10-13T18:48:05.902-0400 I ACCESS [conn8] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:48:06.049-0400 d20266| 2015-10-13T18:48:06.048-0400 I REPL [ReplicationExecutor] conducting a dry run election to see if we could be elected
[js_test:auth] 2015-10-13T18:48:06.049-0400 d20265| 2015-10-13T18:48:06.049-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55692 #26 (19 connections now open)
[js_test:auth] 2015-10-13T18:48:06.049-0400 d20266| 2015-10-13T18:48:06.049-0400 I REPL [ReplicationExecutor] dry election run succeeded, running for election
[js_test:auth] 2015-10-13T18:48:06.050-0400 d20265| 2015-10-13T18:48:06.050-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55693 #27 (20 connections now open)
[js_test:auth] 2015-10-13T18:48:06.050-0400 d20266| 2015-10-13T18:48:06.050-0400 I REPL [ReplicationExecutor] election succeeded, assuming primary role in term 2
[js_test:auth] 2015-10-13T18:48:06.050-0400 d20266| 2015-10-13T18:48:06.050-0400 I REPL [ReplicationExecutor] transition to PRIMARY
[js_test:auth] 2015-10-13T18:48:06.050-0400 d20265| 2015-10-13T18:48:06.050-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55694 #28 (21 connections now open)
[js_test:auth] 2015-10-13T18:48:06.570-0400 d20266| 2015-10-13T18:48:06.570-0400 I REPL [rsSync] transition to primary complete; database writes are now permitted
[js_test:auth] 2015-10-13T18:48:06.601-0400 d20265| 2015-10-13T18:48:06.601-0400 I REPL [ReplicationExecutor] can't see a majority of the set, relinquishing primary
[js_test:auth] 2015-10-13T18:48:06.601-0400 d20265| 2015-10-13T18:48:06.601-0400 I REPL [ReplicationExecutor] Stepping down from primary in response to heartbeat
[js_test:auth] 2015-10-13T18:48:07.563-0400 d20265| 2015-10-13T18:48:07.563-0400 I REPL [ReplicationExecutor] stepping down from primary, because a new term has begun
[js_test:auth] 2015-10-13T18:48:07.563-0400 d20265| 2015-10-13T18:48:07.563-0400 I REPL [ReplicationExecutor] Member ubuntu:20267 is now in state SECONDARY
[js_test:auth] 2015-10-13T18:48:07.570-0400 d20265| 2015-10-13T18:48:07.570-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55814 #29 (22 connections now open)
[js_test:auth] 2015-10-13T18:48:07.571-0400 d20267| 2015-10-13T18:48:07.570-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state PRIMARY
[js_test:auth] 2015-10-13T18:48:07.601-0400 d20265| 2015-10-13T18:48:07.600-0400 I REPL [ReplicationExecutor] Member ubuntu:20266 is now in state PRIMARY
[js_test:auth] 2015-10-13T18:48:07.835-0400 s20264| 2015-10-13T18:48:07.835-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d1
[js_test:auth] 2015-10-13T18:48:07.835-0400 s20264| 2015-10-13T18:48:07.835-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d1
[js_test:auth] 2015-10-13T18:48:07.835-0400 s20264| 2015-10-13T18:48:07.835-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20265, no events
[js_test:auth] 2015-10-13T18:48:07.880-0400 s20264| 2015-10-13T18:48:07.880-0400 D NETWORK [PeriodicTaskRunner] polling for status of connection to 127.0.1.1:20265, no events
[js_test:auth] 2015-10-13T18:48:07.880-0400 s20264| 2015-10-13T18:48:07.880-0400 D NETWORK [PeriodicTaskRunner] polling for status of connection to 127.0.1.1:20260, no events
[js_test:auth] 2015-10-13T18:48:07.881-0400 s20264| 2015-10-13T18:48:07.880-0400 D NETWORK [PeriodicTaskRunner] polling for status of connection to 127.0.1.1:20261, no events
[js_test:auth] 2015-10-13T18:48:07.881-0400 s20264| 2015-10-13T18:48:07.880-0400 D NETWORK [PeriodicTaskRunner] polling for status of connection to 127.0.1.1:20262, no events
[js_test:auth] 2015-10-13T18:48:07.881-0400 s20264| 2015-10-13T18:48:07.880-0400 D NETWORK [PeriodicTaskRunner] polling for status of connection to 127.0.1.1:20266, no events
[js_test:auth] 2015-10-13T18:48:07.881-0400 s20264| 2015-10-13T18:48:07.880-0400 D NETWORK [PeriodicTaskRunner] polling for status of connection to 127.0.1.1:20267, no events
[js_test:auth] 2015-10-13T18:48:07.881-0400 s20264| 2015-10-13T18:48:07.880-0400 D NETWORK [PeriodicTaskRunner] polling for status of connection to 127.0.1.1:20268, no events
[js_test:auth] 2015-10-13T18:48:07.882-0400 s20264| 2015-10-13T18:48:07.880-0400 D NETWORK [PeriodicTaskRunner] polling for status of connection to 127.0.1.1:20269, no events
[js_test:auth] 2015-10-13T18:48:07.882-0400 s20264| 2015-10-13T18:48:07.880-0400 D NETWORK [PeriodicTaskRunner] polling for status of connection to 127.0.1.1:20270, no events
[js_test:auth] 2015-10-13T18:48:07.882-0400 s20264| 2015-10-13T18:48:07.880-0400 D - [PeriodicTaskRunner] cleaning up unused lock buckets of the global lock manager
[js_test:auth] 2015-10-13T18:48:07.882-0400 s20264| 2015-10-13T18:48:07.880-0400 D NETWORK [PeriodicTaskRunner] polling for status of connection to 127.0.1.1:20265, no events
[js_test:auth] 2015-10-13T18:48:07.882-0400 s20264| 2015-10-13T18:48:07.880-0400 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:48:37.880-0400 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:07.882-0400 s20264| 2015-10-13T18:48:07.880-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:48:07.896-0400 s20264| 2015-10-13T18:48:07.895-0400 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:37.895-0400 cmd:{ findAndModify: "lockpings", query: { _id: "ubuntu:20264:1444776427:399327856" }, update: { $set: { ping: new Date(1444776487895) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:07.896-0400 s20264| 2015-10-13T18:48:07.895-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:48:08.138-0400 d20266| 2015-10-13T18:48:08.138-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:37761 #11 (7 connections now open)
[js_test:auth] 2015-10-13T18:48:08.157-0400 d20266| 2015-10-13T18:48:08.157-0400 I ACCESS [conn11] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:48:08.157-0400 d20270| 2015-10-13T18:48:08.157-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:52391 #9 (6 connections now open)
[js_test:auth] 2015-10-13T18:48:08.174-0400 d20270| 2015-10-13T18:48:08.174-0400 I ACCESS [conn9] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:48:08.174-0400 d20268| 2015-10-13T18:48:08.174-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:56074 #21 (14 connections now open)
[js_test:auth] 2015-10-13T18:48:08.190-0400 d20268| 2015-10-13T18:48:08.190-0400 I ACCESS [conn21] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:48:08.191-0400 d20269| 2015-10-13T18:48:08.191-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:50028 #9 (6 connections now open)
[js_test:auth] 2015-10-13T18:48:08.207-0400 d20269| 2015-10-13T18:48:08.207-0400 I ACCESS [conn9] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:48:08.326-0400 s20264| 2015-10-13T18:48:08.326-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } }
[js_test:auth] 2015-10-13T18:48:08.326-0400 s20264| 2015-10-13T18:48:08.326-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262]
[js_test:auth] 2015-10-13T18:48:08.327-0400 s20264| 2015-10-13T18:48:08.326-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:08.327-0400 s20264| 2015-10-13T18:48:08.326-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20260, no events
[js_test:auth] 2015-10-13T18:48:08.327-0400 s20264| 2015-10-13T18:48:08.326-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:08.327-0400 s20264| 2015-10-13T18:48:08.326-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:48:08.327-0400 s20264| 2015-10-13T18:48:08.326-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:08.327-0400 s20264| 2015-10-13T18:48:08.326-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') }
}, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:08.327-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } } [js_test:auth] 2015-10-13T18:48:08.327-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:08.328-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:08.328-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:08.328-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:08.328-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:08.328-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: 
"config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:08.328-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.foo" } } [js_test:auth] 2015-10-13T18:48:08.328-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:08.328-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:08.329-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:08.329-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:08.329-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } 
[js_test:auth] 2015-10-13T18:48:08.329-0400 s20264| 2015-10-13T18:48:08.327-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:08.329-0400 chunks: 4 0 4 [js_test:auth] 2015-10-13T18:48:08.570-0400 d20267| 2015-10-13T18:48:08.570-0400 I REPL [ReplicationExecutor] syncing from: ubuntu:20266 [js_test:auth] 2015-10-13T18:48:08.571-0400 d20266| 2015-10-13T18:48:08.570-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:37794 #12 (8 connections now open) [js_test:auth] 2015-10-13T18:48:08.589-0400 d20266| 2015-10-13T18:48:08.589-0400 I ACCESS [conn12] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:48:08.590-0400 d20266| 2015-10-13T18:48:08.590-0400 I NETWORK [conn12] end connection 127.0.0.1:37794 (7 connections now open) [js_test:auth] 2015-10-13T18:48:08.590-0400 d20266| 2015-10-13T18:48:08.590-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:37796 #13 (8 connections now open) [js_test:auth] 2015-10-13T18:48:08.606-0400 d20266| 2015-10-13T18:48:08.606-0400 I ACCESS [conn13] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:48:08.606-0400 d20267| 2015-10-13T18:48:08.606-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20266 [js_test:auth] 2015-10-13T18:48:09.565-0400 d20265| 2015-10-13T18:48:09.565-0400 I NETWORK [conn18] end connection 127.0.0.1:53503 (21 connections now open) [js_test:auth] 2015-10-13T18:48:09.565-0400 d20265| 2015-10-13T18:48:09.565-0400 I NETWORK [conn3] end connection 127.0.0.1:52976 (20 connections now open) [js_test:auth] 2015-10-13T18:48:09.565-0400 
d20265| 2015-10-13T18:48:09.565-0400 I NETWORK [conn4] end connection 127.0.0.1:52977 (20 connections now open) [js_test:auth] 2015-10-13T18:48:09.565-0400 d20265| 2015-10-13T18:48:09.565-0400 I NETWORK [conn20] end connection 127.0.0.1:53991 (20 connections now open) [js_test:auth] 2015-10-13T18:48:09.565-0400 d20267| 2015-10-13T18:48:09.565-0400 I REPL [SyncSourceFeedback] setting syncSourceFeedback to ubuntu:20266 [js_test:auth] 2015-10-13T18:48:09.565-0400 d20266| 2015-10-13T18:48:09.565-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:37868 #14 (9 connections now open) [js_test:auth] 2015-10-13T18:48:09.566-0400 s20264| 2015-10-13T18:48:09.565-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: auth-configRS [js_test:auth] 2015-10-13T18:48:09.566-0400 s20264| 2015-10-13T18:48:09.565-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set auth-configRS [js_test:auth] 2015-10-13T18:48:09.566-0400 c20262| 2015-10-13T18:48:09.566-0400 I NETWORK [conn12] end connection 127.0.0.1:48390 (9 connections now open) [js_test:auth] 2015-10-13T18:48:09.566-0400 c20260| 2015-10-13T18:48:09.566-0400 I NETWORK [conn22] end connection 127.0.0.1:51273 (15 connections now open) [js_test:auth] 2015-10-13T18:48:09.566-0400 d20269| 2015-10-13T18:48:09.566-0400 I NETWORK [conn8] end connection 127.0.0.1:49849 (5 connections now open) [js_test:auth] 2015-10-13T18:48:09.566-0400 d20268| 2015-10-13T18:48:09.566-0400 I NETWORK [conn20] end connection 127.0.0.1:55299 (13 connections now open) [js_test:auth] 2015-10-13T18:48:09.566-0400 d20266| 2015-10-13T18:48:09.566-0400 I NETWORK [conn10] end connection 127.0.0.1:35907 (8 connections now open) [js_test:auth] 2015-10-13T18:48:09.567-0400 s20264| 2015-10-13T18:48:09.566-0400 D NETWORK [Balancer] SocketException: remote: 127.0.1.1:20265 error: 9001 socket exception [CLOSED] server [127.0.1.1:20265] [js_test:auth] 2015-10-13T18:48:09.567-0400 s20264| 2015-10-13T18:48:09.566-0400 D - 
[Balancer] User Assertion: 6:network error while attempting to run command 'moveChunk' on host 'ubuntu:20265' [js_test:auth] 2015-10-13T18:48:09.567-0400 d20268| 2015-10-13T18:48:09.566-0400 I NETWORK [conn19] end connection 127.0.0.1:55288 (12 connections now open) [js_test:auth] 2015-10-13T18:48:09.567-0400 s20264| 2015-10-13T18:48:09.566-0400 W SHARDING [Balancer] could not move chunk min: { x: MinKey } max: { x: 1.0 }, continuing balancing round :: caused by :: 6 network error while attempting to run command 'moveChunk' on host 'ubuntu:20265' [js_test:auth] 2015-10-13T18:48:09.567-0400 d20268| 2015-10-13T18:48:09.566-0400 I NETWORK [conn18] end connection 127.0.0.1:55281 (11 connections now open) [js_test:auth] 2015-10-13T18:48:09.567-0400 s20264| 2015-10-13T18:48:09.567-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:39.567-0400 cmd:{ create: "config.actionlog", capped: true, size: 2097152, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:09.568-0400 d20270| 2015-10-13T18:48:09.567-0400 I NETWORK [conn8] end connection 127.0.0.1:51600 (5 connections now open) [js_test:auth] 2015-10-13T18:48:09.568-0400 c20261| 2015-10-13T18:48:09.567-0400 I NETWORK [conn12] end connection 127.0.0.1:48399 (9 connections now open) [js_test:auth] 2015-10-13T18:48:09.568-0400 d20265| 2015-10-13T18:48:09.565-0400 I REPL [replExecDBWorker-0] transition to SECONDARY [js_test:auth] 2015-10-13T18:48:09.568-0400 d20265| 2015-10-13T18:48:09.565-0400 I NETWORK [conn15] end connection 127.0.0.1:53441 (17 connections now open) [js_test:auth] 2015-10-13T18:48:09.568-0400 d20265| 2015-10-13T18:48:09.565-0400 I NETWORK [conn25] end connection 127.0.0.1:55644 (17 connections now open) [js_test:auth] 2015-10-13T18:48:09.568-0400 d20265| 2015-10-13T18:48:09.565-0400 I NETWORK [conn9] end connection 127.0.0.1:53370 (17 connections now open) [js_test:auth] 2015-10-13T18:48:09.568-0400 d20265| 2015-10-13T18:48:09.565-0400 I NETWORK 
[conn1] end connection 127.0.0.1:54603 (17 connections now open) [js_test:auth] 2015-10-13T18:48:09.569-0400 d20265| 2015-10-13T18:48:09.565-0400 I NETWORK [conn13] end connection 127.0.0.1:53404 (17 connections now open) [js_test:auth] 2015-10-13T18:48:09.569-0400 d20265| 2015-10-13T18:48:09.565-0400 I NETWORK [conn24] end connection 127.0.0.1:55645 (17 connections now open) [js_test:auth] 2015-10-13T18:48:09.569-0400 d20265| 2015-10-13T18:48:09.566-0400 I NETWORK [conn27] end connection 127.0.0.1:55693 (17 connections now open) [js_test:auth] 2015-10-13T18:48:09.569-0400 d20265| 2015-10-13T18:48:09.566-0400 I NETWORK [conn29] end connection 127.0.0.1:55814 (17 connections now open) [js_test:auth] 2015-10-13T18:48:09.569-0400 d20265| 2015-10-13T18:48:09.566-0400 I - [conn21] Assertion failure it != _collMetadata.end() src/mongo/db/s/sharding_state.cpp 291 [js_test:auth] 2015-10-13T18:48:09.569-0400 d20265| 2015-10-13T18:48:09.566-0400 I NETWORK [conn11] end connection 127.0.0.1:53384 (16 connections now open) [js_test:auth] 2015-10-13T18:48:09.570-0400 d20265| 2015-10-13T18:48:09.566-0400 I NETWORK [conn16] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [127.0.0.1:53442] [js_test:auth] 2015-10-13T18:48:09.570-0400 d20265| 2015-10-13T18:48:09.566-0400 I NETWORK [conn10] end connection 127.0.0.1:53371 (14 connections now open) [js_test:auth] 2015-10-13T18:48:09.570-0400 d20265| 2015-10-13T18:48:09.566-0400 I NETWORK [conn22] end connection 127.0.0.1:55079 (13 connections now open) [js_test:auth] 2015-10-13T18:48:09.570-0400 d20265| 2015-10-13T18:48:09.566-0400 I NETWORK [conn12] end connection 127.0.0.1:53400 (9 connections now open) [js_test:auth] 2015-10-13T18:48:09.570-0400 d20265| 2015-10-13T18:48:09.566-0400 I NETWORK [conn19] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [127.0.0.1:53504] [js_test:auth] 2015-10-13T18:48:09.570-0400 d20265| 
2015-10-13T18:48:09.567-0400 I NETWORK [conn28] end connection 127.0.0.1:55694 (3 connections now open) [js_test:auth] 2015-10-13T18:48:09.570-0400 s20264| 2015-10-13T18:48:09.567-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:09.571-0400 d20265| 2015-10-13T18:48:09.567-0400 I NETWORK [conn26] end connection 127.0.0.1:55692 (2 connections now open) [js_test:auth] 2015-10-13T18:48:09.571-0400 d20267| 2015-10-13T18:48:09.567-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.571-0400 d20267| 2015-10-13T18:48:09.567-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.571-0400 d20267| 2015-10-13T18:48:09.568-0400 I NETWORK [conn8] end connection 127.0.0.1:48118 (5 connections now open) [js_test:auth] 2015-10-13T18:48:09.571-0400 s20264| 2015-10-13T18:48:09.568-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d2 [js_test:auth] 2015-10-13T18:48:09.571-0400 s20264| 2015-10-13T18:48:09.568-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d2 [js_test:auth] 2015-10-13T18:48:09.571-0400 d20267| 2015-10-13T18:48:09.571-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.572-0400 d20265| 2015-10-13T18:48:09.572-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55954 #30 (3 connections now open) [js_test:auth] 2015-10-13T18:48:09.573-0400 d20266| 2015-10-13T18:48:09.573-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.573-0400 d20266| 2015-10-13T18:48:09.573-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.573-0400 d20266| 
2015-10-13T18:48:09.573-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.574-0400 d20266| 2015-10-13T18:48:09.574-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.574-0400 d20266| 2015-10-13T18:48:09.574-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.574-0400 d20265| 2015-10-13T18:48:09.574-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:55955 #31 (4 connections now open) [js_test:auth] 2015-10-13T18:48:09.576-0400 d20265| 2015-10-13T18:48:09.576-0400 I CONTROL [conn21] [js_test:auth] 2015-10-13T18:48:09.576-0400 d20265| 0x12f6b82 0x12978d4 0x1284634 0xf8b56b 0xf79a80 0xf83ea8 0xb6c42f 0xb6d2df 0xac5bde 0xc77d4e 0x9679e5 0x12a405d 0x7f59ab515182 0x7f59ab24247d [js_test:auth] 2015-10-13T18:48:09.576-0400 d20265| ----- BEGIN BACKTRACE ----- [js_test:auth] 2015-10-13T18:48:09.577-0400 d20265| {"backtrace":[{"b":"400000","o":"EF6B82"},{"b":"400000","o":"E978D4"},{"b":"400000","o":"E84634"},{"b":"400000","o":"B8B56B"},{"b":"400000","o":"B79A80"},{"b":"400000","o":"B83EA8"},{"b":"400000","o":"76C42F"},{"b":"400000","o":"76D2DF"},{"b":"400000","o":"6C5BDE"},{"b":"400000","o":"877D4E"},{"b":"400000","o":"5679E5"},{"b":"400000","o":"EA405D"},{"b":"7F59AB50D000","o":"8182"},{"b":"7F59AB148000","o":"FA47D"}],"processInfo":{ "mongodbVersion" : "3.1.10-pre-", "gitVersion" : "9c9100212f7f8f3afb5f240d405f853894c376f1", "compiledModules" : [ "subscription" ], "uname" : { "sysname" : "Linux", "release" : "3.13.0-58-generic", "version" : "#97-Ubuntu SMP Wed Jul 8 02:56:15 UTC 2015", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000", "buildId" : "94E34C403A93DC7FF58BE81A5D2678B6A7DCDB0C" }, { "b" : "7FFFF7F70000", "elfType" : 3, "buildId" : "083C85C3A0476C2B9FEDD2C9D02100E02ABCA8EB" }, { 
"b" : "7F59AD131000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "C1AE4CB7195D337A77A3C689051DABAA3980CA0C" }, { "b" : "7F59ACEC8000", "path" : "/usr/lib/x86_64-linux-gnu/libnetsnmpagent.so.30", "elfType" : 3, "buildId" : "96C16FDBBA28C6635657AFDBAF0F5A1090072474" }, { "b" : "7F59ACBEE000", "path" : "/usr/lib/x86_64-linux-gnu/libnetsnmp.so.30", "elfType" : 3, "buildId" : "61AE85EF50A072D671D55B4776383F8365A3FAA7" }, { "b" : "7F59AC813000", "path" : "/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "F000D29917E9B6E94A35A8F02E5C62846E5916BC" }, { "b" : "7F59AC5F8000", "path" : "/usr/lib/x86_64-linux-gnu/libsasl2.so.2", "elfType" : 3, "buildId" : "666B276BD134B0E9579B67D4EE333F2D0FB813CD" }, { "b" : "7F59AC3B2000", "path" : "/usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "EEF6871CB80C0344DF907DD8B0D8C90A0B57D4F0" }, { "b" : "7F59AC0AC000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "1D76B71E905CB867B27CEF230FCB20F01A3178F5" }, { "b" : "7F59ABE4D000", "path" : "/lib/x86_64-linux-gnu/libssl.so.1.0.0", "elfType" : 3, "buildId" : "A20EFFEC993A8441FA17F2079F923CBD04079E19" }, { "b" : "7F59ABC45000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "92FCF41EFE012D6186E31A59AD05BDBB487769AB" }, { "b" : "7F59AB941000", "path" : "/usr/lib/x86_64-linux-gnu/libstdc++.so.6", "elfType" : 3, "buildId" : "4BF6F7ADD8244AD86008E6BF40D90F8873892197" }, { "b" : "7F59AB72B000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "8D0AA71411580EE6C08809695C3984769F25725B" }, { "b" : "7F59AB50D000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "9318E8AF0BFBE444731BB0461202EF57F7C39542" }, { "b" : "7F59AB148000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "30C94DC66A1FE95180C3D68D2B89E576D5AE213C" }, { "b" : "7F59AD335000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, 
"buildId" : "9F00581AB3C73E3AEA35995A0C50D24D59A01D47" }, { "b" : "7F59AAF3E000", "path" : "/lib/x86_64-linux-gnu/libwrap.so.0", "elfType" : 3, "buildId" : "54FCBC5B0F994A13A9B0EAD46F23E7DA7F7FE75B" }, { "b" : "7F59AABB5000", "path" : "/usr/lib/libperl.so.5.18", "elfType" : 3, "buildId" : "FA1AD13C56D51B69C9558106D8DAF14730B4C14F" }, { "b" : "7F59AA8EA000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5.so.3", "elfType" : 3, "buildId" : "00EB4BEB543C549A411AF26E8C2B4BBA806F57AE" }, { "b" : "7F59AA6BB000", "path" : "/usr/lib/x86_64-linux-gnu/libk5crypto.so.3", "elfType" : 3, "buildId" : "3057CF4B96D55B1CD2C3681B2A1F75279F66F225" }, { "b" : "7F59AA4B7000", "path" : "/lib/x86_64-linux-gnu/libcom_err.so.2", "elfType" : 3, "buildId" : "04BF7D9BE17AC2A5F7121B246488932718870207" }, { "b" : "7F59AA2AC000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5support.so.0", "elfType" : 3, "buildId" : "86E28C0FCF8D7DAEB9AE77A7C7930F4C2B78A64E" }, { "b" : "7F59AA092000", "path" : "/lib/x86_64-linux-gnu/libnsl.so.1", "elfType" : 3, "buildId" : "497315006FCA1547A16E644FB7FEBA8BD2FAB054" }, { "b" : "7F59A9E59000", "path" : "/lib/x86_64-linux-gnu/libcrypt.so.1", "elfType" : 3, "buildId" : "1B0F2710E989E9A581C257DFFDC90085D0E1348A" }, { "b" : "7F59A9C55000", "path" : "/lib/x86_64-linux-gnu/libkeyutils.so.1", "elfType" : 3, "buildId" : "0F03635F97B93D3DACD84F0ED363C56BD266044F" }, { "b" : "7F59A9A3A000", "path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3, "buildId" : "616683BCFD8626F176EDA99B6A5D4D2C57996590" } ] }} [js_test:auth] 2015-10-13T18:48:09.577-0400 d20265| mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x12f6b82] [js_test:auth] 2015-10-13T18:48:09.577-0400 d20265| mongod(_ZN5mongo10logContextEPKc+0x134) [0x12978d4] [js_test:auth] 2015-10-13T18:48:09.578-0400 d20265| mongod(_ZN5mongo12verifyFailedEPKcS1_j+0xB4) [0x1284634] [js_test:auth] 2015-10-13T18:48:09.578-0400 d20265| 
mongod(_ZN5mongo13ShardingState11donateChunkEPNS_16OperationContextERKSsRKNS_7BSONObjES7_NS_12ChunkVersionE+0x3DB) [0xf8b56b] [js_test:auth] 2015-10-13T18:48:09.578-0400 d20265| mongod(_ZN5mongo23ChunkMoveOperationState15commitMigrationEPNS_16OperationContextE+0x240) [0xf79a80] [js_test:auth] 2015-10-13T18:48:09.578-0400 d20265| mongod(+0xB83EA8) [0xf83ea8] [js_test:auth] 2015-10-13T18:48:09.578-0400 d20265| mongod(_ZN5mongo7Command3runEPNS_16OperationContextERKNS_3rpc16RequestInterfaceEPNS3_21ReplyBuilderInterfaceE+0x2AF) [0xb6c42f] [js_test:auth] 2015-10-13T18:48:09.578-0400 d20265| mongod(_ZN5mongo7Command11execCommandEPNS_16OperationContextEPS0_RKNS_3rpc16RequestInterfaceEPNS4_21ReplyBuilderInterfaceE+0x48F) [0xb6d2df] [js_test:auth] 2015-10-13T18:48:09.578-0400 d20265| mongod(_ZN5mongo11runCommandsEPNS_16OperationContextERKNS_3rpc16RequestInterfaceEPNS2_21ReplyBuilderInterfaceE+0x1EE) [0xac5bde] [js_test:auth] 2015-10-13T18:48:09.578-0400 d20265| mongod(_ZN5mongo16assembleResponseEPNS_16OperationContextERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0xB9E) [0xc77d4e] [js_test:auth] 2015-10-13T18:48:09.578-0400 d20265| mongod(_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortE+0xC5) [0x9679e5] [js_test:auth] 2015-10-13T18:48:09.578-0400 d20265| mongod(_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x27D) [0x12a405d] [js_test:auth] 2015-10-13T18:48:09.579-0400 d20265| libpthread.so.0(+0x8182) [0x7f59ab515182] [js_test:auth] 2015-10-13T18:48:09.579-0400 d20265| libc.so.6(clone+0x6D) [0x7f59ab24247d] [js_test:auth] 2015-10-13T18:48:09.579-0400 d20265| ----- END BACKTRACE ----- [js_test:auth] 2015-10-13T18:48:09.579-0400 d20265| 2015-10-13T18:48:09.576-0400 I - [replExecDBWorker-0] Invariant failure !_isSignaled src/mongo/db/repl/replication_executor.cpp 528 [js_test:auth] 2015-10-13T18:48:09.579-0400 d20265| 2015-10-13T18:48:09.576-0400 I SHARDING [conn21] MigrateFromStatus::done About to acquire global lock to exit critical 
section [js_test:auth] 2015-10-13T18:48:09.579-0400 d20265| 2015-10-13T18:48:09.576-0400 I - [replExecDBWorker-0] [js_test:auth] 2015-10-13T18:48:09.579-0400 d20265| [js_test:auth] 2015-10-13T18:48:09.579-0400 d20265| ***aborting after invariant() failure [js_test:auth] 2015-10-13T18:48:09.579-0400 d20265| [js_test:auth] 2015-10-13T18:48:09.579-0400 d20265| [js_test:auth] 2015-10-13T18:48:09.580-0400 d20265| 2015-10-13T18:48:09.576-0400 I NETWORK [conn23] SocketException handling request, closing client connection: 9001 socket exception [SEND_ERROR] server [127.0.0.1:55080] [js_test:auth] 2015-10-13T18:48:09.586-0400 d20266| 2015-10-13T18:48:09.586-0400 I ACCESS [conn14] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:48:09.591-0400 d20265| 2015-10-13T18:48:09.591-0400 F - [replExecDBWorker-0] Got signal: 6 (Aborted). [js_test:auth] 2015-10-13T18:48:09.591-0400 d20265| [js_test:auth] 2015-10-13T18:48:09.591-0400 d20265| 0x12f6b82 0x12f5cd9 0x12f64f2 0x7f59ab51d340 0x7f59ab17ecc9 0x7f59ab1820d8 0x1283c2b 0xf0eed8 0xf0fff9 0xf10378 0xf0a88b 0xf10e46 0xf0f322 0xf1458f 0xe75cae 0xf4278a 0xf434bc 0x128e1d0 0x128eb59 0x128f6b0 0x7f59ab9f2a40 0x7f59ab515182 0x7f59ab24247d [js_test:auth] 2015-10-13T18:48:09.591-0400 d20265| ----- BEGIN BACKTRACE ----- [js_test:auth] 2015-10-13T18:48:09.592-0400 d20265| 
{"backtrace":[{"b":"400000","o":"EF6B82"},{"b":"400000","o":"EF5CD9"},{"b":"400000","o":"EF64F2"},{"b":"7F59AB50D000","o":"10340"},{"b":"7F59AB148000","o":"36CC9"},{"b":"7F59AB148000","o":"3A0D8"},{"b":"400000","o":"E83C2B"},{"b":"400000","o":"B0EED8"},{"b":"400000","o":"B0FFF9"},{"b":"400000","o":"B10378"},{"b":"400000","o":"B0A88B"},{"b":"400000","o":"B10E46"},{"b":"400000","o":"B0F322"},{"b":"400000","o":"B1458F"},{"b":"400000","o":"A75CAE"},{"b":"400000","o":"B4278A"},{"b":"400000","o":"B434BC"},{"b":"400000","o":"E8E1D0"},{"b":"400000","o":"E8EB59"},{"b":"400000","o":"E8F6B0"},{"b":"7F59AB941000","o":"B1A40"},{"b":"7F59AB50D000","o":"8182"},{"b":"7F59AB148000","o":"FA47D"}],"processInfo":{ "mongodbVersion" : "3.1.10-pre-", "gitVersion" : "9c9100212f7f8f3afb5f240d405f853894c376f1", "compiledModules" : [ "subscription" ], "uname" : { "sysname" : "Linux", "release" : "3.13.0-58-generic", "version" : "#97-Ubuntu SMP Wed Jul 8 02:56:15 UTC 2015", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000", "buildId" : "94E34C403A93DC7FF58BE81A5D2678B6A7DCDB0C" }, { "b" : "7FFFF7F70000", "elfType" : 3, "buildId" : "083C85C3A0476C2B9FEDD2C9D02100E02ABCA8EB" }, { "b" : "7F59AD131000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "C1AE4CB7195D337A77A3C689051DABAA3980CA0C" }, { "b" : "7F59ACEC8000", "path" : "/usr/lib/x86_64-linux-gnu/libnetsnmpagent.so.30", "elfType" : 3, "buildId" : "96C16FDBBA28C6635657AFDBAF0F5A1090072474" }, { "b" : "7F59ACBEE000", "path" : "/usr/lib/x86_64-linux-gnu/libnetsnmp.so.30", "elfType" : 3, "buildId" : "61AE85EF50A072D671D55B4776383F8365A3FAA7" }, { "b" : "7F59AC813000", "path" : "/lib/x86_64-linux-gnu/libcrypto.so.1.0.0", "elfType" : 3, "buildId" : "F000D29917E9B6E94A35A8F02E5C62846E5916BC" }, { "b" : "7F59AC5F8000", "path" : "/usr/lib/x86_64-linux-gnu/libsasl2.so.2", "elfType" : 3, "buildId" : "666B276BD134B0E9579B67D4EE333F2D0FB813CD" }, { "b" : "7F59AC3B2000", "path" : 
"/usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2", "elfType" : 3, "buildId" : "EEF6871CB80C0344DF907DD8B0D8C90A0B57D4F0" }, { "b" : "7F59AC0AC000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "1D76B71E905CB867B27CEF230FCB20F01A3178F5" }, { "b" : "7F59ABE4D000", "path" : "/lib/x86_64-linux-gnu/libssl.so.1.0.0", "elfType" : 3, "buildId" : "A20EFFEC993A8441FA17F2079F923CBD04079E19" }, { "b" : "7F59ABC45000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "92FCF41EFE012D6186E31A59AD05BDBB487769AB" }, { "b" : "7F59AB941000", "path" : "/usr/lib/x86_64-linux-gnu/libstdc++.so.6", "elfType" : 3, "buildId" : "4BF6F7ADD8244AD86008E6BF40D90F8873892197" }, { "b" : "7F59AB72B000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "8D0AA71411580EE6C08809695C3984769F25725B" }, { "b" : "7F59AB50D000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "9318E8AF0BFBE444731BB0461202EF57F7C39542" }, { "b" : "7F59AB148000", "path" : "/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "30C94DC66A1FE95180C3D68D2B89E576D5AE213C" }, { "b" : "7F59AD335000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "9F00581AB3C73E3AEA35995A0C50D24D59A01D47" }, { "b" : "7F59AAF3E000", "path" : "/lib/x86_64-linux-gnu/libwrap.so.0", "elfType" : 3, "buildId" : "54FCBC5B0F994A13A9B0EAD46F23E7DA7F7FE75B" }, { "b" : "7F59AABB5000", "path" : "/usr/lib/libperl.so.5.18", "elfType" : 3, "buildId" : "FA1AD13C56D51B69C9558106D8DAF14730B4C14F" }, { "b" : "7F59AA8EA000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5.so.3", "elfType" : 3, "buildId" : "00EB4BEB543C549A411AF26E8C2B4BBA806F57AE" }, { "b" : "7F59AA6BB000", "path" : "/usr/lib/x86_64-linux-gnu/libk5crypto.so.3", "elfType" : 3, "buildId" : "3057CF4B96D55B1CD2C3681B2A1F75279F66F225" }, { "b" : "7F59AA4B7000", "path" : "/lib/x86_64-linux-gnu/libcom_err.so.2", "elfType" : 3, "buildId" : "04BF7D9BE17AC2A5F7121B246488932718870207" }, { 
"b" : "7F59AA2AC000", "path" : "/usr/lib/x86_64-linux-gnu/libkrb5support.so.0", "elfType" : 3, "buildId" : "86E28C0FCF8D7DAEB9AE77A7C7930F4C2B78A64E" }, { "b" : "7F59AA092000", "path" : "/lib/x86_64-linux-gnu/libnsl.so.1", "elfType" : 3, "buildId" : "497315006FCA1547A16E644FB7FEBA8BD2FAB054" }, { "b" : "7F59A9E59000", "path" : "/lib/x86_64-linux-gnu/libcrypt.so.1", "elfType" : 3, "buildId" : "1B0F2710E989E9A581C257DFFDC90085D0E1348A" }, { "b" : "7F59A9C55000", "path" : "/lib/x86_64-linux-gnu/libkeyutils.so.1", "elfType" : 3, "buildId" : "0F03635F97B93D3DACD84F0ED363C56BD266044F" }, { "b" : "7F59A9A3A000", "path" : "/lib/x86_64-linux-gnu/libresolv.so.2", "elfType" : 3, "buildId" : "616683BCFD8626F176EDA99B6A5D4D2C57996590" } ] }} [js_test:auth] 2015-10-13T18:48:09.593-0400 d20265| mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x12f6b82] [js_test:auth] 2015-10-13T18:48:09.593-0400 d20265| mongod(+0xEF5CD9) [0x12f5cd9] [js_test:auth] 2015-10-13T18:48:09.593-0400 d20265| mongod(+0xEF64F2) [0x12f64f2] [js_test:auth] 2015-10-13T18:48:09.593-0400 d20265| libpthread.so.0(+0x10340) [0x7f59ab51d340] [js_test:auth] 2015-10-13T18:48:09.593-0400 d20265| libc.so.6(gsignal+0x39) [0x7f59ab17ecc9] [js_test:auth] 2015-10-13T18:48:09.593-0400 d20265| libc.so.6(abort+0x148) [0x7f59ab1820d8] [js_test:auth] 2015-10-13T18:48:09.593-0400 d20265| mongod(_ZN5mongo15invariantFailedEPKcS1_j+0xCB) [0x1283c2b] [js_test:auth] 2015-10-13T18:48:09.593-0400 d20265| mongod(+0xB0EED8) [0xf0eed8] [js_test:auth] 2015-10-13T18:48:09.593-0400 d20265| mongod(_ZN5mongo4repl19ReplicationExecutor18signalEvent_inlockERKNS_8executor12TaskExecutor11EventHandleE+0x19) [0xf0fff9] [js_test:auth] 2015-10-13T18:48:09.593-0400 d20265| mongod(_ZN5mongo4repl19ReplicationExecutor11signalEventERKNS_8executor12TaskExecutor11EventHandleE+0x38) [0xf10378] [js_test:auth] 2015-10-13T18:48:09.593-0400 d20265| mongod(_ZN5mongo4repl26ReplicationCoordinatorImpl15_stepDownFinishERKNS_8executor12TaskExecutor12CallbackArgsE+0xCB) 
[0xf0a88b] [js_test:auth] 2015-10-13T18:48:09.594-0400 d20265| mongod(_ZN5mongo4repl19ReplicationExecutor12_doOperationEPNS_16OperationContextERKNS_6StatusERKNS_8executor12TaskExecutor14CallbackHandleEPSt4listINS1_8WorkItemESaISD_EEPSt5mutex+0x206) [0xf10e46] [js_test:auth] 2015-10-13T18:48:09.594-0400 d20265| mongod(+0xB0F322) [0xf0f322] [js_test:auth] 2015-10-13T18:48:09.594-0400 d20265| mongod(+0xB1458F) [0xf1458f] [js_test:auth] 2015-10-13T18:48:09.594-0400 d20265| mongod(+0xA75CAE) [0xe75cae] [js_test:auth] 2015-10-13T18:48:09.594-0400 d20265| mongod(+0xB4278A) [0xf4278a] [js_test:auth] 2015-10-13T18:48:09.594-0400 d20265| mongod(_ZN5mongo4repl10TaskRunner9_runTasksEv+0x9C) [0xf434bc] [js_test:auth] 2015-10-13T18:48:09.594-0400 d20265| mongod(_ZN5mongo10ThreadPool10_doOneTaskEPSt11unique_lockISt5mutexE+0x130) [0x128e1d0] [js_test:auth] 2015-10-13T18:48:09.594-0400 d20265| mongod(_ZN5mongo10ThreadPool13_consumeTasksEv+0xA9) [0x128eb59] [js_test:auth] 2015-10-13T18:48:09.594-0400 d20265| mongod(_ZN5mongo10ThreadPool17_workerThreadBodyEPS0_RKSs+0x100) [0x128f6b0] [js_test:auth] 2015-10-13T18:48:09.594-0400 d20265| libstdc++.so.6(+0xB1A40) [0x7f59ab9f2a40] [js_test:auth] 2015-10-13T18:48:09.594-0400 d20265| libpthread.so.0(+0x8182) [0x7f59ab515182] [js_test:auth] 2015-10-13T18:48:09.595-0400 d20265| libc.so.6(clone+0x6D) [0x7f59ab24247d] [js_test:auth] 2015-10-13T18:48:09.595-0400 d20265| ----- END BACKTRACE ----- [js_test:auth] 2015-10-13T18:48:09.695-0400 c20260| 2015-10-13T18:48:09.695-0400 I COMMAND [conn21] command config.config.actionlog command: create { create: "config.actionlog", capped: true, size: 2097152, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:261 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 128ms [js_test:auth] 2015-10-13T18:48:09.696-0400 c20260| 
2015-10-13T18:48:09.695-0400 I COMMAND [conn25] command config.config.changelog command: create { create: "config.changelog", capped: true, size: 10485760, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:309 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 128073 } } } protocol:op_command 128ms [js_test:auth] 2015-10-13T18:48:09.696-0400 s20264| 2015-10-13T18:48:09.696-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:39.695-0400 cmd:{ insert: "actionlog", documents: [ { _id: ObjectId('561d8a29c06b51335e5d689e'), server: "ubuntu", what: "balancer.round", time: new Date(1444776489566), details: { executionTimeMillis: 11467, errorOccured: false, candidateChunks: 1, chunksMoved: 0 } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:09.696-0400 d20268| 2015-10-13T18:48:09.696-0400 I SHARDING [migrateThread] about to log metadata event: { _id: "ubuntu-2015-10-13T18:48:09.696-0400-561d8a294ff374a377a83f79", server: "ubuntu", clientAddr: "", time: new Date(1444776489696), what: "moveChunk.to", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 1.0 }, step 1 of 5: 255, step 2 of 5: 108, step 3 of 5: 0, step 4 of 5: 0, note: "aborted" } } [js_test:auth] 2015-10-13T18:48:09.696-0400 s20264| 2015-10-13T18:48:09.696-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:09.807-0400 d20267| 2015-10-13T18:48:09.807-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.808-0400 d20266| 2015-10-13T18:48:09.807-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.808-0400 d20267| 2015-10-13T18:48:09.807-0400 I NETWORK [conn3] end 
connection 127.0.0.1:47101 (4 connections now open) [js_test:auth] 2015-10-13T18:48:09.808-0400 d20267| 2015-10-13T18:48:09.807-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.808-0400 d20266| 2015-10-13T18:48:09.807-0400 I NETWORK [conn7] end connection 127.0.0.1:35262 (7 connections now open) [js_test:auth] 2015-10-13T18:48:09.808-0400 d20266| 2015-10-13T18:48:09.807-0400 I NETWORK [conn8] end connection 127.0.0.1:35263 (7 connections now open) [js_test:auth] 2015-10-13T18:48:09.808-0400 d20266| 2015-10-13T18:48:09.807-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:09.808-0400 c20261| 2015-10-13T18:48:09.807-0400 I NETWORK [conn13] end connection 127.0.0.1:48402 (8 connections now open) [js_test:auth] 2015-10-13T18:48:09.808-0400 c20260| 2015-10-13T18:48:09.807-0400 I NETWORK [conn23] end connection 127.0.0.1:51274 (14 connections now open) [js_test:auth] 2015-10-13T18:48:09.808-0400 d20266| 2015-10-13T18:48:09.808-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:09.808-0400 d20267| 2015-10-13T18:48:09.808-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:09.809-0400 c20262| 2015-10-13T18:48:09.808-0400 I NETWORK [conn13] end connection 127.0.0.1:49477 (8 connections now open) [js_test:auth] 2015-10-13T18:48:09.809-0400 d20266| 2015-10-13T18:48:09.808-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:09.809-0400 d20267| 2015-10-13T18:48:09.808-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:09.832-0400 c20260| 
2015-10-13T18:48:09.832-0400 I WRITE [conn21] insert config.actionlog query: { _id: ObjectId('561d8a29c06b51335e5d689e'), server: "ubuntu", what: "balancer.round", time: new Date(1444776489566), details: { executionTimeMillis: 11467, errorOccured: false, candidateChunks: 1, chunksMoved: 0 } } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 4, w: 4 } }, Database: { acquireCount: { w: 3, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 88 } }, Collection: { acquireCount: { w: 1, W: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } 135ms [js_test:auth] 2015-10-13T18:48:09.853-0400 c20260| 2015-10-13T18:48:09.853-0400 I COMMAND [conn25] command config.$cmd command: insert { insert: "changelog", documents: [ { _id: "ubuntu-2015-10-13T18:48:09.696-0400-561d8a294ff374a377a83f79", server: "ubuntu", clientAddr: "", time: new Date(1444776489696), what: "moveChunk.to", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 1.0 }, step 1 of 5: 255, step 2 of 5: 108, step 3 of 5: 0, step 4 of 5: 0, note: "aborted" } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_command 156ms [js_test:auth] 2015-10-13T18:48:09.853-0400 d20268| 2015-10-13T18:48:09.853-0400 E SHARDING [migrateThread] migrate failed: network error while attempting to run command '_transferMods' on host 'ubuntu:20265' [js_test:auth] 2015-10-13T18:48:09.977-0400 c20260| 2015-10-13T18:48:09.976-0400 I COMMAND [conn21] command config.$cmd command: insert { insert: "actionlog", documents: [ { _id: ObjectId('561d8a29c06b51335e5d689e'), server: "ubuntu", what: "balancer.round", time: new Date(1444776489566), details: { executionTimeMillis: 
11467, errorOccured: false, candidateChunks: 1, chunksMoved: 0 } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } ntoreturn:1 ntoskip:0 keyUpdates:0 writeConflicts:0 numYields:0 reslen:324 locks:{ Global: { acquireCount: { r: 4, w: 4 } }, Database: { acquireCount: { w: 3, W: 1 }, acquireWaitCount: { W: 1 }, timeAcquiringMicros: { W: 88 } }, Collection: { acquireCount: { w: 1, W: 1 } }, Metadata: { acquireCount: { w: 2 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 280ms [js_test:auth] 2015-10-13T18:48:09.977-0400 s20264| 2015-10-13T18:48:09.977-0400 D SHARDING [Balancer] *** end of balancing round [js_test:auth] 2015-10-13T18:48:09.977-0400 s20264| 2015-10-13T18:48:09.977-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:39.977-0400 cmd:{ findAndModify: "locks", query: { ts: ObjectId('561d8a1ec06b51335e5d689d') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:09.977-0400 s20264| 2015-10-13T18:48:09.977-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:09.994-0400 s20264| 2015-10-13T18:48:09.993-0400 I SHARDING [Balancer] distributed lock with ts: 561d8a1ec06b51335e5d689d' unlocked. 
[js_test:auth] 2015-10-13T18:48:09.994-0400 s20264| 2015-10-13T18:48:09.993-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:39.993-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776489993), up: 62, waiting: true, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:09.994-0400 s20264| 2015-10-13T18:48:09.993-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:11.809-0400 d20267| 2015-10-13T18:48:11.808-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:11.809-0400 d20267| 2015-10-13T18:48:11.808-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:11.809-0400 d20267| 2015-10-13T18:48:11.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:11.809-0400 d20267| 2015-10-13T18:48:11.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:11.809-0400 d20266| 2015-10-13T18:48:11.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:11.810-0400 d20266| 2015-10-13T18:48:11.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:11.810-0400 d20267| 2015-10-13T18:48:11.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:11.810-0400 d20267| 
2015-10-13T18:48:11.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:11.810-0400 d20266| 2015-10-13T18:48:11.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:11.810-0400 d20266| 2015-10-13T18:48:11.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:11.810-0400 d20266| 2015-10-13T18:48:11.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:11.811-0400 d20266| 2015-10-13T18:48:11.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:13.329-0400 s20264| 2015-10-13T18:48:13.328-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } } [js_test:auth] 2015-10-13T18:48:13.329-0400 s20264| 2015-10-13T18:48:13.328-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:13.329-0400 s20264| 2015-10-13T18:48:13.328-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:13.329-0400 s20264| 2015-10-13T18:48:13.328-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:13.330-0400 s20264| 2015-10-13T18:48:13.329-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: 
"auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:13.330-0400 s20264| 2015-10-13T18:48:13.329-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:13.330-0400 s20264| 2015-10-13T18:48:13.329-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:13.330-0400 s20264| 2015-10-13T18:48:13.329-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:13.334-0400 s20264| 2015-10-13T18:48:13.333-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } } [js_test:auth] 2015-10-13T18:48:13.334-0400 s20264| 2015-10-13T18:48:13.333-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:13.334-0400 s20264| 2015-10-13T18:48:13.333-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 
2015-10-13T18:48:13.335-0400 s20264| 2015-10-13T18:48:13.333-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:13.335-0400 s20264| 2015-10-13T18:48:13.334-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:13.335-0400 s20264| 2015-10-13T18:48:13.334-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:13.336-0400 s20264| 2015-10-13T18:48:13.334-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:13.337-0400 s20264| 2015-10-13T18:48:13.335-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.foo" } } [js_test:auth] 2015-10-13T18:48:13.337-0400 s20264| 2015-10-13T18:48:13.335-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:13.337-0400 s20264| 
2015-10-13T18:48:13.335-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:13.337-0400 s20264| 2015-10-13T18:48:13.335-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:13.338-0400 s20264| 2015-10-13T18:48:13.335-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:13.338-0400 s20264| 2015-10-13T18:48:13.335-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:13.338-0400 s20264| 2015-10-13T18:48:13.336-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:13.338-0400 chunks: 4 0 4 [js_test:auth] 2015-10-13T18:48:13.809-0400 d20267| 2015-10-13T18:48:13.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:13.809-0400 d20266| 2015-10-13T18:48:13.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable 
Connection refused [js_test:auth] 2015-10-13T18:48:13.810-0400 d20267| 2015-10-13T18:48:13.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:13.810-0400 d20266| 2015-10-13T18:48:13.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:13.810-0400 d20267| 2015-10-13T18:48:13.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:13.810-0400 d20267| 2015-10-13T18:48:13.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:13.810-0400 d20266| 2015-10-13T18:48:13.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:13.810-0400 d20266| 2015-10-13T18:48:13.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:13.810-0400 d20267| 2015-10-13T18:48:13.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:13.810-0400 d20267| 2015-10-13T18:48:13.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:13.810-0400 d20266| 2015-10-13T18:48:13.810-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:13.810-0400 d20266| 2015-10-13T18:48:13.810-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.809-0400 d20267| 2015-10-13T18:48:15.809-0400 I ASIO [NetworkInterfaceASIO] Failed 
to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.809-0400 d20267| 2015-10-13T18:48:15.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.809-0400 d20267| 2015-10-13T18:48:15.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.809-0400 d20267| 2015-10-13T18:48:15.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.809-0400 d20267| 2015-10-13T18:48:15.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.810-0400 d20267| 2015-10-13T18:48:15.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.810-0400 d20266| 2015-10-13T18:48:15.810-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.810-0400 d20266| 2015-10-13T18:48:15.810-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.810-0400 d20266| 2015-10-13T18:48:15.810-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.811-0400 d20266| 2015-10-13T18:48:15.810-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.811-0400 d20266| 2015-10-13T18:48:15.811-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:15.811-0400 d20266| 2015-10-13T18:48:15.811-0400 
I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:17.809-0400 d20267| 2015-10-13T18:48:17.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:17.809-0400 d20267| 2015-10-13T18:48:17.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:17.810-0400 d20267| 2015-10-13T18:48:17.809-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:17.810-0400 d20267| 2015-10-13T18:48:17.809-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:17.810-0400 d20267| 2015-10-13T18:48:17.810-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:17.810-0400 d20267| 2015-10-13T18:48:17.810-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:17.811-0400 d20266| 2015-10-13T18:48:17.811-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:17.812-0400 d20266| 2015-10-13T18:48:17.811-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:17.812-0400 d20266| 2015-10-13T18:48:17.812-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:17.812-0400 d20266| 2015-10-13T18:48:17.812-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 
2015-10-13T18:48:17.813-0400 d20266| 2015-10-13T18:48:17.812-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:17.813-0400 d20266| 2015-10-13T18:48:17.812-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:18.208-0400 d20268| 2015-10-13T18:48:18.208-0400 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 20 secs, remote host 127.0.1.1:20265) [js_test:auth] 2015-10-13T18:48:18.208-0400 d20268| 2015-10-13T18:48:18.208-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:48:18.337-0400 s20264| 2015-10-13T18:48:18.337-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } } [js_test:auth] 2015-10-13T18:48:18.337-0400 s20264| 2015-10-13T18:48:18.337-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:18.337-0400 s20264| 2015-10-13T18:48:18.337-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:18.338-0400 s20264| 2015-10-13T18:48:18.337-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:18.338-0400 s20264| 2015-10-13T18:48:18.337-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: 
"(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:18.338-0400 s20264| 2015-10-13T18:48:18.337-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:18.339-0400 s20264| 2015-10-13T18:48:18.337-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:18.339-0400 s20264| 2015-10-13T18:48:18.337-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:18.339-0400 s20264| 2015-10-13T18:48:18.338-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } } [js_test:auth] 2015-10-13T18:48:18.340-0400 s20264| 2015-10-13T18:48:18.338-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:18.340-0400 s20264| 2015-10-13T18:48:18.338-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:18.340-0400 s20264| 2015-10-13T18:48:18.338-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current 
connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:18.340-0400 s20264| 2015-10-13T18:48:18.338-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:18.340-0400 s20264| 2015-10-13T18:48:18.338-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:18.341-0400 s20264| 2015-10-13T18:48:18.338-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:18.341-0400 s20264| 2015-10-13T18:48:18.339-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.foo" } } [js_test:auth] 2015-10-13T18:48:18.341-0400 s20264| 2015-10-13T18:48:18.339-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:18.341-0400 s20264| 2015-10-13T18:48:18.339-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: 
false, errored: false } [js_test:auth] 2015-10-13T18:48:18.341-0400 s20264| 2015-10-13T18:48:18.339-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:18.341-0400 s20264| 2015-10-13T18:48:18.339-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:18.341-0400 s20264| 2015-10-13T18:48:18.339-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:18.341-0400 s20264| 2015-10-13T18:48:18.339-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:18.342-0400 chunks: 4 0 4 [js_test:auth] 2015-10-13T18:48:19.569-0400 s20264| 2015-10-13T18:48:19.569-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d1 [js_test:auth] 2015-10-13T18:48:19.569-0400 s20264| 2015-10-13T18:48:19.569-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d1 [js_test:auth] 2015-10-13T18:48:19.570-0400 s20264| 2015-10-13T18:48:19.569-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20266, no events [js_test:auth] 
2015-10-13T18:48:19.570-0400 s20264| 2015-10-13T18:48:19.570-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20265, event detected [js_test:auth] 2015-10-13T18:48:19.570-0400 s20264| 2015-10-13T18:48:19.570-0400 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 12 secs, remote host 127.0.1.1:20265) [js_test:auth] 2015-10-13T18:48:19.570-0400 s20264| 2015-10-13T18:48:19.570-0400 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:ubuntu:20265 [js_test:auth] 2015-10-13T18:48:19.570-0400 s20264| 2015-10-13T18:48:19.570-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:48:19.570-0400 s20264| 2015-10-13T18:48:19.570-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:48:19.570-0400 s20264| 2015-10-13T18:48:19.570-0400 D - [ReplicaSetMonitorWatcher] User Assertion: 13328:connection pool: connect failed ubuntu:20265 : couldn't connect to server ubuntu:20265, connection attempt failed [js_test:auth] 2015-10-13T18:48:19.570-0400 s20264| 2015-10-13T18:48:19.570-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20267, no events [js_test:auth] 2015-10-13T18:48:19.570-0400 s20264| 2015-10-13T18:48:19.570-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: auth-configRS [js_test:auth] 2015-10-13T18:48:19.570-0400 s20264| 2015-10-13T18:48:19.570-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set auth-configRS [js_test:auth] 2015-10-13T18:48:19.571-0400 s20264| 2015-10-13T18:48:19.570-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:19.571-0400 s20264| 2015-10-13T18:48:19.570-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20262, no events [js_test:auth] 
2015-10-13T18:48:19.571-0400 s20264| 2015-10-13T18:48:19.571-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20261, no events [js_test:auth] 2015-10-13T18:48:19.571-0400 s20264| 2015-10-13T18:48:19.571-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d2 [js_test:auth] 2015-10-13T18:48:19.571-0400 s20264| 2015-10-13T18:48:19.571-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d2 [js_test:auth] 2015-10-13T18:48:19.571-0400 s20264| 2015-10-13T18:48:19.571-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20268, no events [js_test:auth] 2015-10-13T18:48:19.571-0400 s20264| 2015-10-13T18:48:19.571-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20270, no events [js_test:auth] 2015-10-13T18:48:19.572-0400 s20264| 2015-10-13T18:48:19.572-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20269, no events [js_test:auth] 2015-10-13T18:48:19.811-0400 d20267| 2015-10-13T18:48:19.811-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:19.811-0400 d20267| 2015-10-13T18:48:19.811-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:19.811-0400 d20267| 2015-10-13T18:48:19.811-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:19.811-0400 d20267| 2015-10-13T18:48:19.811-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:19.811-0400 d20267| 2015-10-13T18:48:19.811-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:19.811-0400 d20267| 
2015-10-13T18:48:19.811-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:19.813-0400 d20266| 2015-10-13T18:48:19.813-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:19.813-0400 d20266| 2015-10-13T18:48:19.813-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:19.813-0400 d20266| 2015-10-13T18:48:19.813-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:19.813-0400 d20266| 2015-10-13T18:48:19.813-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:19.813-0400 d20266| 2015-10-13T18:48:19.813-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:19.814-0400 d20266| 2015-10-13T18:48:19.813-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:20.022-0400 s20264| 2015-10-13T18:48:20.022-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:50.022-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776500022), up: 73, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.022-0400 s20264| 2015-10-13T18:48:20.022-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:20.041-0400 s20264| 2015-10-13T18:48:20.041-0400 D ASIO [Balancer] startCommand: 
RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776500000|1, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.041-0400 s20264| 2015-10-13T18:48:20.041-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:48:20.041-0400 s20264| 2015-10-13T18:48:20.041-0400 D SHARDING [Balancer] found 2 shards listed on config server(s) [js_test:auth] 2015-10-13T18:48:20.042-0400 s20264| 2015-10-13T18:48:20.041-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776500000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.042-0400 s20264| 2015-10-13T18:48:20.041-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:20.043-0400 s20264| 2015-10-13T18:48:20.042-0400 D SHARDING [Balancer] Refreshing MaxChunkSize: 1MB [js_test:auth] 2015-10-13T18:48:20.043-0400 s20264| 2015-10-13T18:48:20.042-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776500000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.044-0400 s20264| 2015-10-13T18:48:20.042-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:48:20.044-0400 s20264| 2015-10-13T18:48:20.042-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:48:50.042-0400 cmd:{ features: 1 } [js_test:auth] 2015-10-13T18:48:20.044-0400 s20264| 2015-10-13T18:48:20.042-0400 D ASIO [NetworkInterfaceASIO] Connecting to ubuntu:20266 [js_test:auth] 
2015-10-13T18:48:20.044-0400 s20264| 2015-10-13T18:48:20.042-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:20.045-0400 d20266| 2015-10-13T18:48:20.042-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:38621 #15 (7 connections now open) [js_test:auth] 2015-10-13T18:48:20.045-0400 s20264| 2015-10-13T18:48:20.045-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:20.061-0400 s20264| 2015-10-13T18:48:20.061-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:20.061-0400 s20264| 2015-10-13T18:48:20.061-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:20.062-0400 d20266| 2015-10-13T18:48:20.061-0400 I ACCESS [conn15] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:48:20.062-0400 s20264| 2015-10-13T18:48:20.062-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:20.062-0400 d20266| 2015-10-13T18:48:20.062-0400 I SHARDING [conn15] first cluster operation detected, adding sharding hook to enable versioning and authentication to remote servers [js_test:auth] 2015-10-13T18:48:20.062-0400 d20266| 2015-10-13T18:48:20.062-0400 I SHARDING [conn15] Updating config server connection string to: auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262 [js_test:auth] 2015-10-13T18:48:20.062-0400 d20266| 2015-10-13T18:48:20.062-0400 I NETWORK [conn15] Starting new replica set monitor for auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262 [js_test:auth] 2015-10-13T18:48:20.062-0400 d20266| 2015-10-13T18:48:20.062-0400 I NETWORK [ReplicaSetMonitorWatcher] starting [js_test:auth] 2015-10-13T18:48:20.065-0400 d20266| 2015-10-13T18:48:20.065-0400 I SHARDING [thread1] creating distributed lock ping thread 
for process ubuntu:20266:1444776500:1580135359 (sleeping for 30000ms) [js_test:auth] 2015-10-13T18:48:20.065-0400 c20260| 2015-10-13T18:48:20.065-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:54593 #27 (15 connections now open) [js_test:auth] 2015-10-13T18:48:20.065-0400 c20262| 2015-10-13T18:48:20.065-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51713 #16 (9 connections now open) [js_test:auth] 2015-10-13T18:48:20.085-0400 c20260| 2015-10-13T18:48:20.085-0400 I ACCESS [conn27] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:48:20.086-0400 c20262| 2015-10-13T18:48:20.085-0400 I ACCESS [conn16] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:48:20.086-0400 c20261| 2015-10-13T18:48:20.086-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51722 #16 (9 connections now open) [js_test:auth] 2015-10-13T18:48:20.086-0400 c20262| 2015-10-13T18:48:20.086-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51715 #17 (10 connections now open) [js_test:auth] 2015-10-13T18:48:20.106-0400 c20261| 2015-10-13T18:48:20.106-0400 I ACCESS [conn16] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:48:20.109-0400 c20262| 2015-10-13T18:48:20.109-0400 I ACCESS [conn17] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:48:20.109-0400 c20260| 2015-10-13T18:48:20.109-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:54597 #28 (16 connections now open) [js_test:auth] 2015-10-13T18:48:20.109-0400 d20266| 2015-10-13T18:48:20.109-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20262 [js_test:auth] 2015-10-13T18:48:20.111-0400 d20266| 2015-10-13T18:48:20.111-0400 I NETWORK [conn15] Starting new replica set monitor for d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 [js_test:auth] 2015-10-13T18:48:20.111-0400 d20266| 2015-10-13T18:48:20.111-0400 I 
NETWORK [conn15] Starting new replica set monitor for d2/ubuntu:20268,ubuntu:20269,ubuntu:20270 [js_test:auth] 2015-10-13T18:48:20.111-0400 d20266| 2015-10-13T18:48:20.111-0400 I SHARDING [conn15] remote client 127.0.0.1:38621 initialized this host as shard d1 [js_test:auth] 2015-10-13T18:48:20.111-0400 s20264| 2015-10-13T18:48:20.111-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20266 [js_test:auth] 2015-10-13T18:48:20.111-0400 s20264| 2015-10-13T18:48:20.111-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:20.111-0400 s20264| 2015-10-13T18:48:20.111-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:48:50.111-0400 cmd:{ features: 1 } [js_test:auth] 2015-10-13T18:48:20.111-0400 s20264| 2015-10-13T18:48:20.111-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:48:20.111-0400 s20264| 2015-10-13T18:48:20.111-0400 D SHARDING [Balancer] trying to acquire new distributed lock for balancer ( lock timeout : 900000 ms, ping interval : 30000 ms, process : ubuntu:20264:1444776427:399327856 ) with lockSessionID: 561d8a34c06b51335e5d689f, why: doing balance round [js_test:auth] 2015-10-13T18:48:20.112-0400 s20264| 2015-10-13T18:48:20.111-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:50.111-0400 cmd:{ findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('561d8a34c06b51335e5d689f'), state: 2, who: "ubuntu:20264:1444776427:399327856:Balancer", process: "ubuntu:20264:1444776427:399327856", when: new Date(1444776500111), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.112-0400 s20264| 2015-10-13T18:48:20.111-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous 
command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:20.121-0400 s20264| 2015-10-13T18:48:20.121-0400 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 561d8a34c06b51335e5d689f [js_test:auth] 2015-10-13T18:48:20.121-0400 s20264| 2015-10-13T18:48:20.121-0400 D SHARDING [Balancer] *** start balancing round. waitForDelete: 1, secondaryThrottle: { w: 1, wtimeout: 0 } [js_test:auth] 2015-10-13T18:48:20.121-0400 s20264| 2015-10-13T18:48:20.121-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776500000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.122-0400 s20264| 2015-10-13T18:48:20.121-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:48:20.122-0400 s20264| 2015-10-13T18:48:20.121-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776500000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.122-0400 s20264| 2015-10-13T18:48:20.121-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:48:20.122-0400 s20264| 2015-10-13T18:48:20.121-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:48:50.121-0400 cmd:{ listDatabases: 1 } [js_test:auth] 2015-10-13T18:48:20.122-0400 s20264| 2015-10-13T18:48:20.121-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:20.123-0400 s20264| 2015-10-13T18:48:20.122-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:48:50.122-0400 cmd:{ serverStatus: 1 } [js_test:auth] 2015-10-13T18:48:20.123-0400 s20264| 
2015-10-13T18:48:20.122-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:20.124-0400 s20264| 2015-10-13T18:48:20.123-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:48:50.123-0400 cmd:{ listDatabases: 1 } [js_test:auth] 2015-10-13T18:48:20.124-0400 s20264| 2015-10-13T18:48:20.123-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:48:20.125-0400 s20264| 2015-10-13T18:48:20.124-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:48:50.124-0400 cmd:{ serverStatus: 1 } [js_test:auth] 2015-10-13T18:48:20.125-0400 s20264| 2015-10-13T18:48:20.124-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:48:20.126-0400 s20264| 2015-10-13T18:48:20.124-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776500000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.127-0400 s20264| 2015-10-13T18:48:20.125-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:48:20.127-0400 s20264| 2015-10-13T18:48:20.125-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "tags", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776500000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.128-0400 s20264| 2015-10-13T18:48:20.125-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:48:20.128-0400 s20264| 2015-10-13T18:48:20.126-0400 D ASIO [Balancer] startCommand: RemoteCommand 
-- target:ubuntu:20261 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776500000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.128-0400 s20264| 2015-10-13T18:48:20.126-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:48:20.129-0400 s20264| 2015-10-13T18:48:20.126-0400 D SHARDING [Balancer] collection : test.foo [js_test:auth] 2015-10-13T18:48:20.129-0400 s20264| 2015-10-13T18:48:20.126-0400 D SHARDING [Balancer] donor : d1 chunks on 4 [js_test:auth] 2015-10-13T18:48:20.129-0400 s20264| 2015-10-13T18:48:20.126-0400 D SHARDING [Balancer] receiver : d2 chunks on 0 [js_test:auth] 2015-10-13T18:48:20.129-0400 s20264| 2015-10-13T18:48:20.126-0400 D SHARDING [Balancer] threshold : 2 [js_test:auth] 2015-10-13T18:48:20.130-0400 s20264| 2015-10-13T18:48:20.126-0400 I SHARDING [Balancer] ns: test.foo going to move { _id: "test.foo-x_MinKey", ns: "test.foo", min: { x: MinKey }, max: { x: 1.0 }, shard: "d1", version: Timestamp 1000|1, versionEpoch: ObjectId('561d8a03c06b51335e5d6897'), lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897') } from: d1 to: d2 tag [] [js_test:auth] 2015-10-13T18:48:20.131-0400 s20264| 2015-10-13T18:48:20.126-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776500000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.131-0400 s20264| 2015-10-13T18:48:20.126-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:48:20.132-0400 s20264| 2015-10-13T18:48:20.127-0400 I SHARDING [Balancer] moving chunk ns: test.foo moving ( ns: test.foo, shard: d1, lastmod: 1|1||561d8a03c06b51335e5d6897, min: { x: 
MinKey }, max: { x: 1.0 }) d1 -> d2 [js_test:auth] 2015-10-13T18:48:20.132-0400 s20264| 2015-10-13T18:48:20.127-0400 D SHARDING [Balancer] calling onCreate auth for d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 [js_test:auth] 2015-10-13T18:48:20.132-0400 s20264| 2015-10-13T18:48:20.127-0400 D NETWORK [Balancer] creating new connection to:ubuntu:20266 [js_test:auth] 2015-10-13T18:48:20.133-0400 s20264| 2015-10-13T18:48:20.127-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:48:20.133-0400 s20264| 2015-10-13T18:48:20.127-0400 D NETWORK [Balancer] connected to server ubuntu:20266 (127.0.1.1) [js_test:auth] 2015-10-13T18:48:20.134-0400 d20266| 2015-10-13T18:48:20.127-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:38631 #16 (8 connections now open) [js_test:auth] 2015-10-13T18:48:20.134-0400 s20264| 2015-10-13T18:48:20.127-0400 D NETWORK [Balancer] connected connection! [js_test:auth] 2015-10-13T18:48:20.134-0400 c20260| 2015-10-13T18:48:20.128-0400 I ACCESS [conn28] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:48:20.134-0400 d20266| 2015-10-13T18:48:20.128-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20260 [js_test:auth] 2015-10-13T18:48:20.143-0400 d20266| 2015-10-13T18:48:20.143-0400 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: findAndModify query predicate didn't match any lock document [js_test:auth] 2015-10-13T18:48:20.146-0400 d20266| 2015-10-13T18:48:20.146-0400 I ACCESS [conn16] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:48:20.146-0400 s20264| 2015-10-13T18:48:20.146-0400 D SHARDING [Balancer] initializing shard connection to d1:d1/ubuntu:20265,ubuntu:20266,ubuntu:20267 [js_test:auth] 2015-10-13T18:48:20.147-0400 s20264| 2015-10-13T18:48:20.146-0400 D SHARDING [Balancer] setShardVersion d1 ubuntu:20266 { setShardVersion: "", init: true, authoritative: 
true, configdb: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", shard: "d1", shardHost: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267" } [js_test:auth] 2015-10-13T18:48:20.147-0400 d20266| 2015-10-13T18:48:20.147-0400 I SHARDING [conn16] moveChunk waiting for full cleanup after move [js_test:auth] 2015-10-13T18:48:20.148-0400 d20266| 2015-10-13T18:48:20.147-0400 I SHARDING [conn16] received moveChunk request: { moveChunk: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", to: "d2/ubuntu:20268,ubuntu:20269,ubuntu:20270", fromShard: "d1", toShard: "d2", min: { x: MinKey }, max: { x: 1.0 }, maxChunkSizeBytes: 1048576, configdb: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", secondaryThrottle: false, waitForDelete: true, maxTimeMS: 0, shardVersion: [ Timestamp 1000|4, ObjectId('561d8a03c06b51335e5d6897') ], epoch: ObjectId('561d8a03c06b51335e5d6897') } [js_test:auth] 2015-10-13T18:48:20.149-0400 d20266| 2015-10-13T18:48:20.149-0400 W SHARDING [conn16] could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo [js_test:auth] 2015-10-13T18:48:20.150-0400 d20266| 2015-10-13T18:48:20.149-0400 I SHARDING [conn16] about to log metadata event: { _id: "ubuntu-2015-10-13T18:48:20.149-0400-561d8a34bcc93d4b7b68fb04", server: "ubuntu", clientAddr: "127.0.0.1:38631", time: new Date(1444776500149), what: "moveChunk.from", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 1.0 }, step 1 of 6: 0, to: "d2", from: "d1", note: "aborted" } } [js_test:auth] 2015-10-13T18:48:20.164-0400 s20264| 2015-10-13T18:48:20.163-0400 I SHARDING [Balancer] moveChunk result: { ok: 0.0, errmsg: "could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo", code: 46 } [js_test:auth] 2015-10-13T18:48:20.164-0400 s20264| 2015-10-13T18:48:20.163-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 
db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776500000|4, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.165-0400 s20264| 2015-10-13T18:48:20.163-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:20.165-0400 s20264| 2015-10-13T18:48:20.164-0400 I SHARDING [Balancer] balancer move failed: { ok: 0.0, errmsg: "could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo", code: 46 } from: d1 to: d2 chunk: min: { x: MinKey } max: { x: 1.0 } [js_test:auth] 2015-10-13T18:48:20.165-0400 s20264| 2015-10-13T18:48:20.164-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:50.164-0400 cmd:{ insert: "actionlog", documents: [ { _id: ObjectId('561d8a34c06b51335e5d68a0'), server: "ubuntu", what: "balancer.round", time: new Date(1444776500164), details: { executionTimeMillis: 142, errorOccured: false, candidateChunks: 1, chunksMoved: 0 } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.166-0400 s20264| 2015-10-13T18:48:20.164-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:20.191-0400 s20264| 2015-10-13T18:48:20.191-0400 D SHARDING [Balancer] *** end of balancing round [js_test:auth] 2015-10-13T18:48:20.191-0400 s20264| 2015-10-13T18:48:20.191-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:50.191-0400 cmd:{ findAndModify: "locks", query: { ts: ObjectId('561d8a34c06b51335e5d689f') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.191-0400 s20264| 2015-10-13T18:48:20.191-0400 D ASIO 
[NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:20.215-0400 s20264| 2015-10-13T18:48:20.215-0400 I SHARDING [Balancer] distributed lock with ts: 561d8a34c06b51335e5d689f' unlocked. [js_test:auth] 2015-10-13T18:48:20.216-0400 s20264| 2015-10-13T18:48:20.215-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:48:50.215-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776500215), up: 73, waiting: true, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:20.216-0400 s20264| 2015-10-13T18:48:20.215-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:21.811-0400 d20267| 2015-10-13T18:48:21.811-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:21.812-0400 d20267| 2015-10-13T18:48:21.811-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:21.812-0400 d20267| 2015-10-13T18:48:21.811-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:21.812-0400 d20267| 2015-10-13T18:48:21.811-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:21.813-0400 d20267| 2015-10-13T18:48:21.812-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:21.814-0400 d20267| 2015-10-13T18:48:21.812-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 
2015-10-13T18:48:21.815-0400 d20266| 2015-10-13T18:48:21.814-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:21.816-0400 d20266| 2015-10-13T18:48:21.814-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:21.817-0400 d20266| 2015-10-13T18:48:21.814-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:21.817-0400 d20266| 2015-10-13T18:48:21.814-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:21.818-0400 d20266| 2015-10-13T18:48:21.814-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:21.819-0400 d20266| 2015-10-13T18:48:21.814-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:23.341-0400 s20264| 2015-10-13T18:48:23.340-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } } [js_test:auth] 2015-10-13T18:48:23.341-0400 s20264| 2015-10-13T18:48:23.340-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:23.341-0400 s20264| 2015-10-13T18:48:23.340-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:23.341-0400 s20264| 2015-10-13T18:48:23.340-0400 D NETWORK [conn1] polling for status of connection to 
127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:23.342-0400 s20264| 2015-10-13T18:48:23.340-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:23.342-0400 s20264| 2015-10-13T18:48:23.340-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:23.342-0400 s20264| 2015-10-13T18:48:23.340-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:23.342-0400 s20264| 2015-10-13T18:48:23.341-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:23.342-0400 s20264| 2015-10-13T18:48:23.341-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } } [js_test:auth] 2015-10-13T18:48:23.343-0400 s20264| 2015-10-13T18:48:23.341-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 
2015-10-13T18:48:23.343-0400 s20264| 2015-10-13T18:48:23.341-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:23.343-0400 s20264| 2015-10-13T18:48:23.341-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:23.343-0400 s20264| 2015-10-13T18:48:23.341-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:48:23.344-0400 s20264| 2015-10-13T18:48:23.341-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:23.344-0400 s20264| 2015-10-13T18:48:23.341-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[js_test:auth] 2015-10-13T18:48:23.344-0400 s20264| 2015-10-13T18:48:23.342-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.foo" } }
[js_test:auth] 2015-10-13T18:48:23.344-0400 s20264| 2015-10-13T18:48:23.342-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262]
[js_test:auth] 2015-10-13T18:48:23.344-0400 s20264| 2015-10-13T18:48:23.342-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:23.344-0400 s20264| 2015-10-13T18:48:23.342-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:23.344-0400 s20264| 2015-10-13T18:48:23.342-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:48:23.345-0400 s20264| 2015-10-13T18:48:23.342-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:23.345-0400 s20264| 2015-10-13T18:48:23.342-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[js_test:auth] 2015-10-13T18:48:23.345-0400 chunks: 4 0 4
[js_test:auth] 2015-10-13T18:48:23.813-0400 d20267| 2015-10-13T18:48:23.813-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:23.814-0400 d20267| 2015-10-13T18:48:23.813-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:23.814-0400 d20267| 2015-10-13T18:48:23.813-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:23.815-0400 d20267| 2015-10-13T18:48:23.813-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:23.815-0400 d20267| 2015-10-13T18:48:23.813-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:23.816-0400 d20267| 2015-10-13T18:48:23.813-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:23.816-0400 d20266| 2015-10-13T18:48:23.815-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:23.816-0400 d20266| 2015-10-13T18:48:23.815-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:23.816-0400 d20266| 2015-10-13T18:48:23.815-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:23.816-0400 d20266| 2015-10-13T18:48:23.815-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:23.817-0400 d20266| 2015-10-13T18:48:23.816-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:23.817-0400 d20266| 2015-10-13T18:48:23.816-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.813-0400 d20267| 2015-10-13T18:48:25.813-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.814-0400 d20267| 2015-10-13T18:48:25.813-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.814-0400 d20267| 2015-10-13T18:48:25.813-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.814-0400 d20267| 2015-10-13T18:48:25.814-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.814-0400 d20267| 2015-10-13T18:48:25.814-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.814-0400 d20267| 2015-10-13T18:48:25.814-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.817-0400 d20266| 2015-10-13T18:48:25.817-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.817-0400 d20266| 2015-10-13T18:48:25.817-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.817-0400 d20266| 2015-10-13T18:48:25.817-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.818-0400 d20266| 2015-10-13T18:48:25.817-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.818-0400 d20266| 2015-10-13T18:48:25.817-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:25.818-0400 d20266| 2015-10-13T18:48:25.817-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.815-0400 d20267| 2015-10-13T18:48:27.815-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.815-0400 d20267| 2015-10-13T18:48:27.815-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.815-0400 d20267| 2015-10-13T18:48:27.815-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.815-0400 d20267| 2015-10-13T18:48:27.815-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.815-0400 d20267| 2015-10-13T18:48:27.815-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.815-0400 d20267| 2015-10-13T18:48:27.815-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.818-0400 d20266| 2015-10-13T18:48:27.818-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.818-0400 d20266| 2015-10-13T18:48:27.818-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.818-0400 d20266| 2015-10-13T18:48:27.818-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.818-0400 d20266| 2015-10-13T18:48:27.818-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.818-0400 d20266| 2015-10-13T18:48:27.818-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:27.819-0400 d20266| 2015-10-13T18:48:27.818-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:28.212-0400 d20268| 2015-10-13T18:48:28.211-0400 W NETWORK  [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:48:28.343-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } }
[js_test:auth] 2015-10-13T18:48:28.343-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262]
[js_test:auth] 2015-10-13T18:48:28.344-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:28.344-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20260, no events
[js_test:auth] 2015-10-13T18:48:28.344-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:28.344-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:48:28.344-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:28.345-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[js_test:auth] 2015-10-13T18:48:28.345-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } }
[js_test:auth] 2015-10-13T18:48:28.345-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262]
[js_test:auth] 2015-10-13T18:48:28.345-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:28.345-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:28.345-0400 s20264| 2015-10-13T18:48:28.343-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:48:28.345-0400 s20264| 2015-10-13T18:48:28.344-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:28.346-0400 s20264| 2015-10-13T18:48:28.344-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[js_test:auth] 2015-10-13T18:48:28.346-0400 s20264| 2015-10-13T18:48:28.344-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.foo" } }
[js_test:auth] 2015-10-13T18:48:28.346-0400 s20264| 2015-10-13T18:48:28.344-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262]
[js_test:auth] 2015-10-13T18:48:28.346-0400 s20264| 2015-10-13T18:48:28.344-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:28.346-0400 s20264| 2015-10-13T18:48:28.344-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:28.346-0400 s20264| 2015-10-13T18:48:28.344-0400 D NETWORK [conn1] finishing over 1 shards
[js_test:auth] 2015-10-13T18:48:28.346-0400 s20264| 2015-10-13T18:48:28.344-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
[js_test:auth] 2015-10-13T18:48:28.346-0400 s20264| 2015-10-13T18:48:28.344-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[js_test:auth] 2015-10-13T18:48:28.346-0400 chunks: 4 0 4
[js_test:auth] 2015-10-13T18:48:29.572-0400 s20264| 2015-10-13T18:48:29.572-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d1
[js_test:auth] 2015-10-13T18:48:29.572-0400 s20264| 2015-10-13T18:48:29.572-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d1
[js_test:auth] 2015-10-13T18:48:29.572-0400 s20264| 2015-10-13T18:48:29.572-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20266, no events
[js_test:auth] 2015-10-13T18:48:29.572-0400 s20264| 2015-10-13T18:48:29.572-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20267, no events
[js_test:auth] 2015-10-13T18:48:29.573-0400 s20264| 2015-10-13T18:48:29.573-0400 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:ubuntu:20265
[js_test:auth] 2015-10-13T18:48:29.573-0400 s20264| 2015-10-13T18:48:29.573-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG
[js_test:auth] 2015-10-13T18:48:29.573-0400 s20264| 2015-10-13T18:48:29.573-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:48:29.573-0400 s20264| 2015-10-13T18:48:29.573-0400 D - [ReplicaSetMonitorWatcher] User Assertion: 13328:connection pool: connect failed ubuntu:20265 : couldn't connect to server ubuntu:20265, connection attempt failed
[js_test:auth] 2015-10-13T18:48:29.573-0400 s20264| 2015-10-13T18:48:29.573-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: auth-configRS
[js_test:auth] 2015-10-13T18:48:29.573-0400 s20264| 2015-10-13T18:48:29.573-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set auth-configRS
[js_test:auth] 2015-10-13T18:48:29.573-0400 s20264| 2015-10-13T18:48:29.573-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20260, no events
[js_test:auth] 2015-10-13T18:48:29.573-0400 s20264| 2015-10-13T18:48:29.573-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20261, no events
[js_test:auth] 2015-10-13T18:48:29.574-0400 s20264| 2015-10-13T18:48:29.573-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20262, no events
[js_test:auth] 2015-10-13T18:48:29.574-0400 s20264| 2015-10-13T18:48:29.574-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d2
[js_test:auth] 2015-10-13T18:48:29.574-0400 s20264| 2015-10-13T18:48:29.574-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d2
[js_test:auth] 2015-10-13T18:48:29.574-0400 s20264| 2015-10-13T18:48:29.574-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20268, no events
[js_test:auth] 2015-10-13T18:48:29.574-0400 s20264| 2015-10-13T18:48:29.574-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20270, no events
[js_test:auth] 2015-10-13T18:48:29.574-0400 s20264| 2015-10-13T18:48:29.574-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20269, no events
[js_test:auth] 2015-10-13T18:48:29.815-0400 d20267| 2015-10-13T18:48:29.815-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:29.815-0400 d20267| 2015-10-13T18:48:29.815-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:29.815-0400 d20267| 2015-10-13T18:48:29.815-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:29.815-0400 d20267| 2015-10-13T18:48:29.815-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:29.816-0400 d20267| 2015-10-13T18:48:29.815-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:29.816-0400 d20267| 2015-10-13T18:48:29.815-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:29.818-0400 d20266| 2015-10-13T18:48:29.818-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:29.818-0400 d20266| 2015-10-13T18:48:29.818-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:29.818-0400 d20266| 2015-10-13T18:48:29.818-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:29.819-0400 d20266| 2015-10-13T18:48:29.818-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:29.819-0400 d20266| 2015-10-13T18:48:29.819-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:29.819-0400 d20266| 2015-10-13T18:48:29.819-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:30.062-0400 d20266| 2015-10-13T18:48:30.062-0400 W NETWORK  [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:48:30.063-0400 d20267| 2015-10-13T18:48:30.062-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:51540 #10 (5 connections now open)
[js_test:auth] 2015-10-13T18:48:30.082-0400 d20267| 2015-10-13T18:48:30.082-0400 I ACCESS   [conn10] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:48:30.083-0400 d20266| 2015-10-13T18:48:30.083-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:39327 #17 (9 connections now open)
[js_test:auth] 2015-10-13T18:48:30.099-0400 d20266| 2015-10-13T18:48:30.099-0400 I ACCESS   [conn17] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:48:30.100-0400 d20268| 2015-10-13T18:48:30.100-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:57635 #22 (12 connections now open)
[js_test:auth] 2015-10-13T18:48:30.117-0400 d20268| 2015-10-13T18:48:30.117-0400 I ACCESS   [conn22] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:48:30.118-0400 d20270| 2015-10-13T18:48:30.118-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:53957 #10 (6 connections now open)
[js_test:auth] 2015-10-13T18:48:30.134-0400 d20270| 2015-10-13T18:48:30.134-0400 I ACCESS   [conn10] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:48:30.135-0400 d20269| 2015-10-13T18:48:30.135-0400 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:51589 #10 (6 connections now open)
[js_test:auth] 2015-10-13T18:48:30.152-0400 d20269| 2015-10-13T18:48:30.152-0400 I ACCESS   [conn10] Successfully authenticated as principal __system on local
[js_test:auth] 2015-10-13T18:48:30.235-0400 s20264| 2015-10-13T18:48:30.235-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:00.234-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776510234), up: 83, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.235-0400 s20264| 2015-10-13T18:48:30.235-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:48:30.250-0400 s20264| 2015-10-13T18:48:30.250-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776510000|1, t: 1 } }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.251-0400 s20264| 2015-10-13T18:48:30.250-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:48:30.251-0400 s20264| 2015-10-13T18:48:30.251-0400 D SHARDING [Balancer] found 2 shards listed on config server(s)
[js_test:auth] 2015-10-13T18:48:30.251-0400 s20264| 2015-10-13T18:48:30.251-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776510000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.251-0400 s20264| 2015-10-13T18:48:30.251-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262
[js_test:auth] 2015-10-13T18:48:30.251-0400 s20264| 2015-10-13T18:48:30.251-0400 D SHARDING [Balancer] Refreshing MaxChunkSize: 1MB
[js_test:auth] 2015-10-13T18:48:30.251-0400 s20264| 2015-10-13T18:48:30.251-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776510000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.252-0400 s20264| 2015-10-13T18:48:30.251-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:48:30.252-0400 s20264| 2015-10-13T18:48:30.252-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:00.252-0400 cmd:{ features: 1 }
[js_test:auth] 2015-10-13T18:48:30.252-0400 s20264| 2015-10-13T18:48:30.252-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266
[js_test:auth] 2015-10-13T18:48:30.252-0400 s20264| 2015-10-13T18:48:30.252-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:49:00.252-0400 cmd:{ features: 1 }
[js_test:auth] 2015-10-13T18:48:30.253-0400 s20264| 2015-10-13T18:48:30.252-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268
[js_test:auth] 2015-10-13T18:48:30.253-0400 s20264| 2015-10-13T18:48:30.252-0400 D SHARDING [Balancer] trying to acquire new distributed lock for balancer ( lock timeout : 900000 ms, ping interval : 30000 ms, process : ubuntu:20264:1444776427:399327856 ) with lockSessionID: 561d8a3ec06b51335e5d68a1, why: doing balance round
[js_test:auth] 2015-10-13T18:48:30.253-0400 s20264| 2015-10-13T18:48:30.252-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:00.252-0400 cmd:{ findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('561d8a3ec06b51335e5d68a1'), state: 2, who: "ubuntu:20264:1444776427:399327856:Balancer", process: "ubuntu:20264:1444776427:399327856", when: new Date(1444776510252), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.254-0400 s20264| 2015-10-13T18:48:30.252-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:48:30.274-0400 s20264| 2015-10-13T18:48:30.274-0400 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 561d8a3ec06b51335e5d68a1
[js_test:auth] 2015-10-13T18:48:30.274-0400 s20264| 2015-10-13T18:48:30.274-0400 D SHARDING [Balancer] *** start balancing round. waitForDelete: 1, secondaryThrottle: { w: 1, wtimeout: 0 }
[js_test:auth] 2015-10-13T18:48:30.275-0400 s20264| 2015-10-13T18:48:30.274-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776510000|2, t: 1 } }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.275-0400 s20264| 2015-10-13T18:48:30.274-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:48:30.276-0400 s20264| 2015-10-13T18:48:30.274-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776510000|2, t: 1 } }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.276-0400 s20264| 2015-10-13T18:48:30.274-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:48:30.276-0400 s20264| 2015-10-13T18:48:30.275-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:00.275-0400 cmd:{ listDatabases: 1 }
[js_test:auth] 2015-10-13T18:48:30.277-0400 s20264| 2015-10-13T18:48:30.275-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266
[js_test:auth] 2015-10-13T18:48:30.277-0400 s20264| 2015-10-13T18:48:30.275-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:00.275-0400 cmd:{ serverStatus: 1 }
[js_test:auth] 2015-10-13T18:48:30.277-0400 s20264| 2015-10-13T18:48:30.275-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266
[js_test:auth] 2015-10-13T18:48:30.277-0400 s20264| 2015-10-13T18:48:30.276-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:49:00.276-0400 cmd:{ listDatabases: 1 }
[js_test:auth] 2015-10-13T18:48:30.278-0400 s20264| 2015-10-13T18:48:30.276-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268
[js_test:auth] 2015-10-13T18:48:30.278-0400 s20264| 2015-10-13T18:48:30.277-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:49:00.277-0400 cmd:{ serverStatus: 1 }
[js_test:auth] 2015-10-13T18:48:30.278-0400 s20264| 2015-10-13T18:48:30.277-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268
[js_test:auth] 2015-10-13T18:48:30.278-0400 s20264| 2015-10-13T18:48:30.277-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776510000|2, t: 1 } }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.278-0400 s20264| 2015-10-13T18:48:30.277-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:48:30.278-0400 s20264| 2015-10-13T18:48:30.278-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "tags", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776510000|2, t: 1 } }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.279-0400 s20264| 2015-10-13T18:48:30.278-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:48:30.279-0400 s20264| 2015-10-13T18:48:30.278-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776510000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.279-0400 s20264| 2015-10-13T18:48:30.278-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:48:30.279-0400 s20264| 2015-10-13T18:48:30.279-0400 D SHARDING [Balancer] collection : test.foo
[js_test:auth] 2015-10-13T18:48:30.279-0400 s20264| 2015-10-13T18:48:30.279-0400 D SHARDING [Balancer] donor      : d1 chunks on 4
[js_test:auth] 2015-10-13T18:48:30.279-0400 s20264| 2015-10-13T18:48:30.279-0400 D SHARDING [Balancer] receiver   : d2 chunks on 0
[js_test:auth] 2015-10-13T18:48:30.279-0400 s20264| 2015-10-13T18:48:30.279-0400 D SHARDING [Balancer] threshold  : 2
[js_test:auth] 2015-10-13T18:48:30.280-0400 s20264| 2015-10-13T18:48:30.279-0400 I SHARDING [Balancer] ns: test.foo going to move { _id: "test.foo-x_MinKey", ns: "test.foo", min: { x: MinKey }, max: { x: 1.0 }, shard: "d1", version: Timestamp 1000|1, versionEpoch: ObjectId('561d8a03c06b51335e5d6897'), lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897') } from: d1 to: d2 tag []
[js_test:auth] 2015-10-13T18:48:30.280-0400 s20264| 2015-10-13T18:48:30.279-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776510000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.280-0400 s20264| 2015-10-13T18:48:30.279-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:48:30.281-0400 s20264| 2015-10-13T18:48:30.279-0400 I SHARDING [Balancer] moving chunk ns: test.foo moving ( ns: test.foo, shard: d1, lastmod: 1|1||561d8a03c06b51335e5d6897, min: { x: MinKey }, max: { x: 1.0 }) d1 -> d2
[js_test:auth] 2015-10-13T18:48:30.281-0400 d20266| 2015-10-13T18:48:30.280-0400 I SHARDING [conn16] moveChunk waiting for full cleanup after move
[js_test:auth] 2015-10-13T18:48:30.282-0400 d20266| 2015-10-13T18:48:30.280-0400 I SHARDING [conn16] received moveChunk request: { moveChunk: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", to: "d2/ubuntu:20268,ubuntu:20269,ubuntu:20270", fromShard: "d1", toShard: "d2", min: { x: MinKey }, max: { x: 1.0 }, maxChunkSizeBytes: 1048576, configdb: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", secondaryThrottle: false, waitForDelete: true, maxTimeMS: 0, shardVersion: [ Timestamp 1000|4, ObjectId('561d8a03c06b51335e5d6897') ], epoch: ObjectId('561d8a03c06b51335e5d6897') }
[js_test:auth] 2015-10-13T18:48:30.282-0400 d20266| 2015-10-13T18:48:30.282-0400 W SHARDING [conn16] could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo
[js_test:auth] 2015-10-13T18:48:30.283-0400 d20266| 2015-10-13T18:48:30.282-0400 I SHARDING [conn16] about to log metadata event: { _id: "ubuntu-2015-10-13T18:48:30.282-0400-561d8a3ebcc93d4b7b68fb06", server: "ubuntu", clientAddr: "127.0.0.1:38631", time: new Date(1444776510282), what: "moveChunk.from", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 1.0 }, step 1 of 6: 0, to: "d2", from: "d1", note: "aborted" } }
[js_test:auth] 2015-10-13T18:48:30.301-0400 s20264| 2015-10-13T18:48:30.300-0400 I SHARDING [Balancer] moveChunk result: { ok: 0.0, errmsg: "could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo", code: 46 }
[js_test:auth] 2015-10-13T18:48:30.301-0400 s20264| 2015-10-13T18:48:30.300-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776510000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.302-0400 s20264| 2015-10-13T18:48:30.300-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261
[js_test:auth] 2015-10-13T18:48:30.302-0400 s20264| 2015-10-13T18:48:30.301-0400 I SHARDING [Balancer] balancer move failed: { ok: 0.0, errmsg: "could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo", code: 46 } from: d1 to: d2 chunk: min: { x: MinKey } max: { x: 1.0 }
[js_test:auth] 2015-10-13T18:48:30.303-0400 s20264| 2015-10-13T18:48:30.301-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:00.301-0400 cmd:{ insert: "actionlog", documents: [ { _id: ObjectId('561d8a3ec06b51335e5d68a2'), server: "ubuntu", what: "balancer.round", time: new Date(1444776510301), details: { executionTimeMillis: 66, errorOccured: false, candidateChunks: 1, chunksMoved: 0 } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.303-0400 s20264| 2015-10-13T18:48:30.301-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:48:30.326-0400 s20264| 2015-10-13T18:48:30.326-0400 D SHARDING [Balancer] *** end of balancing round
[js_test:auth] 2015-10-13T18:48:30.327-0400 s20264| 2015-10-13T18:48:30.326-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:00.326-0400 cmd:{ findAndModify: "locks", query: { ts: ObjectId('561d8a3ec06b51335e5d68a1') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.327-0400 s20264| 2015-10-13T18:48:30.326-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:48:30.345-0400 s20264| 2015-10-13T18:48:30.345-0400 I SHARDING [Balancer] distributed lock with ts: 561d8a3ec06b51335e5d68a1' unlocked.
[js_test:auth] 2015-10-13T18:48:30.346-0400 s20264| 2015-10-13T18:48:30.345-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:00.345-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776510345), up: 83, waiting: true, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:48:30.346-0400 s20264| 2015-10-13T18:48:30.345-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:48:30.470-0400 s20264| 2015-10-13T18:48:30.470-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host :27017
[js_test:auth] 2015-10-13T18:48:30.470-0400 s20264| 2015-10-13T18:48:30.470-0400 D ASIO [NetworkInterfaceASIO] failed to close stream: Transport endpoint is not connected
[js_test:auth] 2015-10-13T18:48:31.816-0400 d20267| 2015-10-13T18:48:31.815-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:31.816-0400 d20267| 2015-10-13T18:48:31.816-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:31.816-0400 d20267| 2015-10-13T18:48:31.816-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:31.816-0400 d20267| 2015-10-13T18:48:31.816-0400 I REPL     [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:31.816-0400 d20267| 2015-10-13T18:48:31.816-0400 I ASIO     [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:48:31.816-0400 d20267| 2015-10-13T18:48:31.816-0400 I REPL     [ReplicationExecutor] Error in
heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:31.819-0400 d20266| 2015-10-13T18:48:31.819-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:31.819-0400 d20266| 2015-10-13T18:48:31.819-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:31.820-0400 d20266| 2015-10-13T18:48:31.819-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:31.820-0400 d20266| 2015-10-13T18:48:31.819-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:31.820-0400 d20266| 2015-10-13T18:48:31.819-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:31.820-0400 d20266| 2015-10-13T18:48:31.819-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:33.345-0400 s20264| 2015-10-13T18:48:33.345-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } } [js_test:auth] 2015-10-13T18:48:33.345-0400 s20264| 2015-10-13T18:48:33.345-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:33.345-0400 s20264| 2015-10-13T18:48:33.345-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:33.346-0400 s20264| 
2015-10-13T18:48:33.345-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:33.346-0400 s20264| 2015-10-13T18:48:33.345-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:33.346-0400 s20264| 2015-10-13T18:48:33.345-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:33.346-0400 s20264| 2015-10-13T18:48:33.345-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:33.346-0400 s20264| 2015-10-13T18:48:33.345-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:33.347-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } } [js_test:auth] 2015-10-13T18:48:33.347-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ 
config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:33.347-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:33.347-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:33.347-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:33.347-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:33.347-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:33.348-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { v_ns: "config.chunks", 
filter: { ns: "test.foo" } } [js_test:auth] 2015-10-13T18:48:33.348-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:33.348-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:33.348-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:33.348-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:33.348-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:33.348-0400 s20264| 2015-10-13T18:48:33.346-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:33.349-0400 chunks: 4 0 4 [js_test:auth] 2015-10-13T18:48:33.817-0400 d20267| 
2015-10-13T18:48:33.816-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:33.817-0400 d20267| 2015-10-13T18:48:33.816-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:33.817-0400 d20267| 2015-10-13T18:48:33.817-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:33.817-0400 d20267| 2015-10-13T18:48:33.817-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:33.817-0400 d20267| 2015-10-13T18:48:33.817-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:33.817-0400 d20267| 2015-10-13T18:48:33.817-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:33.820-0400 d20266| 2015-10-13T18:48:33.820-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:33.820-0400 d20266| 2015-10-13T18:48:33.820-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:33.820-0400 d20266| 2015-10-13T18:48:33.820-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:33.820-0400 d20266| 2015-10-13T18:48:33.820-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:33.820-0400 d20266| 2015-10-13T18:48:33.820-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 
2015-10-13T18:48:33.821-0400 d20266| 2015-10-13T18:48:33.820-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.817-0400 d20267| 2015-10-13T18:48:35.817-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.817-0400 d20267| 2015-10-13T18:48:35.817-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.818-0400 d20267| 2015-10-13T18:48:35.817-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.818-0400 d20267| 2015-10-13T18:48:35.817-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.818-0400 d20267| 2015-10-13T18:48:35.818-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.818-0400 d20267| 2015-10-13T18:48:35.818-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.820-0400 d20266| 2015-10-13T18:48:35.820-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.820-0400 d20266| 2015-10-13T18:48:35.820-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.820-0400 d20266| 2015-10-13T18:48:35.820-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.820-0400 d20266| 2015-10-13T18:48:35.820-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; 
HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.821-0400 d20266| 2015-10-13T18:48:35.820-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:35.821-0400 d20266| 2015-10-13T18:48:35.820-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.818-0400 d20267| 2015-10-13T18:48:37.818-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.819-0400 d20267| 2015-10-13T18:48:37.818-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.819-0400 d20267| 2015-10-13T18:48:37.819-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.819-0400 d20267| 2015-10-13T18:48:37.819-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.819-0400 d20267| 2015-10-13T18:48:37.819-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.819-0400 d20267| 2015-10-13T18:48:37.819-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.820-0400 d20266| 2015-10-13T18:48:37.820-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.821-0400 d20266| 2015-10-13T18:48:37.820-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.821-0400 d20266| 2015-10-13T18:48:37.821-0400 I ASIO 
[NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.821-0400 d20266| 2015-10-13T18:48:37.821-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.821-0400 d20266| 2015-10-13T18:48:37.821-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.821-0400 d20266| 2015-10-13T18:48:37.821-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:37.880-0400 s20264| 2015-10-13T18:48:37.880-0400 D ASIO [UserCacheInvalidator] startCommand: RemoteCommand -- target:ubuntu:20260 db:admin expDate:2015-10-13T18:49:07.880-0400 cmd:{ _getUserCacheGeneration: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:37.880-0400 s20264| 2015-10-13T18:48:37.880-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:37.927-0400 s20264| 2015-10-13T18:48:37.926-0400 D ASIO [replSetDistLockPinger] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:07.926-0400 cmd:{ findAndModify: "lockpings", query: { _id: "ubuntu:20264:1444776427:399327856" }, update: { $set: { ping: new Date(1444776517926) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:37.927-0400 s20264| 2015-10-13T18:48:37.927-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:38.215-0400 d20268| 2015-10-13T18:48:38.214-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:48:38.347-0400 s20264| 2015-10-13T18:48:38.347-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: 
"config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } } [js_test:auth] 2015-10-13T18:48:38.348-0400 s20264| 2015-10-13T18:48:38.347-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:38.348-0400 s20264| 2015-10-13T18:48:38.347-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:38.348-0400 s20264| 2015-10-13T18:48:38.347-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:38.348-0400 s20264| 2015-10-13T18:48:38.347-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:38.348-0400 s20264| 2015-10-13T18:48:38.347-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:38.349-0400 s20264| 2015-10-13T18:48:38.347-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:38.349-0400 s20264| 2015-10-13T18:48:38.348-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { 
waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:38.349-0400 s20264| 2015-10-13T18:48:38.348-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } } [js_test:auth] 2015-10-13T18:48:38.349-0400 s20264| 2015-10-13T18:48:38.348-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:38.349-0400 s20264| 2015-10-13T18:48:38.348-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:38.349-0400 s20264| 2015-10-13T18:48:38.348-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:38.350-0400 s20264| 2015-10-13T18:48:38.348-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:38.350-0400 s20264| 2015-10-13T18:48:38.348-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:38.350-0400 s20264| 2015-10-13T18:48:38.348-0400 D 
NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:38.350-0400 s20264| 2015-10-13T18:48:38.349-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.foo" } } [js_test:auth] 2015-10-13T18:48:38.350-0400 s20264| 2015-10-13T18:48:38.349-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:38.350-0400 s20264| 2015-10-13T18:48:38.349-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:38.351-0400 s20264| 2015-10-13T18:48:38.349-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:38.351-0400 s20264| 2015-10-13T18:48:38.349-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:38.351-0400 s20264| 2015-10-13T18:48:38.349-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: 
"(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:38.351-0400 s20264| 2015-10-13T18:48:38.349-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:38.351-0400 chunks: 4 0 4 [js_test:auth] 2015-10-13T18:48:39.575-0400 s20264| 2015-10-13T18:48:39.575-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d1 [js_test:auth] 2015-10-13T18:48:39.575-0400 s20264| 2015-10-13T18:48:39.575-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d1 [js_test:auth] 2015-10-13T18:48:39.575-0400 s20264| 2015-10-13T18:48:39.575-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20266, no events [js_test:auth] 2015-10-13T18:48:39.575-0400 s20264| 2015-10-13T18:48:39.575-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20267, no events [js_test:auth] 2015-10-13T18:48:39.575-0400 s20264| 2015-10-13T18:48:39.575-0400 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:ubuntu:20265 [js_test:auth] 2015-10-13T18:48:39.575-0400 s20264| 2015-10-13T18:48:39.575-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:48:39.576-0400 s20264| 2015-10-13T18:48:39.575-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:48:39.576-0400 s20264| 2015-10-13T18:48:39.576-0400 D - [ReplicaSetMonitorWatcher] User Assertion: 13328:connection pool: connect failed ubuntu:20265 : couldn't connect to server ubuntu:20265, 
connection attempt failed [js_test:auth] 2015-10-13T18:48:39.576-0400 s20264| 2015-10-13T18:48:39.576-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: auth-configRS [js_test:auth] 2015-10-13T18:48:39.576-0400 s20264| 2015-10-13T18:48:39.576-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set auth-configRS [js_test:auth] 2015-10-13T18:48:39.576-0400 s20264| 2015-10-13T18:48:39.576-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:39.576-0400 s20264| 2015-10-13T18:48:39.576-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20262, no events [js_test:auth] 2015-10-13T18:48:39.576-0400 s20264| 2015-10-13T18:48:39.576-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20261, no events [js_test:auth] 2015-10-13T18:48:39.576-0400 s20264| 2015-10-13T18:48:39.576-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d2 [js_test:auth] 2015-10-13T18:48:39.576-0400 s20264| 2015-10-13T18:48:39.576-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d2 [js_test:auth] 2015-10-13T18:48:39.576-0400 s20264| 2015-10-13T18:48:39.576-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20268, no events [js_test:auth] 2015-10-13T18:48:39.577-0400 s20264| 2015-10-13T18:48:39.577-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20270, no events [js_test:auth] 2015-10-13T18:48:39.577-0400 s20264| 2015-10-13T18:48:39.577-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20269, no events [js_test:auth] 2015-10-13T18:48:39.820-0400 d20267| 2015-10-13T18:48:39.820-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:39.820-0400 d20267| 2015-10-13T18:48:39.820-0400 I 
REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:39.820-0400 d20267| 2015-10-13T18:48:39.820-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:39.820-0400 d20267| 2015-10-13T18:48:39.820-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:39.821-0400 d20267| 2015-10-13T18:48:39.820-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:39.821-0400 d20267| 2015-10-13T18:48:39.820-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:39.821-0400 d20266| 2015-10-13T18:48:39.821-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:39.821-0400 d20266| 2015-10-13T18:48:39.821-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:39.821-0400 d20266| 2015-10-13T18:48:39.821-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:39.821-0400 d20266| 2015-10-13T18:48:39.821-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:39.821-0400 d20266| 2015-10-13T18:48:39.821-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:39.822-0400 d20266| 2015-10-13T18:48:39.821-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 
2015-10-13T18:48:40.153-0400 d20266| 2015-10-13T18:48:40.153-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:48:40.370-0400 s20264| 2015-10-13T18:48:40.370-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:10.370-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776520369), up: 93, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.370-0400 s20264| 2015-10-13T18:48:40.370-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:40.404-0400 s20264| 2015-10-13T18:48:40.404-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776520000|1, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.405-0400 s20264| 2015-10-13T18:48:40.404-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:40.405-0400 s20264| 2015-10-13T18:48:40.405-0400 D SHARDING [Balancer] found 2 shards listed on config server(s) [js_test:auth] 2015-10-13T18:48:40.406-0400 s20264| 2015-10-13T18:48:40.405-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776520000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.406-0400 s20264| 2015-10-13T18:48:40.405-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:40.406-0400 s20264| 2015-10-13T18:48:40.405-0400 D SHARDING 
[Balancer] Refreshing MaxChunkSize: 1MB [js_test:auth] 2015-10-13T18:48:40.407-0400 s20264| 2015-10-13T18:48:40.405-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776520000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.407-0400 s20264| 2015-10-13T18:48:40.405-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:48:40.407-0400 s20264| 2015-10-13T18:48:40.406-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:10.406-0400 cmd:{ features: 1 } [js_test:auth] 2015-10-13T18:48:40.407-0400 s20264| 2015-10-13T18:48:40.406-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:40.407-0400 s20264| 2015-10-13T18:48:40.406-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:49:10.406-0400 cmd:{ features: 1 } [js_test:auth] 2015-10-13T18:48:40.408-0400 s20264| 2015-10-13T18:48:40.406-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:48:40.408-0400 s20264| 2015-10-13T18:48:40.406-0400 D SHARDING [Balancer] trying to acquire new distributed lock for balancer ( lock timeout : 900000 ms, ping interval : 30000 ms, process : ubuntu:20264:1444776427:399327856 ) with lockSessionID: 561d8a48c06b51335e5d68a3, why: doing balance round [js_test:auth] 2015-10-13T18:48:40.408-0400 s20264| 2015-10-13T18:48:40.406-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:10.406-0400 cmd:{ findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('561d8a48c06b51335e5d68a3'), state: 2, who: 
"ubuntu:20264:1444776427:399327856:Balancer", process: "ubuntu:20264:1444776427:399327856", when: new Date(1444776520406), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.409-0400 s20264| 2015-10-13T18:48:40.406-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:40.431-0400 s20264| 2015-10-13T18:48:40.430-0400 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 561d8a48c06b51335e5d68a3 [js_test:auth] 2015-10-13T18:48:40.431-0400 s20264| 2015-10-13T18:48:40.430-0400 D SHARDING [Balancer] *** start balancing round. waitForDelete: 1, secondaryThrottle: { w: 1, wtimeout: 0 } [js_test:auth] 2015-10-13T18:48:40.431-0400 s20264| 2015-10-13T18:48:40.430-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776520000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.431-0400 s20264| 2015-10-13T18:48:40.430-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:48:40.431-0400 s20264| 2015-10-13T18:48:40.431-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776520000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.431-0400 s20264| 2015-10-13T18:48:40.431-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:48:40.432-0400 s20264| 2015-10-13T18:48:40.431-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:10.431-0400 cmd:{ listDatabases: 1 } [js_test:auth] 2015-10-13T18:48:40.432-0400 s20264| 
2015-10-13T18:48:40.431-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:40.432-0400 s20264| 2015-10-13T18:48:40.432-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:10.432-0400 cmd:{ serverStatus: 1 } [js_test:auth] 2015-10-13T18:48:40.433-0400 s20264| 2015-10-13T18:48:40.432-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:40.433-0400 s20264| 2015-10-13T18:48:40.433-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:49:10.433-0400 cmd:{ listDatabases: 1 } [js_test:auth] 2015-10-13T18:48:40.433-0400 s20264| 2015-10-13T18:48:40.433-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:48:40.434-0400 s20264| 2015-10-13T18:48:40.433-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:49:10.433-0400 cmd:{ serverStatus: 1 } [js_test:auth] 2015-10-13T18:48:40.434-0400 s20264| 2015-10-13T18:48:40.433-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:48:40.434-0400 s20264| 2015-10-13T18:48:40.434-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776520000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.435-0400 s20264| 2015-10-13T18:48:40.434-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:40.435-0400 s20264| 2015-10-13T18:48:40.435-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "tags", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { 
level: "majority", afterOpTime: { ts: Timestamp 1444776520000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.435-0400 s20264| 2015-10-13T18:48:40.435-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:48:40.435-0400 s20264| 2015-10-13T18:48:40.435-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776520000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.436-0400 s20264| 2015-10-13T18:48:40.435-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:40.436-0400 s20264| 2015-10-13T18:48:40.436-0400 D SHARDING [Balancer] collection : test.foo [js_test:auth] 2015-10-13T18:48:40.436-0400 s20264| 2015-10-13T18:48:40.436-0400 D SHARDING [Balancer] donor : d1 chunks on 4 [js_test:auth] 2015-10-13T18:48:40.436-0400 s20264| 2015-10-13T18:48:40.436-0400 D SHARDING [Balancer] receiver : d2 chunks on 0 [js_test:auth] 2015-10-13T18:48:40.436-0400 s20264| 2015-10-13T18:48:40.436-0400 D SHARDING [Balancer] threshold : 2 [js_test:auth] 2015-10-13T18:48:40.436-0400 s20264| 2015-10-13T18:48:40.436-0400 I SHARDING [Balancer] ns: test.foo going to move { _id: "test.foo-x_MinKey", ns: "test.foo", min: { x: MinKey }, max: { x: 1.0 }, shard: "d1", version: Timestamp 1000|1, versionEpoch: ObjectId('561d8a03c06b51335e5d6897'), lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897') } from: d1 to: d2 tag [] [js_test:auth] 2015-10-13T18:48:40.436-0400 s20264| 2015-10-13T18:48:40.436-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776520000|2, t: 1 } }, limit: 1, maxTimeMS: 
30000 } [js_test:auth] 2015-10-13T18:48:40.437-0400 s20264| 2015-10-13T18:48:40.436-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:40.437-0400 s20264| 2015-10-13T18:48:40.436-0400 I SHARDING [Balancer] moving chunk ns: test.foo moving ( ns: test.foo, shard: d1, lastmod: 1|1||561d8a03c06b51335e5d6897, min: { x: MinKey }, max: { x: 1.0 }) d1 -> d2 [js_test:auth] 2015-10-13T18:48:40.438-0400 d20266| 2015-10-13T18:48:40.438-0400 I SHARDING [conn16] moveChunk waiting for full cleanup after move [js_test:auth] 2015-10-13T18:48:40.438-0400 d20266| 2015-10-13T18:48:40.438-0400 I SHARDING [conn16] received moveChunk request: { moveChunk: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", to: "d2/ubuntu:20268,ubuntu:20269,ubuntu:20270", fromShard: "d1", toShard: "d2", min: { x: MinKey }, max: { x: 1.0 }, maxChunkSizeBytes: 1048576, configdb: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", secondaryThrottle: false, waitForDelete: true, maxTimeMS: 0, shardVersion: [ Timestamp 1000|4, ObjectId('561d8a03c06b51335e5d6897') ], epoch: ObjectId('561d8a03c06b51335e5d6897') } [js_test:auth] 2015-10-13T18:48:40.440-0400 d20266| 2015-10-13T18:48:40.440-0400 W SHARDING [conn16] could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo [js_test:auth] 2015-10-13T18:48:40.440-0400 d20266| 2015-10-13T18:48:40.440-0400 I SHARDING [conn16] about to log metadata event: { _id: "ubuntu-2015-10-13T18:48:40.440-0400-561d8a48bcc93d4b7b68fb08", server: "ubuntu", clientAddr: "127.0.0.1:38631", time: new Date(1444776520440), what: "moveChunk.from", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 1.0 }, step 1 of 6: 0, to: "d2", from: "d1", note: "aborted" } } [js_test:auth] 2015-10-13T18:48:40.452-0400 s20264| 2015-10-13T18:48:40.452-0400 I SHARDING [Balancer] moveChunk result: { ok: 0.0, errmsg: "could not acquire collection 
lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo", code: 46 } [js_test:auth] 2015-10-13T18:48:40.452-0400 s20264| 2015-10-13T18:48:40.452-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776520000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.453-0400 s20264| 2015-10-13T18:48:40.452-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:48:40.453-0400 s20264| 2015-10-13T18:48:40.453-0400 I SHARDING [Balancer] balancer move failed: { ok: 0.0, errmsg: "could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo", code: 46 } from: d1 to: d2 chunk: min: { x: MinKey } max: { x: 1.0 } [js_test:auth] 2015-10-13T18:48:40.454-0400 s20264| 2015-10-13T18:48:40.453-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:10.453-0400 cmd:{ insert: "actionlog", documents: [ { _id: ObjectId('561d8a48c06b51335e5d68a4'), server: "ubuntu", what: "balancer.round", time: new Date(1444776520453), details: { executionTimeMillis: 83, errorOccured: false, candidateChunks: 1, chunksMoved: 0 } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.454-0400 s20264| 2015-10-13T18:48:40.453-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:40.471-0400 s20264| 2015-10-13T18:48:40.471-0400 D SHARDING [Balancer] *** end of balancing round [js_test:auth] 2015-10-13T18:48:40.472-0400 s20264| 2015-10-13T18:48:40.471-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:10.471-0400 cmd:{ 
findAndModify: "locks", query: { ts: ObjectId('561d8a48c06b51335e5d68a3') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.473-0400 s20264| 2015-10-13T18:48:40.471-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:40.489-0400 s20264| 2015-10-13T18:48:40.488-0400 I SHARDING [Balancer] distributed lock with ts: 561d8a48c06b51335e5d68a3' unlocked. [js_test:auth] 2015-10-13T18:48:40.489-0400 s20264| 2015-10-13T18:48:40.488-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:10.488-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776520488), up: 93, waiting: true, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:40.489-0400 s20264| 2015-10-13T18:48:40.488-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:40.841-0400 s20264| 2015-10-13T18:48:40.841-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host :27017 [js_test:auth] 2015-10-13T18:48:41.820-0400 d20267| 2015-10-13T18:48:41.820-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:41.821-0400 d20267| 2015-10-13T18:48:41.820-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:41.821-0400 d20267| 2015-10-13T18:48:41.821-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:41.821-0400 d20267| 2015-10-13T18:48:41.821-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; 
HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:41.821-0400 d20267| 2015-10-13T18:48:41.821-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:41.821-0400 d20267| 2015-10-13T18:48:41.821-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:41.821-0400 d20266| 2015-10-13T18:48:41.821-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:41.821-0400 d20266| 2015-10-13T18:48:41.821-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:41.822-0400 d20266| 2015-10-13T18:48:41.821-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:41.822-0400 d20266| 2015-10-13T18:48:41.822-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:41.822-0400 d20266| 2015-10-13T18:48:41.822-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:41.822-0400 d20266| 2015-10-13T18:48:41.822-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:43.350-0400 s20264| 2015-10-13T18:48:43.349-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } } [js_test:auth] 2015-10-13T18:48:43.350-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ 
config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:43.350-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:43.351-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:43.351-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:43.351-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:43.351-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:43.352-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:43.352-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] creating pcursor over QSpec 
{ ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } } [js_test:auth] 2015-10-13T18:48:43.352-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:43.352-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:43.352-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:43.353-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:43.353-0400 s20264| 2015-10-13T18:48:43.350-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:43.353-0400 s20264| 2015-10-13T18:48:43.351-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: 
false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:43.353-0400 s20264| 2015-10-13T18:48:43.351-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.foo" } } [js_test:auth] 2015-10-13T18:48:43.353-0400 s20264| 2015-10-13T18:48:43.351-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:43.353-0400 s20264| 2015-10-13T18:48:43.351-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:43.354-0400 s20264| 2015-10-13T18:48:43.351-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:43.354-0400 s20264| 2015-10-13T18:48:43.351-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:43.354-0400 s20264| 2015-10-13T18:48:43.351-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:43.354-0400 s20264| 2015-10-13T18:48:43.351-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: 
"config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:43.354-0400 chunks: 4 0 4 [js_test:auth] 2015-10-13T18:48:43.821-0400 d20267| 2015-10-13T18:48:43.821-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:43.822-0400 d20267| 2015-10-13T18:48:43.821-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:43.822-0400 d20267| 2015-10-13T18:48:43.822-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:43.822-0400 d20267| 2015-10-13T18:48:43.822-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:43.822-0400 d20267| 2015-10-13T18:48:43.822-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:43.822-0400 d20267| 2015-10-13T18:48:43.822-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:43.822-0400 d20266| 2015-10-13T18:48:43.822-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:43.823-0400 d20266| 2015-10-13T18:48:43.822-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:43.823-0400 d20266| 2015-10-13T18:48:43.823-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection 
refused [js_test:auth] 2015-10-13T18:48:43.823-0400 d20266| 2015-10-13T18:48:43.823-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:43.823-0400 d20266| 2015-10-13T18:48:43.823-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:43.823-0400 d20266| 2015-10-13T18:48:43.823-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.823-0400 d20267| 2015-10-13T18:48:45.823-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.823-0400 d20267| 2015-10-13T18:48:45.823-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.824-0400 d20266| 2015-10-13T18:48:45.823-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.824-0400 d20267| 2015-10-13T18:48:45.823-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.824-0400 d20267| 2015-10-13T18:48:45.823-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.825-0400 d20266| 2015-10-13T18:48:45.823-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.825-0400 d20267| 2015-10-13T18:48:45.823-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.825-0400 d20267| 2015-10-13T18:48:45.823-0400 I REPL [ReplicationExecutor] Error in heartbeat request 
to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.826-0400 d20266| 2015-10-13T18:48:45.823-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.826-0400 d20266| 2015-10-13T18:48:45.823-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.826-0400 d20266| 2015-10-13T18:48:45.823-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:45.827-0400 d20266| 2015-10-13T18:48:45.823-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.823-0400 d20267| 2015-10-13T18:48:47.823-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.824-0400 d20266| 2015-10-13T18:48:47.823-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.824-0400 d20267| 2015-10-13T18:48:47.823-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.824-0400 d20266| 2015-10-13T18:48:47.823-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.824-0400 d20266| 2015-10-13T18:48:47.823-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.824-0400 d20266| 2015-10-13T18:48:47.824-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.824-0400 d20267| 2015-10-13T18:48:47.824-0400 I ASIO 
[NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.824-0400 d20267| 2015-10-13T18:48:47.824-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.825-0400 d20266| 2015-10-13T18:48:47.824-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.825-0400 d20266| 2015-10-13T18:48:47.824-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.825-0400 d20267| 2015-10-13T18:48:47.824-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:47.825-0400 d20267| 2015-10-13T18:48:47.824-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:48.217-0400 d20268| 2015-10-13T18:48:48.216-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:48:48.352-0400 s20264| 2015-10-13T18:48:48.352-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } } [js_test:auth] 2015-10-13T18:48:48.352-0400 s20264| 2015-10-13T18:48:48.352-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:48.352-0400 s20264| 2015-10-13T18:48:48.352-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 
2015-10-13T18:48:48.352-0400 s20264| 2015-10-13T18:48:48.352-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:48.353-0400 s20264| 2015-10-13T18:48:48.352-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:48.353-0400 s20264| 2015-10-13T18:48:48.352-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:48.353-0400 s20264| 2015-10-13T18:48:48.352-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:48.353-0400 s20264| 2015-10-13T18:48:48.352-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:48.354-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } } [js_test:auth] 2015-10-13T18:48:48.354-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] initializing over 1 shards 
required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:48.354-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:48.354-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:48.354-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:48.355-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:48.355-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:48.355-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { 
v_ns: "config.chunks", filter: { ns: "test.foo" } } [js_test:auth] 2015-10-13T18:48:48.355-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:48.355-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:48.355-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:48.355-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:48.356-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:48.356-0400 s20264| 2015-10-13T18:48:48.353-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:48.356-0400 chunks: 4 0 4 [js_test:auth] 
2015-10-13T18:48:49.577-0400 s20264| 2015-10-13T18:48:49.577-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d1 [js_test:auth] 2015-10-13T18:48:49.577-0400 s20264| 2015-10-13T18:48:49.577-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d1 [js_test:auth] 2015-10-13T18:48:49.578-0400 s20264| 2015-10-13T18:48:49.577-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20266, no events [js_test:auth] 2015-10-13T18:48:49.578-0400 s20264| 2015-10-13T18:48:49.577-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20267, no events [js_test:auth] 2015-10-13T18:48:49.578-0400 s20264| 2015-10-13T18:48:49.578-0400 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:ubuntu:20265 [js_test:auth] 2015-10-13T18:48:49.578-0400 s20264| 2015-10-13T18:48:49.578-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:48:49.578-0400 s20264| 2015-10-13T18:48:49.578-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:48:49.578-0400 s20264| 2015-10-13T18:48:49.578-0400 D - [ReplicaSetMonitorWatcher] User Assertion: 13328:connection pool: connect failed ubuntu:20265 : couldn't connect to server ubuntu:20265, connection attempt failed [js_test:auth] 2015-10-13T18:48:49.579-0400 s20264| 2015-10-13T18:48:49.578-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: auth-configRS [js_test:auth] 2015-10-13T18:48:49.579-0400 s20264| 2015-10-13T18:48:49.578-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set auth-configRS [js_test:auth] 2015-10-13T18:48:49.579-0400 s20264| 2015-10-13T18:48:49.578-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:49.579-0400 s20264| 2015-10-13T18:48:49.579-0400 D NETWORK 
[ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20261, no events [js_test:auth] 2015-10-13T18:48:49.579-0400 s20264| 2015-10-13T18:48:49.579-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20262, no events [js_test:auth] 2015-10-13T18:48:49.579-0400 s20264| 2015-10-13T18:48:49.579-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d2 [js_test:auth] 2015-10-13T18:48:49.579-0400 s20264| 2015-10-13T18:48:49.579-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d2 [js_test:auth] 2015-10-13T18:48:49.579-0400 s20264| 2015-10-13T18:48:49.579-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20268, no events [js_test:auth] 2015-10-13T18:48:49.579-0400 s20264| 2015-10-13T18:48:49.579-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20269, no events [js_test:auth] 2015-10-13T18:48:49.580-0400 s20264| 2015-10-13T18:48:49.580-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20270, no events [js_test:auth] 2015-10-13T18:48:49.825-0400 d20267| 2015-10-13T18:48:49.824-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:49.825-0400 d20267| 2015-10-13T18:48:49.824-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:49.826-0400 d20266| 2015-10-13T18:48:49.824-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:49.826-0400 d20266| 2015-10-13T18:48:49.825-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:49.826-0400 d20266| 2015-10-13T18:48:49.825-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to 
ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:49.827-0400 d20267| 2015-10-13T18:48:49.825-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:49.827-0400 d20267| 2015-10-13T18:48:49.825-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:49.827-0400 d20266| 2015-10-13T18:48:49.825-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:49.828-0400 d20266| 2015-10-13T18:48:49.825-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:49.828-0400 d20266| 2015-10-13T18:48:49.825-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:49.828-0400 d20267| 2015-10-13T18:48:49.825-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:49.828-0400 d20267| 2015-10-13T18:48:49.825-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:50.156-0400 d20266| 2015-10-13T18:48:50.156-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:48:50.508-0400 s20264| 2015-10-13T18:48:50.508-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:20.508-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776530508), up: 103, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, 
maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.509-0400 s20264| 2015-10-13T18:48:50.508-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:50.524-0400 s20264| 2015-10-13T18:48:50.524-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776530000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.524-0400 s20264| 2015-10-13T18:48:50.524-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:48:50.525-0400 s20264| 2015-10-13T18:48:50.525-0400 D SHARDING [Balancer] found 2 shards listed on config server(s) [js_test:auth] 2015-10-13T18:48:50.525-0400 s20264| 2015-10-13T18:48:50.525-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776530000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.525-0400 s20264| 2015-10-13T18:48:50.525-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:50.525-0400 s20264| 2015-10-13T18:48:50.525-0400 D SHARDING [Balancer] Refreshing MaxChunkSize: 1MB [js_test:auth] 2015-10-13T18:48:50.525-0400 s20264| 2015-10-13T18:48:50.525-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776530000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.525-0400 s20264| 2015-10-13T18:48:50.525-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:48:50.526-0400 s20264| 2015-10-13T18:48:50.526-0400 D ASIO [Balancer] 
startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:20.526-0400 cmd:{ features: 1 } [js_test:auth] 2015-10-13T18:48:50.526-0400 s20264| 2015-10-13T18:48:50.526-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:50.526-0400 s20264| 2015-10-13T18:48:50.526-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:49:20.526-0400 cmd:{ features: 1 } [js_test:auth] 2015-10-13T18:48:50.526-0400 s20264| 2015-10-13T18:48:50.526-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:48:50.526-0400 s20264| 2015-10-13T18:48:50.526-0400 D SHARDING [Balancer] trying to acquire new distributed lock for balancer ( lock timeout : 900000 ms, ping interval : 30000 ms, process : ubuntu:20264:1444776427:399327856 ) with lockSessionID: 561d8a52c06b51335e5d68a5, why: doing balance round [js_test:auth] 2015-10-13T18:48:50.526-0400 s20264| 2015-10-13T18:48:50.526-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:20.526-0400 cmd:{ findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('561d8a52c06b51335e5d68a5'), state: 2, who: "ubuntu:20264:1444776427:399327856:Balancer", process: "ubuntu:20264:1444776427:399327856", when: new Date(1444776530526), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.527-0400 s20264| 2015-10-13T18:48:50.526-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:50.555-0400 s20264| 2015-10-13T18:48:50.555-0400 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 561d8a52c06b51335e5d68a5 [js_test:auth] 2015-10-13T18:48:50.555-0400 s20264| 
2015-10-13T18:48:50.555-0400 D SHARDING [Balancer] *** start balancing round. waitForDelete: 1, secondaryThrottle: { w: 1, wtimeout: 0 } [js_test:auth] 2015-10-13T18:48:50.557-0400 s20264| 2015-10-13T18:48:50.555-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776530000|3, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.557-0400 s20264| 2015-10-13T18:48:50.555-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:48:50.558-0400 s20264| 2015-10-13T18:48:50.555-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776530000|3, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.558-0400 s20264| 2015-10-13T18:48:50.555-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:48:50.558-0400 s20264| 2015-10-13T18:48:50.556-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:20.556-0400 cmd:{ listDatabases: 1 } [js_test:auth] 2015-10-13T18:48:50.559-0400 s20264| 2015-10-13T18:48:50.556-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:50.559-0400 s20264| 2015-10-13T18:48:50.557-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:20.556-0400 cmd:{ serverStatus: 1 } [js_test:auth] 2015-10-13T18:48:50.559-0400 s20264| 2015-10-13T18:48:50.557-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:48:50.559-0400 s20264| 2015-10-13T18:48:50.558-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin 
expDate:2015-10-13T18:49:20.558-0400 cmd:{ listDatabases: 1 } [js_test:auth] 2015-10-13T18:48:50.559-0400 s20264| 2015-10-13T18:48:50.558-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:48:50.560-0400 s20264| 2015-10-13T18:48:50.558-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:49:20.558-0400 cmd:{ serverStatus: 1 } [js_test:auth] 2015-10-13T18:48:50.560-0400 s20264| 2015-10-13T18:48:50.558-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:48:50.560-0400 s20264| 2015-10-13T18:48:50.559-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776530000|3, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.561-0400 s20264| 2015-10-13T18:48:50.559-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:48:50.561-0400 s20264| 2015-10-13T18:48:50.560-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "tags", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776530000|3, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.561-0400 s20264| 2015-10-13T18:48:50.560-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:50.561-0400 s20264| 2015-10-13T18:48:50.560-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776530000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.561-0400 
s20264| 2015-10-13T18:48:50.560-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:48:50.562-0400 s20264| 2015-10-13T18:48:50.561-0400 D SHARDING [Balancer] collection : test.foo [js_test:auth] 2015-10-13T18:48:50.562-0400 s20264| 2015-10-13T18:48:50.561-0400 D SHARDING [Balancer] donor : d1 chunks on 4 [js_test:auth] 2015-10-13T18:48:50.562-0400 s20264| 2015-10-13T18:48:50.561-0400 D SHARDING [Balancer] receiver : d2 chunks on 0 [js_test:auth] 2015-10-13T18:48:50.562-0400 s20264| 2015-10-13T18:48:50.561-0400 D SHARDING [Balancer] threshold : 2 [js_test:auth] 2015-10-13T18:48:50.562-0400 s20264| 2015-10-13T18:48:50.561-0400 I SHARDING [Balancer] ns: test.foo going to move { _id: "test.foo-x_MinKey", ns: "test.foo", min: { x: MinKey }, max: { x: 1.0 }, shard: "d1", version: Timestamp 1000|1, versionEpoch: ObjectId('561d8a03c06b51335e5d6897'), lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897') } from: d1 to: d2 tag [] [js_test:auth] 2015-10-13T18:48:50.563-0400 s20264| 2015-10-13T18:48:50.561-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776530000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.563-0400 s20264| 2015-10-13T18:48:50.561-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:48:50.563-0400 s20264| 2015-10-13T18:48:50.561-0400 I SHARDING [Balancer] moving chunk ns: test.foo moving ( ns: test.foo, shard: d1, lastmod: 1|1||561d8a03c06b51335e5d6897, min: { x: MinKey }, max: { x: 1.0 }) d1 -> d2 [js_test:auth] 2015-10-13T18:48:50.563-0400 c20261| 2015-10-13T18:48:50.562-0400 I NETWORK [initandlisten] connection accepted from 127.0.0.1:53808 #17 (10 connections now open) [js_test:auth] 2015-10-13T18:48:50.581-0400 c20261| 
2015-10-13T18:48:50.580-0400 I ACCESS [conn17] Successfully authenticated as principal __system on local [js_test:auth] 2015-10-13T18:48:50.581-0400 d20266| 2015-10-13T18:48:50.581-0400 I ASIO [NetworkInterfaceASIO] Successfully connected to ubuntu:20261 [js_test:auth] 2015-10-13T18:48:50.581-0400 d20266| 2015-10-13T18:48:50.581-0400 I SHARDING [conn16] moveChunk waiting for full cleanup after move [js_test:auth] 2015-10-13T18:48:50.582-0400 d20266| 2015-10-13T18:48:50.581-0400 I SHARDING [conn16] received moveChunk request: { moveChunk: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", to: "d2/ubuntu:20268,ubuntu:20269,ubuntu:20270", fromShard: "d1", toShard: "d2", min: { x: MinKey }, max: { x: 1.0 }, maxChunkSizeBytes: 1048576, configdb: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", secondaryThrottle: false, waitForDelete: true, maxTimeMS: 0, shardVersion: [ Timestamp 1000|4, ObjectId('561d8a03c06b51335e5d6897') ], epoch: ObjectId('561d8a03c06b51335e5d6897') } [js_test:auth] 2015-10-13T18:48:50.583-0400 d20266| 2015-10-13T18:48:50.583-0400 W SHARDING [conn16] could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo [js_test:auth] 2015-10-13T18:48:50.583-0400 d20266| 2015-10-13T18:48:50.583-0400 I SHARDING [conn16] about to log metadata event: { _id: "ubuntu-2015-10-13T18:48:50.583-0400-561d8a52bcc93d4b7b68fb0a", server: "ubuntu", clientAddr: "127.0.0.1:38631", time: new Date(1444776530583), what: "moveChunk.from", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 1.0 }, step 1 of 6: 0, to: "d2", from: "d1", note: "aborted" } } [js_test:auth] 2015-10-13T18:48:50.600-0400 s20264| 2015-10-13T18:48:50.600-0400 I SHARDING [Balancer] moveChunk result: { ok: 0.0, errmsg: "could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo", code: 46 } [js_test:auth] 
2015-10-13T18:48:50.600-0400 s20264| 2015-10-13T18:48:50.600-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776530000|4, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.600-0400 s20264| 2015-10-13T18:48:50.600-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:48:50.600-0400 s20264| 2015-10-13T18:48:50.600-0400 I SHARDING [Balancer] balancer move failed: { ok: 0.0, errmsg: "could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo", code: 46 } from: d1 to: d2 chunk: min: { x: MinKey } max: { x: 1.0 } [js_test:auth] 2015-10-13T18:48:50.600-0400 s20264| 2015-10-13T18:48:50.600-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:20.600-0400 cmd:{ insert: "actionlog", documents: [ { _id: ObjectId('561d8a52c06b51335e5d68a6'), server: "ubuntu", what: "balancer.round", time: new Date(1444776530600), details: { executionTimeMillis: 92, errorOccured: false, candidateChunks: 1, chunksMoved: 0 } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.601-0400 s20264| 2015-10-13T18:48:50.600-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:50.629-0400 s20264| 2015-10-13T18:48:50.629-0400 D SHARDING [Balancer] *** end of balancing round [js_test:auth] 2015-10-13T18:48:50.629-0400 s20264| 2015-10-13T18:48:50.629-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:20.629-0400 cmd:{ findAndModify: "locks", query: { ts: ObjectId('561d8a52c06b51335e5d68a5') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", 
wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.630-0400 s20264| 2015-10-13T18:48:50.629-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:50.653-0400 s20264| 2015-10-13T18:48:50.653-0400 I SHARDING [Balancer] distributed lock with ts: 561d8a52c06b51335e5d68a5' unlocked. [js_test:auth] 2015-10-13T18:48:50.653-0400 s20264| 2015-10-13T18:48:50.653-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:20.653-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776530653), up: 103, waiting: true, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:48:50.654-0400 s20264| 2015-10-13T18:48:50.653-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:48:51.825-0400 d20266| 2015-10-13T18:48:51.825-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:51.825-0400 d20266| 2015-10-13T18:48:51.825-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:51.826-0400 d20266| 2015-10-13T18:48:51.825-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:51.826-0400 d20266| 2015-10-13T18:48:51.825-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:51.826-0400 d20267| 2015-10-13T18:48:51.825-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:51.826-0400 d20266| 2015-10-13T18:48:51.826-0400 
I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:51.826-0400 d20267| 2015-10-13T18:48:51.826-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:51.826-0400 d20266| 2015-10-13T18:48:51.826-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:51.826-0400 d20267| 2015-10-13T18:48:51.826-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:51.826-0400 d20267| 2015-10-13T18:48:51.826-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:51.827-0400 d20267| 2015-10-13T18:48:51.826-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:51.827-0400 d20267| 2015-10-13T18:48:51.826-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.354-0400 s20264| 2015-10-13T18:48:53.354-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } } [js_test:auth] 2015-10-13T18:48:53.354-0400 s20264| 2015-10-13T18:48:53.354-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:53.354-0400 s20264| 2015-10-13T18:48:53.354-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 
2015-10-13T18:48:53.355-0400 s20264| 2015-10-13T18:48:53.354-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:53.355-0400 s20264| 2015-10-13T18:48:53.354-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:53.355-0400 s20264| 2015-10-13T18:48:53.354-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:53.355-0400 s20264| 2015-10-13T18:48:53.354-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:53.355-0400 s20264| 2015-10-13T18:48:53.354-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:53.355-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d2" } } [js_test:auth] 2015-10-13T18:48:53.355-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] initializing over 1 shards 
required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:53.355-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:53.356-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:53.356-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:53.356-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:53.356-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:53.356-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { 
v_ns: "config.chunks", filter: { ns: "test.foo" } } [js_test:auth] 2015-10-13T18:48:53.356-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:53.356-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:53.357-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:53.357-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:53.357-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:53.357-0400 s20264| 2015-10-13T18:48:53.355-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:53.358-0400 chunks: 4 0 4 [js_test:auth] 
2015-10-13T18:48:53.827-0400 d20266| 2015-10-13T18:48:53.827-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.827-0400 d20267| 2015-10-13T18:48:53.827-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.827-0400 d20266| 2015-10-13T18:48:53.827-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.828-0400 d20267| 2015-10-13T18:48:53.827-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.828-0400 d20266| 2015-10-13T18:48:53.827-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.828-0400 d20266| 2015-10-13T18:48:53.827-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.828-0400 d20267| 2015-10-13T18:48:53.827-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.829-0400 d20267| 2015-10-13T18:48:53.827-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.829-0400 d20266| 2015-10-13T18:48:53.827-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.829-0400 d20266| 2015-10-13T18:48:53.827-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.829-0400 d20267| 2015-10-13T18:48:53.827-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - 
HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:53.829-0400 d20267| 2015-10-13T18:48:53.827-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.828-0400 d20266| 2015-10-13T18:48:55.827-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.828-0400 d20266| 2015-10-13T18:48:55.827-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.829-0400 d20267| 2015-10-13T18:48:55.828-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.829-0400 d20266| 2015-10-13T18:48:55.828-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.829-0400 d20266| 2015-10-13T18:48:55.828-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.829-0400 d20267| 2015-10-13T18:48:55.828-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.830-0400 d20266| 2015-10-13T18:48:55.828-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.830-0400 d20266| 2015-10-13T18:48:55.828-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.830-0400 d20267| 2015-10-13T18:48:55.828-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.831-0400 d20267| 2015-10-13T18:48:55.828-0400 I REPL [ReplicationExecutor] 
Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.831-0400 d20267| 2015-10-13T18:48:55.828-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:55.831-0400 d20267| 2015-10-13T18:48:55.828-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.829-0400 d20267| 2015-10-13T18:48:57.829-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.829-0400 d20267| 2015-10-13T18:48:57.829-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.829-0400 d20266| 2015-10-13T18:48:57.829-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.829-0400 d20267| 2015-10-13T18:48:57.829-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.829-0400 d20267| 2015-10-13T18:48:57.829-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.829-0400 d20267| 2015-10-13T18:48:57.829-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.830-0400 d20267| 2015-10-13T18:48:57.829-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.830-0400 d20266| 2015-10-13T18:48:57.830-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.830-0400 d20266| 
2015-10-13T18:48:57.830-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.830-0400 d20266| 2015-10-13T18:48:57.830-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.830-0400 d20266| 2015-10-13T18:48:57.830-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:57.830-0400 d20266| 2015-10-13T18:48:57.830-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:58.101-0400 c20260| 2015-10-13T18:48:58.100-0400 I NETWORK [conn20] end connection 127.0.0.1:49633 (15 connections now open) [js_test:auth] 2015-10-13T18:48:58.219-0400 d20268| 2015-10-13T18:48:58.219-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:48:58.256-0400 s20264| 2015-10-13T18:48:58.256-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host :27017 [js_test:auth] 2015-10-13T18:48:58.256-0400 s20264| 2015-10-13T18:48:58.256-0400 D ASIO [NetworkInterfaceASIO] failed to close stream: Transport endpoint is not connected [js_test:auth] 2015-10-13T18:48:58.356-0400 s20264| 2015-10-13T18:48:58.356-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d1" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "d1" } } [js_test:auth] 2015-10-13T18:48:58.356-0400 s20264| 2015-10-13T18:48:58.356-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:58.356-0400 s20264| 2015-10-13T18:48:58.356-0400 D NETWORK [conn1] 
initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:58.356-0400 s20264| 2015-10-13T18:48:58.356-0400 D NETWORK [conn1] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:58.357-0400 s20264| 2015-10-13T18:48:58.356-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:58.357-0400 s20264| 2015-10-13T18:48:58.356-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:58.357-0400 s20264| 2015-10-13T18:48:58.356-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:58.357-0400 s20264| 2015-10-13T18:48:58.356-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:58.357-0400 s20264| 2015-10-13T18:48:58.356-0400 D NETWORK [conn1] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "d2" } }, fields: {} } and CInfo { v_ns: "config.chunks", 
filter: { shard: "d2" } } [js_test:auth] 2015-10-13T18:48:58.357-0400 s20264| 2015-10-13T18:48:58.356-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:58.357-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:58.358-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:58.358-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:58.358-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:58.358-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:58.358-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] creating 
pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { ns: "test.foo" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { ns: "test.foo" } } [js_test:auth] 2015-10-13T18:48:58.358-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] initializing over 1 shards required by [unsharded @ config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262] [js_test:auth] 2015-10-13T18:48:58.358-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] initializing on shard config, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:58.359-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] initialized command (lazily) on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:58.359-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] finishing over 1 shards [js_test:auth] 2015-10-13T18:48:58.360-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] finishing on shard config, current connection state is { state: { conn: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } [js_test:auth] 2015-10-13T18:48:58.360-0400 s20264| 2015-10-13T18:48:58.357-0400 D NETWORK [conn1] finished on shard config, current connection state is { state: { conn: "(done)", vinfo: "config:auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", cursor: { waitedMS: 0, n: 4, ok: 1.0, $gleStats: { lastOpTime: Timestamp 0|0, electionId: ObjectId('561d89e30000000000000001') } }, count: 0, done: 
false }, retryNext: false, init: true, finish: true, errored: false } [js_test:auth] 2015-10-13T18:48:58.360-0400 chunks: 4 0 4 [js_test:auth] 2015-10-13T18:48:58.360-0400 assert.soon failed, msg:Chunks failed to balance [js_test:auth] 2015-10-13T18:48:58.360-0400 doassert@src/mongo/shell/assert.js:15:14 [js_test:auth] 2015-10-13T18:48:58.360-0400 assert.soon@src/mongo/shell/assert.js:194:13 [js_test:auth] 2015-10-13T18:48:58.360-0400 @jstests/sharding/auth.js:192:1 [js_test:auth] 2015-10-13T18:48:58.361-0400 @jstests/sharding/auth.js:3:2 [js_test:auth] 2015-10-13T18:48:58.361-0400 [js_test:auth] 2015-10-13T18:48:58.361-0400 2015-10-13T18:48:58.357-0400 E QUERY [thread1] Error: assert.soon failed, msg:Chunks failed to balance : [js_test:auth] 2015-10-13T18:48:58.361-0400 doassert@src/mongo/shell/assert.js:15:14 [js_test:auth] 2015-10-13T18:48:58.361-0400 assert.soon@src/mongo/shell/assert.js:194:13 [js_test:auth] 2015-10-13T18:48:58.361-0400 @jstests/sharding/auth.js:192:1 [js_test:auth] 2015-10-13T18:48:58.361-0400 @jstests/sharding/auth.js:3:2 [js_test:auth] 2015-10-13T18:48:58.361-0400 [js_test:auth] 2015-10-13T18:48:58.362-0400 failed to load: jstests/sharding/auth.js [js_test:auth] 2015-10-13T18:48:58.362-0400 d20270| 2015-10-13T18:48:58.357-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends [js_test:auth] 2015-10-13T18:48:58.362-0400 d20270| 2015-10-13T18:48:58.357-0400 I FTDC [signalProcessingThread] Stopping full-time diagnostic data capture [js_test:auth] 2015-10-13T18:48:58.362-0400 d20270| 2015-10-13T18:48:58.359-0400 I REPL [signalProcessingThread] Stopping replication applier threads [js_test:auth] 2015-10-13T18:48:58.629-0400 d20270| 2015-10-13T18:48:58.628-0400 I STORAGE [conn3] got request after shutdown() [js_test:auth] 2015-10-13T18:48:58.629-0400 d20268| 2015-10-13T18:48:58.629-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable End of file 
[js_test:auth] 2015-10-13T18:48:58.765-0400 d20270| 2015-10-13T18:48:58.765-0400 I EXECUTOR [rsBackgroundSync] killCursors command failed: CallbackCanceled Callback canceled [js_test:auth] 2015-10-13T18:48:58.778-0400 d20270| 2015-10-13T18:48:58.778-0400 I CONTROL [signalProcessingThread] now exiting [js_test:auth] 2015-10-13T18:48:58.779-0400 d20270| 2015-10-13T18:48:58.778-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... [js_test:auth] 2015-10-13T18:48:58.779-0400 d20270| 2015-10-13T18:48:58.778-0400 I NETWORK [signalProcessingThread] closing listening socket: 43 [js_test:auth] 2015-10-13T18:48:58.779-0400 d20270| 2015-10-13T18:48:58.778-0400 I NETWORK [signalProcessingThread] closing listening socket: 44 [js_test:auth] 2015-10-13T18:48:58.779-0400 d20270| 2015-10-13T18:48:58.778-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20270.sock [js_test:auth] 2015-10-13T18:48:58.779-0400 d20270| 2015-10-13T18:48:58.778-0400 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... [js_test:auth] 2015-10-13T18:48:58.779-0400 d20268| 2015-10-13T18:48:58.778-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20270 - HostUnreachable Connection reset by peer [js_test:auth] 2015-10-13T18:48:58.779-0400 d20270| 2015-10-13T18:48:58.778-0400 I NETWORK [signalProcessingThread] shutdown: going to close sockets... 
[js_test:auth] 2015-10-13T18:48:58.779-0400 d20270| 2015-10-13T18:48:58.778-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down [js_test:auth] 2015-10-13T18:48:58.779-0400 d20268| 2015-10-13T18:48:58.778-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable Connection reset by peer [js_test:auth] 2015-10-13T18:48:58.780-0400 d20270| 2015-10-13T18:48:58.778-0400 I NETWORK [conn7] end connection 127.0.0.1:50963 (4 connections now open) [js_test:auth] 2015-10-13T18:48:58.780-0400 d20270| 2015-10-13T18:48:58.778-0400 I NETWORK [conn6] end connection 127.0.0.1:50266 (4 connections now open) [js_test:auth] 2015-10-13T18:48:58.780-0400 d20270| 2015-10-13T18:48:58.779-0400 I NETWORK [conn9] end connection 127.0.0.1:52391 (4 connections now open) [js_test:auth] 2015-10-13T18:48:58.780-0400 d20268| 2015-10-13T18:48:58.779-0400 I NETWORK [conn13] end connection 127.0.0.1:54375 (11 connections now open) [js_test:auth] 2015-10-13T18:48:58.780-0400 d20270| 2015-10-13T18:48:58.779-0400 I NETWORK [conn1] end connection 127.0.0.1:60988 (3 connections now open) [js_test:auth] 2015-10-13T18:48:58.780-0400 d20270| 2015-10-13T18:48:58.779-0400 I NETWORK [conn10] end connection 127.0.0.1:53957 (2 connections now open) [js_test:auth] 2015-10-13T18:48:58.780-0400 d20268| 2015-10-13T18:48:58.779-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20270 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:58.780-0400 d20268| 2015-10-13T18:48:58.779-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:58.845-0400 d20269| 2015-10-13T18:48:58.845-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable End of file [js_test:auth] 2015-10-13T18:48:58.845-0400 d20269| 2015-10-13T18:48:58.845-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20270 - HostUnreachable 
Connection refused [js_test:auth] 2015-10-13T18:48:58.845-0400 d20269| 2015-10-13T18:48:58.845-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:58.847-0400 d20269| 2015-10-13T18:48:58.847-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20270 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:58.847-0400 d20269| 2015-10-13T18:48:58.847-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:59.221-0400 d20270| 2015-10-13T18:48:59.221-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock... [js_test:auth] 2015-10-13T18:48:59.221-0400 d20270| 2015-10-13T18:48:59.221-0400 I CONTROL [signalProcessingThread] dbexit: rc: 0 [js_test:auth] 2015-10-13T18:48:59.229-0400 d20268| 2015-10-13T18:48:59.229-0400 I NETWORK [conn4] end connection 127.0.0.1:53808 (10 connections now open) [js_test:auth] 2015-10-13T18:48:59.229-0400 d20269| 2015-10-13T18:48:59.229-0400 I NETWORK [conn6] end connection 127.0.0.1:47896 (5 connections now open) [js_test:auth] 2015-10-13T18:48:59.229-0400 d20268| 2015-10-13T18:48:59.229-0400 I NETWORK [conn14] end connection 127.0.0.1:54376 (10 connections now open) [js_test:auth] 2015-10-13T18:48:59.358-0400 d20269| 2015-10-13T18:48:59.358-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends [js_test:auth] 2015-10-13T18:48:59.358-0400 d20269| 2015-10-13T18:48:59.358-0400 I FTDC [signalProcessingThread] Stopping full-time diagnostic data capture [js_test:auth] 2015-10-13T18:48:59.360-0400 d20269| 2015-10-13T18:48:59.360-0400 I REPL [signalProcessingThread] Stopping replication applier threads [js_test:auth] 2015-10-13T18:48:59.580-0400 s20264| 2015-10-13T18:48:59.580-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d1 [js_test:auth] 2015-10-13T18:48:59.580-0400 
s20264| 2015-10-13T18:48:59.580-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d1 [js_test:auth] 2015-10-13T18:48:59.580-0400 s20264| 2015-10-13T18:48:59.580-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20266, no events [js_test:auth] 2015-10-13T18:48:59.581-0400 s20264| 2015-10-13T18:48:59.580-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20267, no events [js_test:auth] 2015-10-13T18:48:59.581-0400 s20264| 2015-10-13T18:48:59.581-0400 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:ubuntu:20265 [js_test:auth] 2015-10-13T18:48:59.581-0400 s20264| 2015-10-13T18:48:59.581-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:48:59.581-0400 s20264| 2015-10-13T18:48:59.581-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:48:59.581-0400 s20264| 2015-10-13T18:48:59.581-0400 D - [ReplicaSetMonitorWatcher] User Assertion: 13328:connection pool: connect failed ubuntu:20265 : couldn't connect to server ubuntu:20265, connection attempt failed [js_test:auth] 2015-10-13T18:48:59.581-0400 s20264| 2015-10-13T18:48:59.581-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: auth-configRS [js_test:auth] 2015-10-13T18:48:59.581-0400 s20264| 2015-10-13T18:48:59.581-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set auth-configRS [js_test:auth] 2015-10-13T18:48:59.581-0400 s20264| 2015-10-13T18:48:59.581-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20260, no events [js_test:auth] 2015-10-13T18:48:59.581-0400 s20264| 2015-10-13T18:48:59.581-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20261, no events [js_test:auth] 2015-10-13T18:48:59.582-0400 s20264| 2015-10-13T18:48:59.581-0400 D NETWORK 
[ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20262, no events [js_test:auth] 2015-10-13T18:48:59.582-0400 s20264| 2015-10-13T18:48:59.582-0400 D NETWORK [ReplicaSetMonitorWatcher] checking replica set: d2 [js_test:auth] 2015-10-13T18:48:59.582-0400 s20264| 2015-10-13T18:48:59.582-0400 D NETWORK [ReplicaSetMonitorWatcher] Starting new refresh of replica set d2 [js_test:auth] 2015-10-13T18:48:59.582-0400 s20264| 2015-10-13T18:48:59.582-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20268, no events [js_test:auth] 2015-10-13T18:48:59.582-0400 s20264| 2015-10-13T18:48:59.582-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20270, event detected [js_test:auth] 2015-10-13T18:48:59.582-0400 s20264| 2015-10-13T18:48:59.582-0400 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 10 secs, remote host 127.0.1.1:20270) [js_test:auth] 2015-10-13T18:48:59.582-0400 s20264| 2015-10-13T18:48:59.582-0400 D NETWORK [ReplicaSetMonitorWatcher] creating new connection to:ubuntu:20270 [js_test:auth] 2015-10-13T18:48:59.582-0400 s20264| 2015-10-13T18:48:59.582-0400 D COMMAND [ConnectBG] BackgroundJob starting: ConnectBG [js_test:auth] 2015-10-13T18:48:59.582-0400 s20264| 2015-10-13T18:48:59.582-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20270, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:48:59.583-0400 s20264| 2015-10-13T18:48:59.582-0400 D - [ReplicaSetMonitorWatcher] User Assertion: 13328:connection pool: connect failed ubuntu:20270 : couldn't connect to server ubuntu:20270, connection attempt failed [js_test:auth] 2015-10-13T18:48:59.583-0400 s20264| 2015-10-13T18:48:59.582-0400 D NETWORK [ReplicaSetMonitorWatcher] polling for status of connection to 127.0.1.1:20269, no events [js_test:auth] 2015-10-13T18:48:59.583-0400 d20269| 2015-10-13T18:48:59.582-0400 I STORAGE [conn7] got request after 
shutdown() [js_test:auth] 2015-10-13T18:48:59.583-0400 s20264| 2015-10-13T18:48:59.583-0400 D NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 127.0.1.1:20269 error: 9001 socket exception [CLOSED] server [127.0.1.1:20269] [js_test:auth] 2015-10-13T18:48:59.583-0400 s20264| 2015-10-13T18:48:59.583-0400 D - [ReplicaSetMonitorWatcher] User Assertion: 6:network error while attempting to run command 'ismaster' on host 'ubuntu:20269' [js_test:auth] 2015-10-13T18:48:59.583-0400 s20264| 2015-10-13T18:48:59.583-0400 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1444776467817187 microSec, clearing pool for ubuntu:20269 of 0 connections [js_test:auth] 2015-10-13T18:48:59.829-0400 d20267| 2015-10-13T18:48:59.829-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:59.830-0400 d20267| 2015-10-13T18:48:59.829-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:59.830-0400 d20267| 2015-10-13T18:48:59.829-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:59.830-0400 d20267| 2015-10-13T18:48:59.830-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:59.830-0400 d20267| 2015-10-13T18:48:59.830-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:59.830-0400 d20267| 2015-10-13T18:48:59.830-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:59.830-0400 d20266| 2015-10-13T18:48:59.830-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 
2015-10-13T18:48:59.830-0400 d20266| 2015-10-13T18:48:59.830-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:59.831-0400 d20266| 2015-10-13T18:48:59.831-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:59.831-0400 d20266| 2015-10-13T18:48:59.831-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:59.831-0400 d20266| 2015-10-13T18:48:59.831-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:48:59.831-0400 d20266| 2015-10-13T18:48:59.831-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:00.159-0400 d20266| 2015-10-13T18:49:00.159-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20265, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:49:00.160-0400 d20266| 2015-10-13T18:49:00.160-0400 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 10 secs, remote host 127.0.1.1:20270) [js_test:auth] 2015-10-13T18:49:00.160-0400 d20266| 2015-10-13T18:49:00.160-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20270, reason: errno:111 Connection refused [js_test:auth] 2015-10-13T18:49:00.160-0400 d20269| 2015-10-13T18:49:00.160-0400 I STORAGE [conn10] got request after shutdown() [js_test:auth] 2015-10-13T18:49:00.161-0400 d20266| 2015-10-13T18:49:00.161-0400 I NETWORK [ReplicaSetMonitorWatcher] Detected bad connection created at 1444776510135312 microSec, clearing pool for ubuntu:20269 of 0 connections [js_test:auth] 2015-10-13T18:49:00.629-0400 d20269| 2015-10-13T18:49:00.629-0400 I STORAGE [conn3] got request after shutdown() 
[js_test:auth] 2015-10-13T18:49:00.629-0400 d20268| 2015-10-13T18:49:00.629-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20269; HostUnreachable End of file [js_test:auth] 2015-10-13T18:49:00.674-0400 s20264| 2015-10-13T18:49:00.673-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:30.673-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776540673), up: 113, waiting: false, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.674-0400 s20264| 2015-10-13T18:49:00.674-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:49:00.692-0400 s20264| 2015-10-13T18:49:00.691-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776540000|1, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.692-0400 s20264| 2015-10-13T18:49:00.691-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:49:00.692-0400 s20264| 2015-10-13T18:49:00.692-0400 D SHARDING [Balancer] found 2 shards listed on config server(s) [js_test:auth] 2015-10-13T18:49:00.693-0400 s20264| 2015-10-13T18:49:00.692-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776540000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.693-0400 s20264| 2015-10-13T18:49:00.692-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:49:00.693-0400 s20264| 2015-10-13T18:49:00.693-0400 D SHARDING 
[Balancer] Refreshing MaxChunkSize: 1MB [js_test:auth] 2015-10-13T18:49:00.693-0400 s20264| 2015-10-13T18:49:00.693-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776540000|1, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.693-0400 s20264| 2015-10-13T18:49:00.693-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:49:00.694-0400 s20264| 2015-10-13T18:49:00.693-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:30.693-0400 cmd:{ features: 1 } [js_test:auth] 2015-10-13T18:49:00.694-0400 s20264| 2015-10-13T18:49:00.693-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:49:00.694-0400 s20264| 2015-10-13T18:49:00.693-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:49:30.693-0400 cmd:{ features: 1 } [js_test:auth] 2015-10-13T18:49:00.694-0400 s20264| 2015-10-13T18:49:00.694-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:49:00.694-0400 s20264| 2015-10-13T18:49:00.694-0400 D SHARDING [Balancer] trying to acquire new distributed lock for balancer ( lock timeout : 900000 ms, ping interval : 30000 ms, process : ubuntu:20264:1444776427:399327856 ) with lockSessionID: 561d8a5cc06b51335e5d68a7, why: doing balance round [js_test:auth] 2015-10-13T18:49:00.694-0400 s20264| 2015-10-13T18:49:00.694-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:30.694-0400 cmd:{ findAndModify: "locks", query: { _id: "balancer", state: 0 }, update: { $set: { ts: ObjectId('561d8a5cc06b51335e5d68a7'), state: 2, who: 
"ubuntu:20264:1444776427:399327856:Balancer", process: "ubuntu:20264:1444776427:399327856", when: new Date(1444776540694), why: "doing balance round" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.695-0400 s20264| 2015-10-13T18:49:00.694-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:49:00.713-0400 s20264| 2015-10-13T18:49:00.712-0400 I SHARDING [Balancer] distributed lock 'balancer' acquired for 'doing balance round', ts : 561d8a5cc06b51335e5d68a7 [js_test:auth] 2015-10-13T18:49:00.714-0400 s20264| 2015-10-13T18:49:00.712-0400 D SHARDING [Balancer] *** start balancing round. waitForDelete: 1, secondaryThrottle: { w: 1, wtimeout: 0 } [js_test:auth] 2015-10-13T18:49:00.714-0400 s20264| 2015-10-13T18:49:00.712-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "collections", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776540000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.714-0400 s20264| 2015-10-13T18:49:00.713-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:49:00.715-0400 s20264| 2015-10-13T18:49:00.713-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776540000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.715-0400 s20264| 2015-10-13T18:49:00.714-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:49:00.715-0400 s20264| 2015-10-13T18:49:00.714-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:30.714-0400 cmd:{ listDatabases: 1 } [js_test:auth] 2015-10-13T18:49:00.715-0400 s20264| 
2015-10-13T18:49:00.714-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:49:00.716-0400 s20264| 2015-10-13T18:49:00.715-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20266 db:admin expDate:2015-10-13T18:49:30.715-0400 cmd:{ serverStatus: 1 } [js_test:auth] 2015-10-13T18:49:00.716-0400 s20264| 2015-10-13T18:49:00.715-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20266 [js_test:auth] 2015-10-13T18:49:00.716-0400 s20264| 2015-10-13T18:49:00.716-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:49:30.716-0400 cmd:{ listDatabases: 1 } [js_test:auth] 2015-10-13T18:49:00.716-0400 s20264| 2015-10-13T18:49:00.716-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:49:00.717-0400 s20264| 2015-10-13T18:49:00.717-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20268 db:admin expDate:2015-10-13T18:49:30.717-0400 cmd:{ serverStatus: 1 } [js_test:auth] 2015-10-13T18:49:00.717-0400 s20264| 2015-10-13T18:49:00.717-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20268 [js_test:auth] 2015-10-13T18:49:00.718-0400 s20264| 2015-10-13T18:49:00.718-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776540000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.718-0400 s20264| 2015-10-13T18:49:00.718-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:49:00.719-0400 s20264| 2015-10-13T18:49:00.719-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "tags", filter: { ns: "test.foo" }, sort: { min: 1 }, readConcern: { 
level: "majority", afterOpTime: { ts: Timestamp 1444776540000|2, t: 1 } }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.719-0400 s20264| 2015-10-13T18:49:00.719-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:49:00.719-0400 s20264| 2015-10-13T18:49:00.719-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20261 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776540000|2, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.720-0400 s20264| 2015-10-13T18:49:00.719-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20261 [js_test:auth] 2015-10-13T18:49:00.720-0400 s20264| 2015-10-13T18:49:00.720-0400 D SHARDING [Balancer] collection : test.foo [js_test:auth] 2015-10-13T18:49:00.720-0400 s20264| 2015-10-13T18:49:00.720-0400 D SHARDING [Balancer] donor : d1 chunks on 4 [js_test:auth] 2015-10-13T18:49:00.720-0400 s20264| 2015-10-13T18:49:00.720-0400 D SHARDING [Balancer] receiver : d2 chunks on 0 [js_test:auth] 2015-10-13T18:49:00.720-0400 s20264| 2015-10-13T18:49:00.720-0400 D SHARDING [Balancer] threshold : 2 [js_test:auth] 2015-10-13T18:49:00.720-0400 s20264| 2015-10-13T18:49:00.720-0400 I SHARDING [Balancer] ns: test.foo going to move { _id: "test.foo-x_MinKey", ns: "test.foo", min: { x: MinKey }, max: { x: 1.0 }, shard: "d1", version: Timestamp 1000|1, versionEpoch: ObjectId('561d8a03c06b51335e5d6897'), lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('561d8a03c06b51335e5d6897') } from: d1 to: d2 tag [] [js_test:auth] 2015-10-13T18:49:00.721-0400 s20264| 2015-10-13T18:49:00.720-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20262 db:config cmd:{ find: "settings", filter: { _id: "balancer" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776540000|2, t: 1 } }, limit: 1, maxTimeMS: 
30000 } [js_test:auth] 2015-10-13T18:49:00.721-0400 s20264| 2015-10-13T18:49:00.720-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20262 [js_test:auth] 2015-10-13T18:49:00.721-0400 s20264| 2015-10-13T18:49:00.720-0400 I SHARDING [Balancer] moving chunk ns: test.foo moving ( ns: test.foo, shard: d1, lastmod: 1|1||561d8a03c06b51335e5d6897, min: { x: MinKey }, max: { x: 1.0 }) d1 -> d2 [js_test:auth] 2015-10-13T18:49:00.721-0400 d20266| 2015-10-13T18:49:00.721-0400 I SHARDING [conn16] moveChunk waiting for full cleanup after move [js_test:auth] 2015-10-13T18:49:00.722-0400 d20266| 2015-10-13T18:49:00.721-0400 I SHARDING [conn16] received moveChunk request: { moveChunk: "test.foo", from: "d1/ubuntu:20265,ubuntu:20266,ubuntu:20267", to: "d2/ubuntu:20268,ubuntu:20269,ubuntu:20270", fromShard: "d1", toShard: "d2", min: { x: MinKey }, max: { x: 1.0 }, maxChunkSizeBytes: 1048576, configdb: "auth-configRS/ubuntu:20260,ubuntu:20261,ubuntu:20262", secondaryThrottle: false, waitForDelete: true, maxTimeMS: 0, shardVersion: [ Timestamp 1000|4, ObjectId('561d8a03c06b51335e5d6897') ], epoch: ObjectId('561d8a03c06b51335e5d6897') } [js_test:auth] 2015-10-13T18:49:00.724-0400 d20266| 2015-10-13T18:49:00.724-0400 W SHARDING [conn16] could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo [js_test:auth] 2015-10-13T18:49:00.724-0400 d20266| 2015-10-13T18:49:00.724-0400 I SHARDING [conn16] about to log metadata event: { _id: "ubuntu-2015-10-13T18:49:00.724-0400-561d8a5cbcc93d4b7b68fb0c", server: "ubuntu", clientAddr: "127.0.0.1:38631", time: new Date(1444776540724), what: "moveChunk.from", ns: "test.foo", details: { min: { x: MinKey }, max: { x: 1.0 }, step 1 of 6: 0, to: "d2", from: "d1", note: "aborted" } } [js_test:auth] 2015-10-13T18:49:00.739-0400 s20264| 2015-10-13T18:49:00.739-0400 I SHARDING [Balancer] moveChunk result: { ok: 0.0, errmsg: "could not acquire collection 
lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo", code: 46 } [js_test:auth] 2015-10-13T18:49:00.740-0400 s20264| 2015-10-13T18:49:00.739-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config cmd:{ find: "chunks", filter: { ns: "test.foo" }, sort: { lastmod: -1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp 1444776540000|3, t: 1 } }, limit: 1, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.740-0400 s20264| 2015-10-13T18:49:00.739-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:49:00.741-0400 s20264| 2015-10-13T18:49:00.740-0400 I SHARDING [Balancer] balancer move failed: { ok: 0.0, errmsg: "could not acquire collection lock for test.foo to migrate chunk [{ x: MinKey },{ x: 1.0 }) :: caused by :: timed out waiting for test.foo", code: 46 } from: d1 to: d2 chunk: min: { x: MinKey } max: { x: 1.0 } [js_test:auth] 2015-10-13T18:49:00.741-0400 s20264| 2015-10-13T18:49:00.740-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:30.740-0400 cmd:{ insert: "actionlog", documents: [ { _id: ObjectId('561d8a5cc06b51335e5d68a8'), server: "ubuntu", what: "balancer.round", time: new Date(1444776540740), details: { executionTimeMillis: 66, errorOccured: false, candidateChunks: 1, chunksMoved: 0 } } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.741-0400 s20264| 2015-10-13T18:49:00.740-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:49:00.763-0400 s20264| 2015-10-13T18:49:00.762-0400 D SHARDING [Balancer] *** end of balancing round [js_test:auth] 2015-10-13T18:49:00.763-0400 s20264| 2015-10-13T18:49:00.762-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:30.762-0400 cmd:{ 
findAndModify: "locks", query: { ts: ObjectId('561d8a5cc06b51335e5d68a7') }, update: { $set: { state: 0 } }, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.763-0400 s20264| 2015-10-13T18:49:00.763-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:49:00.780-0400 d20268| 2015-10-13T18:49:00.780-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20270 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:00.780-0400 d20268| 2015-10-13T18:49:00.780-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:00.780-0400 d20268| 2015-10-13T18:49:00.780-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20270 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:00.780-0400 d20268| 2015-10-13T18:49:00.780-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:00.781-0400 d20268| 2015-10-13T18:49:00.780-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20270 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:00.781-0400 d20268| 2015-10-13T18:49:00.780-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:00.792-0400 s20264| 2015-10-13T18:49:00.792-0400 I SHARDING [Balancer] distributed lock with ts: 561d8a5cc06b51335e5d68a7' unlocked. 
[js_test:auth] 2015-10-13T18:49:00.792-0400 s20264| 2015-10-13T18:49:00.792-0400 D ASIO [Balancer] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:30.792-0400 cmd:{ update: "mongos", updates: [ { q: { _id: "ubuntu:20264" }, u: { $set: { _id: "ubuntu:20264", ping: new Date(1444776540792), up: 113, waiting: true, mongoVersion: "3.1.10-pre-" } }, multi: false, upsert: true } ], writeConcern: { w: "majority" }, maxTimeMS: 30000 } [js_test:auth] 2015-10-13T18:49:00.793-0400 s20264| 2015-10-13T18:49:00.792-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260 [js_test:auth] 2015-10-13T18:49:01.265-0400 d20269| 2015-10-13T18:49:01.265-0400 I EXECUTOR [rsBackgroundSync] killCursors command failed: CallbackCanceled Callback canceled [js_test:auth] 2015-10-13T18:49:01.266-0400 d20269| 2015-10-13T18:49:01.266-0400 I CONTROL [signalProcessingThread] now exiting [js_test:auth] 2015-10-13T18:49:01.266-0400 d20269| 2015-10-13T18:49:01.266-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... [js_test:auth] 2015-10-13T18:49:01.266-0400 d20269| 2015-10-13T18:49:01.266-0400 I NETWORK [signalProcessingThread] closing listening socket: 40 [js_test:auth] 2015-10-13T18:49:01.266-0400 d20269| 2015-10-13T18:49:01.266-0400 I NETWORK [signalProcessingThread] closing listening socket: 41 [js_test:auth] 2015-10-13T18:49:01.266-0400 d20268| 2015-10-13T18:49:01.266-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20269 - HostUnreachable Connection reset by peer [js_test:auth] 2015-10-13T18:49:01.266-0400 d20269| 2015-10-13T18:49:01.266-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20269.sock [js_test:auth] 2015-10-13T18:49:01.266-0400 d20269| 2015-10-13T18:49:01.266-0400 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... 
[js_test:auth] 2015-10-13T18:49:01.267-0400 d20269| 2015-10-13T18:49:01.266-0400 I NETWORK [signalProcessingThread] shutdown: going to close sockets... [js_test:auth] 2015-10-13T18:49:01.267-0400 d20268| 2015-10-13T18:49:01.266-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20269; HostUnreachable Connection reset by peer [js_test:auth] 2015-10-13T18:49:01.267-0400 d20269| 2015-10-13T18:49:01.266-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down [js_test:auth] 2015-10-13T18:49:01.267-0400 d20268| 2015-10-13T18:49:01.266-0400 I NETWORK [conn15] end connection 127.0.0.1:54377 (8 connections now open) [js_test:auth] 2015-10-13T18:49:01.267-0400 d20269| 2015-10-13T18:49:01.266-0400 I NETWORK [conn9] end connection 127.0.0.1:50028 (1 connection now open) [js_test:auth] 2015-10-13T18:49:01.267-0400 d20269| 2015-10-13T18:49:01.266-0400 I NETWORK [conn1] end connection 127.0.0.1:57617 (1 connection now open) [js_test:auth] 2015-10-13T18:49:01.267-0400 d20268| 2015-10-13T18:49:01.266-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20269 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.267-0400 d20268| 2015-10-13T18:49:01.266-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20269; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.267-0400 d20268| 2015-10-13T18:49:01.266-0400 I REPL [ReplicationExecutor] can't see a majority of the set, relinquishing primary [js_test:auth] 2015-10-13T18:49:01.268-0400 d20268| 2015-10-13T18:49:01.266-0400 I REPL [ReplicationExecutor] Stepping down from primary in response to heartbeat [js_test:auth] 2015-10-13T18:49:01.268-0400 d20268| 2015-10-13T18:49:01.267-0400 I REPL [replExecDBWorker-1] transition to SECONDARY [js_test:auth] 2015-10-13T18:49:01.268-0400 d20268| 2015-10-13T18:49:01.267-0400 I NETWORK [conn10] end connection 127.0.0.1:54210 (7 connections now open) [js_test:auth] 2015-10-13T18:49:01.268-0400 c20262| 
2015-10-13T18:49:01.267-0400 I NETWORK [conn14] end connection 127.0.0.1:50052 (9 connections now open) [js_test:auth] 2015-10-13T18:49:01.268-0400 d20268| 2015-10-13T18:49:01.267-0400 I NETWORK [conn17] end connection 127.0.0.1:55268 (7 connections now open) [js_test:auth] 2015-10-13T18:49:01.268-0400 c20260| 2015-10-13T18:49:01.267-0400 I NETWORK [conn24] end connection 127.0.0.1:52936 (14 connections now open) [js_test:auth] 2015-10-13T18:49:01.268-0400 c20261| 2015-10-13T18:49:01.267-0400 I NETWORK [conn14] end connection 127.0.0.1:50061 (9 connections now open) [js_test:auth] 2015-10-13T18:49:01.268-0400 d20266| 2015-10-13T18:49:01.267-0400 I NETWORK [conn11] end connection 127.0.0.1:37761 (8 connections now open) [js_test:auth] 2015-10-13T18:49:01.268-0400 d20267| 2015-10-13T18:49:01.267-0400 I NETWORK [conn9] end connection 127.0.0.1:49205 (4 connections now open) [js_test:auth] 2015-10-13T18:49:01.268-0400 d20268| 2015-10-13T18:49:01.267-0400 I NETWORK [conn1] end connection 127.0.0.1:36786 (7 connections now open) [js_test:auth] 2015-10-13T18:49:01.269-0400 d20268| 2015-10-13T18:49:01.267-0400 I NETWORK [conn22] end connection 127.0.0.1:57635 (7 connections now open) [js_test:auth] 2015-10-13T18:49:01.269-0400 d20268| 2015-10-13T18:49:01.267-0400 I NETWORK [conn16] end connection 127.0.0.1:54378 (7 connections now open) [js_test:auth] 2015-10-13T18:49:01.269-0400 d20268| 2015-10-13T18:49:01.267-0400 I NETWORK [conn9] end connection 127.0.0.1:54209 (7 connections now open) [js_test:auth] 2015-10-13T18:49:01.269-0400 d20268| 2015-10-13T18:49:01.267-0400 I NETWORK [conn21] end connection 127.0.0.1:56074 (7 connections now open) [js_test:auth] 2015-10-13T18:49:01.614-0400 d20268| 2015-10-13T18:49:01.614-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20269 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.614-0400 d20268| 2015-10-13T18:49:01.614-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20270 - 
HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.614-0400 d20268| 2015-10-13T18:49:01.614-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20269; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.614-0400 d20268| 2015-10-13T18:49:01.614-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.614-0400 d20268| 2015-10-13T18:49:01.614-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20269 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.614-0400 d20268| 2015-10-13T18:49:01.614-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20269; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.615-0400 d20268| 2015-10-13T18:49:01.614-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20270 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.615-0400 d20268| 2015-10-13T18:49:01.614-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.615-0400 d20268| 2015-10-13T18:49:01.614-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20269 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.615-0400 d20268| 2015-10-13T18:49:01.614-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20269; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.616-0400 d20268| 2015-10-13T18:49:01.614-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20270 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.616-0400 d20268| 2015-10-13T18:49:01.614-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20270; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.654-0400 d20269| 2015-10-13T18:49:01.654-0400 I STORAGE 
[signalProcessingThread] shutdown: removing fs lock... [js_test:auth] 2015-10-13T18:49:01.654-0400 d20269| 2015-10-13T18:49:01.654-0400 I CONTROL [signalProcessingThread] dbexit: rc: 0 [js_test:auth] 2015-10-13T18:49:01.660-0400 d20268| 2015-10-13T18:49:01.660-0400 I NETWORK [conn3] end connection 127.0.0.1:53807 (0 connections now open) [js_test:auth] 2015-10-13T18:49:01.830-0400 d20267| 2015-10-13T18:49:01.830-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.831-0400 d20267| 2015-10-13T18:49:01.830-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.831-0400 d20267| 2015-10-13T18:49:01.831-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.831-0400 d20267| 2015-10-13T18:49:01.831-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.831-0400 d20267| 2015-10-13T18:49:01.831-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.831-0400 d20267| 2015-10-13T18:49:01.831-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.831-0400 d20266| 2015-10-13T18:49:01.831-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.831-0400 d20266| 2015-10-13T18:49:01.831-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.832-0400 d20266| 2015-10-13T18:49:01.831-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection 
refused [js_test:auth] 2015-10-13T18:49:01.832-0400 d20266| 2015-10-13T18:49:01.831-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.832-0400 d20266| 2015-10-13T18:49:01.832-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:01.832-0400 d20266| 2015-10-13T18:49:01.832-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:02.358-0400 d20268| 2015-10-13T18:49:02.358-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends [js_test:auth] 2015-10-13T18:49:02.358-0400 d20268| 2015-10-13T18:49:02.358-0400 I FTDC [signalProcessingThread] Stopping full-time diagnostic data capture [js_test:auth] 2015-10-13T18:49:02.360-0400 d20268| 2015-10-13T18:49:02.360-0400 I REPL [signalProcessingThread] Stopping replication applier threads [js_test:auth] 2015-10-13T18:49:02.633-0400 d20268| 2015-10-13T18:49:02.633-0400 I CONTROL [signalProcessingThread] now exiting [js_test:auth] 2015-10-13T18:49:02.633-0400 d20268| 2015-10-13T18:49:02.633-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... [js_test:auth] 2015-10-13T18:49:02.633-0400 d20268| 2015-10-13T18:49:02.633-0400 I NETWORK [signalProcessingThread] closing listening socket: 37 [js_test:auth] 2015-10-13T18:49:02.634-0400 d20268| 2015-10-13T18:49:02.633-0400 I NETWORK [signalProcessingThread] closing listening socket: 38 [js_test:auth] 2015-10-13T18:49:02.634-0400 d20268| 2015-10-13T18:49:02.633-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20268.sock [js_test:auth] 2015-10-13T18:49:02.634-0400 d20268| 2015-10-13T18:49:02.633-0400 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... 
[js_test:auth] 2015-10-13T18:49:02.634-0400 d20268| 2015-10-13T18:49:02.633-0400 I NETWORK [signalProcessingThread] shutdown: going to close sockets... [js_test:auth] 2015-10-13T18:49:02.634-0400 d20268| 2015-10-13T18:49:02.633-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down [js_test:auth] 2015-10-13T18:49:02.977-0400 d20268| 2015-10-13T18:49:02.977-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock... [js_test:auth] 2015-10-13T18:49:02.977-0400 d20268| 2015-10-13T18:49:02.977-0400 I CONTROL [signalProcessingThread] dbexit: rc: 0 [js_test:auth] 2015-10-13T18:49:02.984-0400 c20260| 2015-10-13T18:49:02.984-0400 I NETWORK [conn25] end connection 127.0.0.1:52938 (13 connections now open) [js_test:auth] 2015-10-13T18:49:02.984-0400 c20261| 2015-10-13T18:49:02.984-0400 I NETWORK [conn15] end connection 127.0.0.1:50081 (8 connections now open) [js_test:auth] 2015-10-13T18:49:02.984-0400 c20262| 2015-10-13T18:49:02.984-0400 I NETWORK [conn15] end connection 127.0.0.1:50056 (8 connections now open) [js_test:auth] 2015-10-13T18:49:03.358-0400 d20266| 2015-10-13T18:49:03.358-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends [js_test:auth] 2015-10-13T18:49:03.358-0400 d20266| 2015-10-13T18:49:03.358-0400 I FTDC [signalProcessingThread] Stopping full-time diagnostic data capture [js_test:auth] 2015-10-13T18:49:03.360-0400 d20266| 2015-10-13T18:49:03.360-0400 I REPL [signalProcessingThread] Stopping replication applier threads [js_test:auth] 2015-10-13T18:49:03.596-0400 d20266| 2015-10-13T18:49:03.595-0400 I CONTROL [signalProcessingThread] now exiting [js_test:auth] 2015-10-13T18:49:03.596-0400 d20266| 2015-10-13T18:49:03.595-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... 
[js_test:auth] 2015-10-13T18:49:03.596-0400 d20266| 2015-10-13T18:49:03.595-0400 I NETWORK [signalProcessingThread] closing listening socket: 31 [js_test:auth] 2015-10-13T18:49:03.596-0400 d20266| 2015-10-13T18:49:03.596-0400 I NETWORK [signalProcessingThread] closing listening socket: 32 [js_test:auth] 2015-10-13T18:49:03.596-0400 d20266| 2015-10-13T18:49:03.596-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20266.sock [js_test:auth] 2015-10-13T18:49:03.597-0400 d20266| 2015-10-13T18:49:03.596-0400 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... [js_test:auth] 2015-10-13T18:49:03.597-0400 d20266| 2015-10-13T18:49:03.596-0400 I NETWORK [signalProcessingThread] shutdown: going to close sockets... [js_test:auth] 2015-10-13T18:49:03.597-0400 d20266| 2015-10-13T18:49:03.596-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down [js_test:auth] 2015-10-13T18:49:03.597-0400 d20266| 2015-10-13T18:49:03.596-0400 I NETWORK [conn1] end connection 127.0.0.1:34538 (7 connections now open) [js_test:auth] 2015-10-13T18:49:03.597-0400 c20262| 2015-10-13T18:49:03.596-0400 I NETWORK [conn16] end connection 127.0.0.1:51713 (7 connections now open) [js_test:auth] 2015-10-13T18:49:03.597-0400 d20267| 2015-10-13T18:49:03.596-0400 I NETWORK [conn10] end connection 127.0.0.1:51540 (3 connections now open) [js_test:auth] 2015-10-13T18:49:03.597-0400 c20261| 2015-10-13T18:49:03.596-0400 I NETWORK [conn16] end connection 127.0.0.1:51722 (7 connections now open) [js_test:auth] 2015-10-13T18:49:03.597-0400 d20266| 2015-10-13T18:49:03.596-0400 I NETWORK [conn14] end connection 127.0.0.1:37868 (7 connections now open) [js_test:auth] 2015-10-13T18:49:03.597-0400 d20266| 2015-10-13T18:49:03.596-0400 I NETWORK [conn6] end connection 127.0.0.1:35099 (7 connections now open) [js_test:auth] 2015-10-13T18:49:03.597-0400 d20266| 2015-10-13T18:49:03.596-0400 I NETWORK [conn17] end connection 127.0.0.1:39327 (7 connections now open) 
[js_test:auth] 2015-10-13T18:49:03.597-0400 d20267| 2015-10-13T18:49:03.596-0400 I REPL [ReplicationExecutor] could not find member to sync from [js_test:auth] 2015-10-13T18:49:03.597-0400 d20266| 2015-10-13T18:49:03.596-0400 I NETWORK [conn9] end connection 127.0.0.1:35722 (7 connections now open) [js_test:auth] 2015-10-13T18:49:03.598-0400 c20260| 2015-10-13T18:49:03.596-0400 I NETWORK [conn27] end connection 127.0.0.1:54593 (12 connections now open) [js_test:auth] 2015-10-13T18:49:03.598-0400 d20267| 2015-10-13T18:49:03.596-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20266; HostUnreachable End of file [js_test:auth] 2015-10-13T18:49:03.598-0400 d20267| 2015-10-13T18:49:03.596-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:03.598-0400 d20266| 2015-10-13T18:49:03.596-0400 I NETWORK [conn16] end connection 127.0.0.1:38631 (5 connections now open) [js_test:auth] 2015-10-13T18:49:03.598-0400 d20266| 2015-10-13T18:49:03.596-0400 I NETWORK [conn15] end connection 127.0.0.1:38621 (5 connections now open) [js_test:auth] 2015-10-13T18:49:03.598-0400 d20267| 2015-10-13T18:49:03.596-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:03.598-0400 d20267| 2015-10-13T18:49:03.596-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20266 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:03.599-0400 d20267| 2015-10-13T18:49:03.596-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20266; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:03.599-0400 d20267| 2015-10-13T18:49:03.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:03.599-0400 d20267| 2015-10-13T18:49:03.597-0400 I REPL [ReplicationExecutor] Error in 
heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:03.599-0400 d20267| 2015-10-13T18:49:03.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20266 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:03.599-0400 d20267| 2015-10-13T18:49:03.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20266; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:03.600-0400 d20267| 2015-10-13T18:49:03.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:03.600-0400 d20267| 2015-10-13T18:49:03.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused [js_test:auth] 2015-10-13T18:49:03.944-0400 d20266| 2015-10-13T18:49:03.944-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock... [js_test:auth] 2015-10-13T18:49:03.945-0400 d20266| 2015-10-13T18:49:03.944-0400 I CONTROL [signalProcessingThread] dbexit: rc: 0 [js_test:auth] 2015-10-13T18:49:03.953-0400 d20267| 2015-10-13T18:49:03.953-0400 I NETWORK [conn6] end connection 127.0.0.1:47312 (2 connections now open) [js_test:auth] 2015-10-13T18:49:03.953-0400 c20262| 2015-10-13T18:49:03.953-0400 I NETWORK [conn17] end connection 127.0.0.1:51715 (6 connections now open) [js_test:auth] 2015-10-13T18:49:03.953-0400 c20261| 2015-10-13T18:49:03.953-0400 I NETWORK [conn17] end connection 127.0.0.1:53808 (6 connections now open) [js_test:auth] 2015-10-13T18:49:03.953-0400 c20260| 2015-10-13T18:49:03.953-0400 I NETWORK [conn28] end connection 127.0.0.1:54597 (11 connections now open) [js_test:auth] 2015-10-13T18:49:04.358-0400 s20264| 2015-10-13T18:49:04.358-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends [js_test:auth] 2015-10-13T18:49:04.358-0400 s20264| 2015-10-13T18:49:04.358-0400 D SHARDING 
[signalProcessingThread] CatalogManagerReplicaSet::shutDown() called.
[js_test:auth] 2015-10-13T18:49:04.359-0400 s20264| 2015-10-13T18:49:04.358-0400 D ASIO [signalProcessingThread] startCommand: RemoteCommand -- target:ubuntu:20260 db:config expDate:2015-10-13T18:49:34.358-0400 cmd:{ findAndModify: "lockpings", query: { _id: "ubuntu:20264:1444776427:399327856" }, remove: true, writeConcern: { w: "majority", wtimeout: 5000 }, maxTimeMS: 30000 }
[js_test:auth] 2015-10-13T18:49:04.360-0400 s20264| 2015-10-13T18:49:04.358-0400 D ASIO [NetworkInterfaceASIO] Starting asynchronous command on host ubuntu:20260
[js_test:auth] 2015-10-13T18:49:04.375-0400 s20264| 2015-10-13T18:49:04.375-0400 I SHARDING [signalProcessingThread] dbexit: rc:0
[js_test:auth] 2015-10-13T18:49:04.376-0400 c20261| 2015-10-13T18:49:04.376-0400 I NETWORK [conn10] end connection 127.0.0.1:46752 (5 connections now open)
[js_test:auth] 2015-10-13T18:49:04.377-0400 c20260| 2015-10-13T18:49:04.376-0400 I NETWORK [conn21] end connection 127.0.0.1:49636 (10 connections now open)
[js_test:auth] 2015-10-13T18:49:04.377-0400 c20261| 2015-10-13T18:49:04.376-0400 I NETWORK [conn11] end connection 127.0.0.1:48314 (5 connections now open)
[js_test:auth] 2015-10-13T18:49:04.377-0400 d20267| 2015-10-13T18:49:04.376-0400 I NETWORK [conn7] end connection 127.0.0.1:47498 (1 connection now open)
[js_test:auth] 2015-10-13T18:49:04.377-0400 c20260| 2015-10-13T18:49:04.376-0400 I NETWORK [conn26] end connection 127.0.0.1:52944 (9 connections now open)
[js_test:auth] 2015-10-13T18:49:04.377-0400 c20260| 2015-10-13T18:49:04.376-0400 I NETWORK [conn19] end connection 127.0.0.1:49632 (9 connections now open)
[js_test:auth] 2015-10-13T18:49:04.378-0400 c20262| 2015-10-13T18:49:04.376-0400 I NETWORK [conn10] end connection 127.0.0.1:46745 (5 connections now open)
[js_test:auth] 2015-10-13T18:49:04.378-0400 c20262| 2015-10-13T18:49:04.376-0400 I NETWORK [conn11] end connection 127.0.0.1:46753 (5 connections now open)
[js_test:auth] 2015-10-13T18:49:05.358-0400 c20262| 2015-10-13T18:49:05.358-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[js_test:auth] 2015-10-13T18:49:05.359-0400 c20262| 2015-10-13T18:49:05.358-0400 I FTDC [signalProcessingThread] Stopping full-time diagnostic data capture
[js_test:auth] 2015-10-13T18:49:05.361-0400 c20262| 2015-10-13T18:49:05.361-0400 I REPL [signalProcessingThread] Stopping replication applier threads
[js_test:auth] 2015-10-13T18:49:05.597-0400 d20267| 2015-10-13T18:49:05.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.597-0400 d20267| 2015-10-13T18:49:05.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20266 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.598-0400 d20267| 2015-10-13T18:49:05.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.598-0400 d20267| 2015-10-13T18:49:05.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20266; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.598-0400 d20267| 2015-10-13T18:49:05.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.598-0400 d20267| 2015-10-13T18:49:05.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.599-0400 d20267| 2015-10-13T18:49:05.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20266 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.599-0400 d20267| 2015-10-13T18:49:05.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20266; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.599-0400 d20267| 2015-10-13T18:49:05.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.599-0400 d20267| 2015-10-13T18:49:05.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.600-0400 d20267| 2015-10-13T18:49:05.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20266 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.600-0400 d20267| 2015-10-13T18:49:05.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20266; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:05.890-0400 c20262| 2015-10-13T18:49:05.890-0400 I STORAGE [conn3] got request after shutdown()
[js_test:auth] 2015-10-13T18:49:05.891-0400 c20260| 2015-10-13T18:49:05.890-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable End of file
[js_test:auth] 2015-10-13T18:49:06.654-0400 c20262| 2015-10-13T18:49:06.654-0400 I STORAGE [conn6] got request after shutdown()
[js_test:auth] 2015-10-13T18:49:06.654-0400 c20261| 2015-10-13T18:49:06.654-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable End of file
[js_test:auth] 2015-10-13T18:49:06.862-0400 c20262| 2015-10-13T18:49:06.862-0400 I EXECUTOR [rsBackgroundSync] killCursors command failed: CallbackCanceled Callback canceled
[js_test:auth] 2015-10-13T18:49:06.863-0400 c20262| 2015-10-13T18:49:06.863-0400 I CONTROL [signalProcessingThread] now exiting
[js_test:auth] 2015-10-13T18:49:06.863-0400 c20262| 2015-10-13T18:49:06.863-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[js_test:auth] 2015-10-13T18:49:06.863-0400 c20262| 2015-10-13T18:49:06.863-0400 I NETWORK [signalProcessingThread] closing listening socket: 15
[js_test:auth] 2015-10-13T18:49:06.863-0400 c20262| 2015-10-13T18:49:06.863-0400 I NETWORK [signalProcessingThread] closing listening socket: 16
[js_test:auth] 2015-10-13T18:49:06.863-0400 c20262| 2015-10-13T18:49:06.863-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20262.sock
[js_test:auth] 2015-10-13T18:49:06.863-0400 c20261| 2015-10-13T18:49:06.863-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection reset by peer
[js_test:auth] 2015-10-13T18:49:06.864-0400 c20262| 2015-10-13T18:49:06.863-0400 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
[js_test:auth] 2015-10-13T18:49:06.864-0400 c20260| 2015-10-13T18:49:06.863-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection reset by peer
[js_test:auth] 2015-10-13T18:49:06.864-0400 c20260| 2015-10-13T18:49:06.863-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection reset by peer
[js_test:auth] 2015-10-13T18:49:06.864-0400 c20262| 2015-10-13T18:49:06.863-0400 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
[js_test:auth] 2015-10-13T18:49:06.864-0400 c20261| 2015-10-13T18:49:06.863-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection reset by peer
[js_test:auth] 2015-10-13T18:49:06.864-0400 c20262| 2015-10-13T18:49:06.863-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[js_test:auth] 2015-10-13T18:49:06.864-0400 c20260| 2015-10-13T18:49:06.863-0400 I NETWORK [conn13] end connection 127.0.0.1:49279 (7 connections now open)
[js_test:auth] 2015-10-13T18:49:06.864-0400 c20262| 2015-10-13T18:49:06.863-0400 I NETWORK [conn1] end connection 127.0.0.1:55904 (1 connection now open)
[js_test:auth] 2015-10-13T18:49:06.865-0400 c20262| 2015-10-13T18:49:06.863-0400 I NETWORK [conn7] end connection 127.0.0.1:46280 (0 connections now open)
[js_test:auth] 2015-10-13T18:49:06.865-0400 c20260| 2015-10-13T18:49:06.863-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:06.865-0400 c20261| 2015-10-13T18:49:06.863-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:06.865-0400 c20260| 2015-10-13T18:49:06.863-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:06.865-0400 c20261| 2015-10-13T18:49:06.863-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.597-0400 d20267| 2015-10-13T18:49:07.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.597-0400 d20267| 2015-10-13T18:49:07.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20266 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.597-0400 d20267| 2015-10-13T18:49:07.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.597-0400 d20267| 2015-10-13T18:49:07.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20266; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.598-0400 d20267| 2015-10-13T18:49:07.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.598-0400 d20267| 2015-10-13T18:49:07.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.598-0400 d20267| 2015-10-13T18:49:07.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20266 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.598-0400 d20267| 2015-10-13T18:49:07.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20266; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.598-0400 d20267| 2015-10-13T18:49:07.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20265 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.599-0400 d20267| 2015-10-13T18:49:07.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20265; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.599-0400 d20267| 2015-10-13T18:49:07.597-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20266 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.599-0400 d20267| 2015-10-13T18:49:07.597-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20266; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:07.621-0400 c20262| 2015-10-13T18:49:07.621-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[js_test:auth] 2015-10-13T18:49:07.624-0400 c20262| 2015-10-13T18:49:07.623-0400 I CONTROL [signalProcessingThread] dbexit: rc: 0
[js_test:auth] 2015-10-13T18:49:07.632-0400 c20261| 2015-10-13T18:49:07.632-0400 I NETWORK [conn6] end connection 127.0.0.1:46063 (3 connections now open)
[js_test:auth] 2015-10-13T18:49:07.632-0400 c20260| 2015-10-13T18:49:07.632-0400 I NETWORK [conn14] end connection 127.0.0.1:49280 (6 connections now open)
[js_test:auth] 2015-10-13T18:49:07.632-0400 c20260| 2015-10-13T18:49:07.632-0400 I NETWORK [conn3] end connection 127.0.0.1:48804 (6 connections now open)
[js_test:auth] 2015-10-13T18:49:08.359-0400 d20267| 2015-10-13T18:49:08.358-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[js_test:auth] 2015-10-13T18:49:08.359-0400 d20267| 2015-10-13T18:49:08.359-0400 I FTDC [signalProcessingThread] Stopping full-time diagnostic data capture
[js_test:auth] 2015-10-13T18:49:08.361-0400 d20267| 2015-10-13T18:49:08.360-0400 I REPL [signalProcessingThread] Stopping replication applier threads
[js_test:auth] 2015-10-13T18:49:08.634-0400 d20267| 2015-10-13T18:49:08.634-0400 I CONTROL [signalProcessingThread] now exiting
[js_test:auth] 2015-10-13T18:49:08.634-0400 d20267| 2015-10-13T18:49:08.634-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[js_test:auth] 2015-10-13T18:49:08.634-0400 d20267| 2015-10-13T18:49:08.634-0400 I NETWORK [signalProcessingThread] closing listening socket: 34
[js_test:auth] 2015-10-13T18:49:08.634-0400 d20267| 2015-10-13T18:49:08.634-0400 I NETWORK [signalProcessingThread] closing listening socket: 35
[js_test:auth] 2015-10-13T18:49:08.634-0400 d20267| 2015-10-13T18:49:08.634-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20267.sock
[js_test:auth] 2015-10-13T18:49:08.635-0400 d20267| 2015-10-13T18:49:08.634-0400 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
[js_test:auth] 2015-10-13T18:49:08.635-0400 d20267| 2015-10-13T18:49:08.634-0400 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
[js_test:auth] 2015-10-13T18:49:08.635-0400 d20267| 2015-10-13T18:49:08.634-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[js_test:auth] 2015-10-13T18:49:08.635-0400 d20267| 2015-10-13T18:49:08.634-0400 I NETWORK [conn1] end connection 127.0.0.1:56169 (0 connections now open)
[js_test:auth] 2015-10-13T18:49:08.864-0400 c20260| 2015-10-13T18:49:08.863-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:08.864-0400 c20260| 2015-10-13T18:49:08.864-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:08.864-0400 c20261| 2015-10-13T18:49:08.864-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:08.864-0400 c20261| 2015-10-13T18:49:08.864-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:08.864-0400 c20260| 2015-10-13T18:49:08.864-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:08.864-0400 c20260| 2015-10-13T18:49:08.864-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:08.865-0400 c20261| 2015-10-13T18:49:08.864-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:08.865-0400 c20261| 2015-10-13T18:49:08.864-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:08.865-0400 c20260| 2015-10-13T18:49:08.864-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:08.865-0400 c20261| 2015-10-13T18:49:08.864-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:08.865-0400 c20260| 2015-10-13T18:49:08.864-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:08.865-0400 c20261| 2015-10-13T18:49:08.864-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:09.027-0400 d20267| 2015-10-13T18:49:09.027-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[js_test:auth] 2015-10-13T18:49:09.032-0400 d20267| 2015-10-13T18:49:09.031-0400 I CONTROL [signalProcessingThread] dbexit: rc: 0
[js_test:auth] 2015-10-13T18:49:09.359-0400 c20261| 2015-10-13T18:49:09.359-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[js_test:auth] 2015-10-13T18:49:09.360-0400 c20261| 2015-10-13T18:49:09.359-0400 I FTDC [signalProcessingThread] Stopping full-time diagnostic data capture
[js_test:auth] 2015-10-13T18:49:09.361-0400 c20261| 2015-10-13T18:49:09.361-0400 I REPL [signalProcessingThread] Stopping replication applier threads
[js_test:auth] 2015-10-13T18:49:09.363-0400 c20261| 2015-10-13T18:49:09.363-0400 I EXECUTOR [rsBackgroundSync] killCursors command failed: CallbackCanceled Callback canceled
[js_test:auth] 2015-10-13T18:49:09.377-0400 c20261| 2015-10-13T18:49:09.377-0400 I CONTROL [signalProcessingThread] now exiting
[js_test:auth] 2015-10-13T18:49:09.377-0400 c20261| 2015-10-13T18:49:09.377-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[js_test:auth] 2015-10-13T18:49:09.377-0400 c20261| 2015-10-13T18:49:09.377-0400 I NETWORK [signalProcessingThread] closing listening socket: 12
[js_test:auth] 2015-10-13T18:49:09.377-0400 c20261| 2015-10-13T18:49:09.377-0400 I NETWORK [signalProcessingThread] closing listening socket: 13
[js_test:auth] 2015-10-13T18:49:09.377-0400 c20261| 2015-10-13T18:49:09.377-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20261.sock
[js_test:auth] 2015-10-13T18:49:09.377-0400 c20261| 2015-10-13T18:49:09.377-0400 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
[js_test:auth] 2015-10-13T18:49:09.378-0400 c20261| 2015-10-13T18:49:09.377-0400 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
[js_test:auth] 2015-10-13T18:49:09.378-0400 c20261| 2015-10-13T18:49:09.377-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[js_test:auth] 2015-10-13T18:49:09.378-0400 c20261| 2015-10-13T18:49:09.377-0400 I NETWORK [conn1] end connection 127.0.0.1:41341 (2 connections now open)
[js_test:auth] 2015-10-13T18:49:09.378-0400 c20260| 2015-10-13T18:49:09.377-0400 I NETWORK [conn16] end connection 127.0.0.1:49324 (4 connections now open)
[js_test:auth] 2015-10-13T18:49:09.378-0400 c20261| 2015-10-13T18:49:09.377-0400 I NETWORK [conn3] end connection 127.0.0.1:45928 (2 connections now open)
[js_test:auth] 2015-10-13T18:49:09.378-0400 c20261| 2015-10-13T18:49:09.377-0400 I NETWORK [conn7] end connection 127.0.0.1:46289 (2 connections now open)
[js_test:auth] 2015-10-13T18:49:09.893-0400 c20260| 2015-10-13T18:49:09.893-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20261; HostUnreachable End of file
[js_test:auth] 2015-10-13T18:49:09.893-0400 c20260| 2015-10-13T18:49:09.893-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20261 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:09.893-0400 c20260| 2015-10-13T18:49:09.893-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20261; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:09.894-0400 c20260| 2015-10-13T18:49:09.893-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20261 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:09.894-0400 c20260| 2015-10-13T18:49:09.893-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20261; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:09.894-0400 c20260| 2015-10-13T18:49:09.894-0400 I REPL [ReplicationExecutor] can't see a majority of the set, relinquishing primary
[js_test:auth] 2015-10-13T18:49:09.894-0400 c20260| 2015-10-13T18:49:09.894-0400 I REPL [ReplicationExecutor] Stepping down from primary in response to heartbeat
[js_test:auth] 2015-10-13T18:49:09.894-0400 c20260| 2015-10-13T18:49:09.894-0400 I REPL [replExecDBWorker-0] transition to SECONDARY
[js_test:auth] 2015-10-13T18:49:09.894-0400 c20260| 2015-10-13T18:49:09.894-0400 I NETWORK [conn9] end connection 127.0.0.1:49163 (3 connections now open)
[js_test:auth] 2015-10-13T18:49:09.894-0400 c20260| 2015-10-13T18:49:09.894-0400 I NETWORK [conn17] end connection 127.0.0.1:49325 (3 connections now open)
[js_test:auth] 2015-10-13T18:49:09.894-0400 c20260| 2015-10-13T18:49:09.894-0400 I NETWORK [conn1] end connection 127.0.0.1:55071 (1 connection now open)
[js_test:auth] 2015-10-13T18:49:10.174-0400 c20261| 2015-10-13T18:49:10.173-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[js_test:auth] 2015-10-13T18:49:10.177-0400 c20261| 2015-10-13T18:49:10.176-0400 I CONTROL [signalProcessingThread] dbexit: rc: 0
[js_test:auth] 2015-10-13T18:49:10.184-0400 c20260| 2015-10-13T18:49:10.184-0400 I NETWORK [conn4] end connection 127.0.0.1:48805 (0 connections now open)
[js_test:auth] 2015-10-13T18:49:10.314-0400 c20260| 2015-10-13T18:49:10.314-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20261 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.315-0400 c20260| 2015-10-13T18:49:10.314-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.315-0400 c20260| 2015-10-13T18:49:10.314-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20261; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.316-0400 c20260| 2015-10-13T18:49:10.314-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.317-0400 c20260| 2015-10-13T18:49:10.314-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20261 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.317-0400 c20260| 2015-10-13T18:49:10.314-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.318-0400 c20260| 2015-10-13T18:49:10.314-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20261; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.318-0400 c20260| 2015-10-13T18:49:10.314-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.318-0400 c20260| 2015-10-13T18:49:10.314-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20261 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.319-0400 c20260| 2015-10-13T18:49:10.315-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20261; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.319-0400 c20260| 2015-10-13T18:49:10.315-0400 I ASIO [NetworkInterfaceASIO] Failed to connect to ubuntu:20262 - HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.319-0400 c20260| 2015-10-13T18:49:10.315-0400 I REPL [ReplicationExecutor] Error in heartbeat request to ubuntu:20262; HostUnreachable Connection refused
[js_test:auth] 2015-10-13T18:49:10.359-0400 c20260| 2015-10-13T18:49:10.359-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
[js_test:auth] 2015-10-13T18:49:10.359-0400 c20260| 2015-10-13T18:49:10.359-0400 I FTDC [signalProcessingThread] Stopping full-time diagnostic data capture
[js_test:auth] 2015-10-13T18:49:10.361-0400 c20260| 2015-10-13T18:49:10.360-0400 I REPL [signalProcessingThread] Stopping replication applier threads
[js_test:auth] 2015-10-13T18:49:10.421-0400 2015-10-13T18:49:10.421-0400 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 10 secs, remote host 127.0.1.1:20260)
[js_test:auth] 2015-10-13T18:49:11.316-0400 c20260| 2015-10-13T18:49:11.315-0400 I CONTROL [signalProcessingThread] now exiting
[js_test:auth] 2015-10-13T18:49:11.316-0400 c20260| 2015-10-13T18:49:11.316-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[js_test:auth] 2015-10-13T18:49:11.316-0400 c20260| 2015-10-13T18:49:11.316-0400 I NETWORK [signalProcessingThread] closing listening socket: 9
[js_test:auth] 2015-10-13T18:49:11.316-0400 c20260| 2015-10-13T18:49:11.316-0400 I NETWORK [signalProcessingThread] closing listening socket: 10
[js_test:auth] 2015-10-13T18:49:11.316-0400 2015-10-13T18:49:11.316-0400 I NETWORK [ReplicaSetMonitorWatcher] Socket recv() errno:104 Connection reset by peer 127.0.1.1:20260
[js_test:auth] 2015-10-13T18:49:11.316-0400 c20260| 2015-10-13T18:49:11.316-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20260.sock
[js_test:auth] 2015-10-13T18:49:11.316-0400 c20260| 2015-10-13T18:49:11.316-0400 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
[js_test:auth] 2015-10-13T18:49:11.317-0400 c20260| 2015-10-13T18:49:11.316-0400 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
[js_test:auth] 2015-10-13T18:49:11.317-0400 2015-10-13T18:49:11.316-0400 I NETWORK [ReplicaSetMonitorWatcher] SocketException: remote: 127.0.1.1:20260 error: 9001 socket exception [RECV_ERROR] server [127.0.1.1:20260]
[js_test:auth] 2015-10-13T18:49:11.317-0400 c20260| 2015-10-13T18:49:11.316-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[js_test:auth] 2015-10-13T18:49:11.317-0400 2015-10-13T18:49:11.316-0400 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 11 secs, remote host 127.0.1.1:20261)
[js_test:auth] 2015-10-13T18:49:11.317-0400 2015-10-13T18:49:11.316-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20261, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:49:11.317-0400 2015-10-13T18:49:11.316-0400 I NETWORK [ReplicaSetMonitorWatcher] Socket closed remotely, no longer connected (idle 11 secs, remote host 127.0.1.1:20262)
[js_test:auth] 2015-10-13T18:49:11.317-0400 2015-10-13T18:49:11.316-0400 W NETWORK [ReplicaSetMonitorWatcher] Failed to connect to 127.0.1.1:20262, reason: errno:111 Connection refused
[js_test:auth] 2015-10-13T18:49:11.317-0400 2015-10-13T18:49:11.316-0400 W NETWORK [ReplicaSetMonitorWatcher] No primary detected for set auth-configRS
[js_test:auth] 2015-10-13T18:49:11.317-0400 2015-10-13T18:49:11.316-0400 I NETWORK [ReplicaSetMonitorWatcher] All nodes for set auth-configRS are down. This has happened for 1 checks in a row. Polling will stop after 29 more failed checks
[js_test:auth] 2015-10-13T18:49:12.043-0400 c20260| 2015-10-13T18:49:12.043-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[js_test:auth] 2015-10-13T18:49:12.043-0400 c20260| 2015-10-13T18:49:12.043-0400 I CONTROL [signalProcessingThread] dbexit: rc: 0