[resmoke] 2019-07-25T18:24:47.258-0400 verbatim resmoke.py invocation: buildscripts/resmoke.py jstests/sharding/configsvr_failover_repro.js
[resmoke] 2019-07-25T18:24:47.263-0400 YAML configuration of suite with_server
test_kind: js_test
selector:
roots:
- jstests/sharding/configsvr_failover_repro.js
executor:
config:
shell_options:
readMode: commands
fixture:
class: MongoDFixture
mongod_options:
set_parameters:
enableTestCommands: 1
logging:
executor:
format: '[%(name)s] %(asctime)s %(message)s'
handlers:
- class: logging.StreamHandler
fixture:
format: '[%(name)s] %(message)s'
handlers:
- class: logging.StreamHandler
tests:
format: '[%(name)s] %(asctime)s %(message)s'
handlers:
- class: logging.StreamHandler
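The `format` strings in the logging section above are standard Python `logging` patterns. A minimal sketch of how a resmoke-style formatter renders one record (the logger name and message are taken from the executor lines below; the record construction itself is illustrative):

```python
import logging

# Same pattern as the executor/tests sections of the suite config.
formatter = logging.Formatter("[%(name)s] %(asctime)s %(message)s")

logger = logging.getLogger("executor")  # name mirrors the [executor] prefix in this log

record = logger.makeRecord(
    "executor", logging.INFO, __file__, 0,
    "Starting execution of js_tests...", None, None)
line = formatter.format(record)
# line resembles: "[executor] 2019-07-25 18:24:47,263 Starting execution of js_tests..."
```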
[executor] 2019-07-25T18:24:47.263-0400 Starting execution of js_tests...
[executor:js_test:job0] 2019-07-25T18:24:47.263-0400 Running job0_fixture_setup...
[js_test:job0_fixture_setup] 2019-07-25T18:24:47.263-0400 Starting the setup of MongoDFixture (Job #0).
[MongoDFixture:job0] Starting mongod on port 20000...
./mongod --setParameter enableTestCommands=1 --setParameter logComponentVerbosity={'replication': {'rollback': 2}, 'transaction': 4} --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter maxIndexBuildDrainBatchSize=10 --dbpath=/data/db/job0/resmoke --port=20000 --enableMajorityReadConcern=True
[MongoDFixture:job0] mongod started on port 20000 with pid 2741.
[js_test:job0_fixture_setup] 2019-07-25T18:24:47.293-0400 Waiting for MongoDFixture (Job #0) to be ready.
[MongoDFixture:job0] 2019-07-25T18:24:47.439-0400 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[MongoDFixture:job0] 2019-07-25T18:24:47.447-0400 I CONTROL [initandlisten] MongoDB starting : pid=2741 port=20000 dbpath=/data/db/job0/resmoke 64-bit host=Jasons-MacBook-Pro.local
[MongoDFixture:job0] 2019-07-25T18:24:47.447-0400 I CONTROL [initandlisten] DEBUG build (which is slower)
[MongoDFixture:job0] 2019-07-25T18:24:47.447-0400 I CONTROL [initandlisten] db version v4.3.0-703-g917d338
[MongoDFixture:job0] 2019-07-25T18:24:47.448-0400 I CONTROL [initandlisten] git version: 917d338c4bf52dc8dce2c0e585a676385e81ed1c
[MongoDFixture:job0] 2019-07-25T18:24:47.448-0400 I CONTROL [initandlisten] allocator: system
[MongoDFixture:job0] 2019-07-25T18:24:47.448-0400 I CONTROL [initandlisten] modules: enterprise ninja
[MongoDFixture:job0] 2019-07-25T18:24:47.448-0400 I CONTROL [initandlisten] build environment:
[MongoDFixture:job0] 2019-07-25T18:24:47.448-0400 I CONTROL [initandlisten] distarch: x86_64
[MongoDFixture:job0] 2019-07-25T18:24:47.448-0400 I CONTROL [initandlisten] target_arch: x86_64
[MongoDFixture:job0] 2019-07-25T18:24:47.448-0400 I CONTROL [initandlisten] options: { net: { port: 20000 }, replication: { enableMajorityReadConcern: true }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{'replication': {'rollback': 2}, 'transaction': 4}", maxIndexBuildDrainBatchSize: "10", transactionLifetimeLimitSeconds: "86400" }, storage: { dbPath: "/data/db/job0/resmoke" } }
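Note that in the options line above, `logComponentVerbosity` is recorded as a Python-style literal with single quotes, not JSON, so `json.loads` would reject it. A sketch of recovering the nested dict when post-processing this log, using the exact string from the line above:

```python
import ast

# Value copied from the setParameter section of the options line.
raw = "{'replication': {'rollback': 2}, 'transaction': 4}"

# ast.literal_eval safely parses Python literals; json.loads would fail
# on the single-quoted keys.
verbosity = ast.literal_eval(raw)
assert verbosity["replication"]["rollback"] == 2
assert verbosity["transaction"] == 4
```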
[MongoDFixture:job0] 2019-07-25T18:24:47.449-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7680M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],debug_mode=(table_logging=true),
[MongoDFixture:job0] Waiting to connect to mongod on port 20000.
[MongoDFixture:job0] 2019-07-25T18:24:48.013-0400 I STORAGE [initandlisten] WiredTiger message [1564093488:13064][2741:0x1271975c0], txn-recover: Set global recovery timestamp: (0,0)
[MongoDFixture:job0] 2019-07-25T18:24:48.102-0400 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[MongoDFixture:job0] 2019-07-25T18:24:48.164-0400 I STORAGE [initandlisten] Timestamp monitor starting
[MongoDFixture:job0] 2019-07-25T18:24:48.168-0400 I CONTROL [initandlisten]
[MongoDFixture:job0] 2019-07-25T18:24:48.168-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (4.3.0-703-g917d338) of MongoDB.
[MongoDFixture:job0] 2019-07-25T18:24:48.168-0400 I CONTROL [initandlisten] ** Not recommended for production.
[MongoDFixture:job0] 2019-07-25T18:24:48.168-0400 I CONTROL [initandlisten]
[MongoDFixture:job0] 2019-07-25T18:24:48.169-0400 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[MongoDFixture:job0] 2019-07-25T18:24:48.169-0400 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[MongoDFixture:job0] 2019-07-25T18:24:48.169-0400 I CONTROL [initandlisten]
[MongoDFixture:job0] 2019-07-25T18:24:48.169-0400 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
[MongoDFixture:job0] 2019-07-25T18:24:48.169-0400 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
[MongoDFixture:job0] 2019-07-25T18:24:48.169-0400 I CONTROL [initandlisten] ** Start the server with --bind_ip to specify which IP
[MongoDFixture:job0] 2019-07-25T18:24:48.169-0400 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
[MongoDFixture:job0] 2019-07-25T18:24:48.169-0400 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
[MongoDFixture:job0] 2019-07-25T18:24:48.169-0400 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
[MongoDFixture:job0] 2019-07-25T18:24:48.169-0400 I CONTROL [initandlisten]
[MongoDFixture:job0] 2019-07-25T18:24:48.173-0400 I STORAGE [initandlisten] createCollection: admin.system.version with provided UUID: 10526679-9876-438b-a929-115eee41aec3 and options: { uuid: UUID("10526679-9876-438b-a929-115eee41aec3") }
[MongoDFixture:job0] 2019-07-25T18:24:48.230-0400 I INDEX [initandlisten] index build: done building index _id_ on ns admin.system.version
[MongoDFixture:job0] 2019-07-25T18:24:48.231-0400 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[MongoDFixture:job0] 2019-07-25T18:24:48.231-0400 I COMMAND [initandlisten] setting featureCompatibilityVersion to 4.2
[MongoDFixture:job0] 2019-07-25T18:24:48.238-0400 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[MongoDFixture:job0] 2019-07-25T18:24:48.239-0400 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[MongoDFixture:job0] 2019-07-25T18:24:48.240-0400 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[MongoDFixture:job0] 2019-07-25T18:24:48.241-0400 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: b0102912-0016-44ce-a919-7877dcaa9a76 and options: { capped: true, size: 10485760 }
[MongoDFixture:job0] 2019-07-25T18:24:48.288-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[MongoDFixture:job0] 2019-07-25T18:24:48.289-0400 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[MongoDFixture:job0] 2019-07-25T18:24:48.290-0400 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/job0/resmoke/diagnostic.data'
[MongoDFixture:job0] 2019-07-25T18:24:48.291-0400 I NETWORK [initandlisten] Listening on /tmp/mongodb-20000.sock
[MongoDFixture:job0] 2019-07-25T18:24:48.291-0400 I NETWORK [initandlisten] Listening on 127.0.0.1
[MongoDFixture:job0] 2019-07-25T18:24:48.291-0400 I NETWORK [initandlisten] waiting for connections on port 20000
[MongoDFixture:job0] 2019-07-25T18:24:48.312-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49457 #1 (1 connection now open)
[MongoDFixture:job0] 2019-07-25T18:24:48.313-0400 I NETWORK [conn1] received client metadata from 127.0.0.1:49457 conn1: { driver: { name: "PyMongo", version: "3.8.0" }, os: { type: "Darwin", name: "Darwin", architecture: "x86_64", version: "10.14.5" }, platform: "CPython 3.7.3.final.0" }
[MongoDFixture:job0] 2019-07-25T18:24:48.315-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49458 #2 (2 connections now open)
[MongoDFixture:job0] 2019-07-25T18:24:48.316-0400 I NETWORK [conn2] received client metadata from 127.0.0.1:49458 conn2: { driver: { name: "PyMongo", version: "3.8.0" }, os: { type: "Darwin", name: "Darwin", architecture: "x86_64", version: "10.14.5" }, platform: "CPython 3.7.3.final.0" }
[MongoDFixture:job0] Successfully contacted the mongod on port 20000.
[js_test:job0_fixture_setup] 2019-07-25T18:24:48.317-0400 Finished the setup of MongoDFixture (Job #0).
[executor:js_test:job0] 2019-07-25T18:24:48.317-0400 job0_fixture_setup ran in 1.05 seconds: no failures detected.
[MongoDFixture:job0] 2019-07-25T18:24:48.317-0400 I NETWORK [conn2] end connection 127.0.0.1:49458 (1 connection now open)
[MongoDFixture:job0] 2019-07-25T18:24:48.317-0400 I NETWORK [conn1] end connection 127.0.0.1:49457 (0 connections now open)
[executor:js_test:job0] 2019-07-25T18:24:48.320-0400 Running configsvr_failover_repro.js...
./mongo --eval MongoRunner.dataDir = "/data/db/job0/mongorunner"; MongoRunner.dataPath = "/data/db/job0/mongorunner/"; MongoRunner.mongoShellPath = "/Users/jason.zhang/mongodb/mongo/mongo"; TestData = new Object(); TestData.minPort = 20020; TestData.maxPort = 20249; TestData.failIfUnterminatedProcesses = true; TestData.enableMajorityReadConcern = true; TestData.noJournal = false; TestData.serviceExecutor = ""; TestData.storageEngine = ""; TestData.storageEngineCacheSizeGB = ""; TestData.testName = "configsvr_failover_repro"; TestData.transportLayer = ""; TestData.wiredTigerCollectionConfigString = ""; TestData.wiredTigerEngineConfigString = ""; TestData.wiredTigerIndexConfigString = ""; TestData.setParameters = new Object(); TestData.setParameters.logComponentVerbosity = new Object(); TestData.setParameters.logComponentVerbosity.replication = new Object(); TestData.setParameters.logComponentVerbosity.replication.rollback = 2; TestData.setParameters.logComponentVerbosity.transaction = 4; TestData.setParametersMongos = new Object(); TestData.setParametersMongos.logComponentVerbosity = new Object(); TestData.setParametersMongos.logComponentVerbosity.transaction = 3; TestData.transactionLifetimeLimitSeconds = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20000 jstests/sharding/configsvr_failover_repro.js
[js_test:configsvr_failover_repro] 2019-07-25T18:24:48.320-0400 Starting JSTest jstests/sharding/configsvr_failover_repro.js...
./mongo --eval MongoRunner.dataDir = "/data/db/job0/mongorunner"; MongoRunner.dataPath = "/data/db/job0/mongorunner/"; MongoRunner.mongoShellPath = "/Users/jason.zhang/mongodb/mongo/mongo"; TestData = new Object(); TestData.minPort = 20020; TestData.maxPort = 20249; TestData.failIfUnterminatedProcesses = true; TestData.isMainTest = true; TestData.numTestClients = 1; TestData.enableMajorityReadConcern = true; TestData.noJournal = false; TestData.serviceExecutor = ""; TestData.storageEngine = ""; TestData.storageEngineCacheSizeGB = ""; TestData.testName = "configsvr_failover_repro"; TestData.transportLayer = ""; TestData.wiredTigerCollectionConfigString = ""; TestData.wiredTigerEngineConfigString = ""; TestData.wiredTigerIndexConfigString = ""; TestData.setParameters = new Object(); TestData.setParameters.logComponentVerbosity = new Object(); TestData.setParameters.logComponentVerbosity.replication = new Object(); TestData.setParameters.logComponentVerbosity.replication.rollback = 2; TestData.setParameters.logComponentVerbosity.transaction = 4; TestData.setParametersMongos = new Object(); TestData.setParametersMongos.logComponentVerbosity = new Object(); TestData.setParametersMongos.logComponentVerbosity.transaction = 3; TestData.transactionLifetimeLimitSeconds = 86400; load('jstests/libs/override_methods/validate_collections_on_shutdown.js');; load('jstests/libs/override_methods/check_uuids_consistent_across_cluster.js');; load('jstests/libs/override_methods/implicitly_retry_on_background_op_in_progress.js'); --readMode=commands mongodb://localhost:20000 jstests/sharding/configsvr_failover_repro.js
[js_test:configsvr_failover_repro] 2019-07-25T18:24:48.326-0400 JSTest jstests/sharding/configsvr_failover_repro.js started with pid 2744.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:48.534-0400 MongoDB shell version v4.3.0-703-g917d338
[js_test:configsvr_failover_repro] 2019-07-25T18:24:48.935-0400 connecting to: mongodb://localhost:20000/?compressors=disabled&gssapiServiceName=mongodb
[MongoDFixture:job0] 2019-07-25T18:24:48.936-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49459 #3 (1 connection now open)
[MongoDFixture:job0] 2019-07-25T18:24:48.936-0400 I NETWORK [conn3] received client metadata from 127.0.0.1:49459 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:48.950-0400 Implicit session: session { "id" : UUID("1ed16f4b-7705-4857-b0e0-547c9a9e36e9") }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:48.962-0400 MongoDB server version: 4.3.0-703-g917d338
[js_test:configsvr_failover_repro] 2019-07-25T18:24:48.972-0400 true
[js_test:configsvr_failover_repro] 2019-07-25T18:24:48.996-0400 Starting new replica set configsvr_failover_repro-rs0
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.000-0400 ReplSetTest starting set
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.002-0400 ReplSetTest n is : 0
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.011-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "useHostName" : true,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "oplogSize" : 16,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "keyFile" : undefined,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "port" : 20020,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "replSet" : "configsvr_failover_repro-rs0",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "dbpath" : "$set-$node",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "useHostname" : true,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "shardsvr" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "pathOpts" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "testName" : "configsvr_failover_repro",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "shard" : 0,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "node" : 0,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "set" : "configsvr_failover_repro-rs0"
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.012-0400 "setParameter" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.013-0400 "migrationLockAcquisitionMaxWaitMS" : 30000,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.013-0400 "writePeriodicNoops" : false,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.013-0400 "numInitialSyncConnectAttempts" : 60
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.013-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.013-0400 "restart" : undefined
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.013-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.013-0400 ReplSetTest Starting....
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.033-0400 Resetting db path '/data/db/job0/mongorunner/configsvr_failover_repro-rs0-0'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.084-0400 2019-07-25T18:24:49.083-0400 I - [js] shell: started program (sh2745): /Users/jason.zhang/mongodb/mongo/mongod --oplogSize 16 --port 20020 --replSet configsvr_failover_repro-rs0 --dbpath /data/db/job0/mongorunner/configsvr_failover_repro-rs0-0 --shardsvr --setParameter migrationLockAcquisitionMaxWaitMS=30000 --setParameter writePeriodicNoops=false --setParameter numInitialSyncConnectAttempts=60 --bind_ip 0.0.0.0 --setParameter enableTestCommands=1 --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter orphanCleanupDelaySecs=1 --enableMajorityReadConcern true --setParameter logComponentVerbosity={"replication":{"rollback":2},"transaction":4}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.178-0400 d20020| 2019-07-25T18:24:49.177-0400 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.234-0400 d20020| 2019-07-25T18:24:49.234-0400 I CONTROL [initandlisten] MongoDB starting : pid=2745 port=20020 dbpath=/data/db/job0/mongorunner/configsvr_failover_repro-rs0-0 64-bit host=Jasons-MacBook-Pro.local
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.234-0400 d20020| 2019-07-25T18:24:49.234-0400 I CONTROL [initandlisten] DEBUG build (which is slower)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.234-0400 d20020| 2019-07-25T18:24:49.234-0400 I CONTROL [initandlisten] db version v4.3.0-703-g917d338
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.234-0400 d20020| 2019-07-25T18:24:49.234-0400 I CONTROL [initandlisten] git version: 917d338c4bf52dc8dce2c0e585a676385e81ed1c
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.234-0400 d20020| 2019-07-25T18:24:49.234-0400 I CONTROL [initandlisten] allocator: system
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.234-0400 d20020| 2019-07-25T18:24:49.234-0400 I CONTROL [initandlisten] modules: enterprise ninja
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.234-0400 d20020| 2019-07-25T18:24:49.234-0400 I CONTROL [initandlisten] build environment:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.234-0400 d20020| 2019-07-25T18:24:49.234-0400 I CONTROL [initandlisten] distarch: x86_64
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.234-0400 d20020| 2019-07-25T18:24:49.234-0400 I CONTROL [initandlisten] target_arch: x86_64
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.234-0400 d20020| 2019-07-25T18:24:49.234-0400 I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 20020 }, replication: { enableMajorityReadConcern: true, oplogSizeMB: 16, replSet: "configsvr_failover_repro-rs0" }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{"replication":{"rollback":2},"transaction":4}", migrationLockAcquisitionMaxWaitMS: "30000", numInitialSyncConnectAttempts: "60", orphanCleanupDelaySecs: "1", transactionLifetimeLimitSeconds: "86400", writePeriodicNoops: "false" }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/db/job0/mongorunner/configsvr_failover_repro-rs0-0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.235-0400 d20020| 2019-07-25T18:24:49.235-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7680M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],debug_mode=(table_logging=true),
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.766-0400 d20020| 2019-07-25T18:24:49.765-0400 I STORAGE [initandlisten] WiredTiger message [1564093489:765915][2745:0x121cdf5c0], txn-recover: Set global recovery timestamp: (0,0)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.828-0400 d20020| 2019-07-25T18:24:49.827-0400 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.891-0400 d20020| 2019-07-25T18:24:49.891-0400 I STORAGE [initandlisten] Timestamp monitor starting
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.895-0400 d20020| 2019-07-25T18:24:49.895-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.895-0400 d20020| 2019-07-25T18:24:49.895-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (4.3.0-703-g917d338) of MongoDB.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.895-0400 d20020| 2019-07-25T18:24:49.895-0400 I CONTROL [initandlisten] ** Not recommended for production.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.895-0400 d20020| 2019-07-25T18:24:49.895-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.895-0400 d20020| 2019-07-25T18:24:49.895-0400 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.896-0400 d20020| 2019-07-25T18:24:49.895-0400 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.896-0400 d20020| 2019-07-25T18:24:49.895-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.900-0400 d20020| 2019-07-25T18:24:49.900-0400 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.901-0400 d20020| 2019-07-25T18:24:49.901-0400 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.902-0400 d20020| 2019-07-25T18:24:49.902-0400 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.902-0400 d20020| 2019-07-25T18:24:49.902-0400 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.903-0400 d20020| 2019-07-25T18:24:49.903-0400 W SHARDING [initandlisten] Started with --shardsvr, but no shardIdentity document was found on disk in admin.system.version. This most likely means this server has not yet been added to a sharded cluster.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.905-0400 d20020| 2019-07-25T18:24:49.905-0400 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 853d0bbb-8ea9-4c98-8372-510603570068 and options: { capped: true, size: 10485760 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.956-0400 d20020| 2019-07-25T18:24:49.956-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.957-0400 d20020| 2019-07-25T18:24:49.957-0400 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.957-0400 d20020| 2019-07-25T18:24:49.957-0400 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/job0/mongorunner/configsvr_failover_repro-rs0-0/diagnostic.data'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:49.958-0400 d20020| 2019-07-25T18:24:49.958-0400 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 5ca27ba1-8f6a-47f5-9625-a3de4098e12f and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.011-0400 d20020| 2019-07-25T18:24:50.010-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.011-0400 d20020| 2019-07-25T18:24:50.011-0400 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: e8263b28-ca9a-4c97-941d-fb2dc499603f and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.013-0400 d20020| 2019-07-25T18:24:50.013-0400 W REPL [ftdc] Rollback ID is not initialized yet.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.064-0400 d20020| 2019-07-25T18:24:50.064-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.065-0400 d20020| 2019-07-25T18:24:50.065-0400 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.065-0400 d20020| 2019-07-25T18:24:50.065-0400 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.066-0400 d20020| 2019-07-25T18:24:50.066-0400 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: 9ccc327f-c6ab-4ca2-b7fa-6a714cac8cf7 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.117-0400 d20020| 2019-07-25T18:24:50.117-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.118-0400 d20020| 2019-07-25T18:24:50.118-0400 I SHARDING [initandlisten] Marking collection local.replset.election as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.120-0400 d20020| 2019-07-25T18:24:50.120-0400 I REPL [initandlisten] Did not find local initialized voted for document at startup.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.120-0400 d20020| 2019-07-25T18:24:50.120-0400 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.121-0400 d20020| 2019-07-25T18:24:50.120-0400 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: b624fc86-aa3a-4115-8ae9-6fa3be0dbf2c and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.177-0400 d20020| 2019-07-25T18:24:50.177-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.179-0400 d20020| 2019-07-25T18:24:50.179-0400 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.179-0400 d20020| 2019-07-25T18:24:50.179-0400 I REPL [initandlisten] Initialized the rollback ID to 1
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.179-0400 d20020| 2019-07-25T18:24:50.179-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.180-0400 d20020| 2019-07-25T18:24:50.180-0400 I NETWORK [initandlisten] Listening on /tmp/mongodb-20020.sock
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.180-0400 d20020| 2019-07-25T18:24:50.180-0400 I NETWORK [initandlisten] Listening on 0.0.0.0
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.180-0400 d20020| 2019-07-25T18:24:50.180-0400 I NETWORK [initandlisten] waiting for connections on port 20020
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.405-0400 d20020| 2019-07-25T18:24:50.404-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49461 #1 (1 connection now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.406-0400 d20020| 2019-07-25T18:24:50.406-0400 I NETWORK [conn1] received client metadata from 127.0.0.1:49461 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.423-0400 [ connection to Jasons-MacBook-Pro.local:20020 ]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.433-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.433-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.433-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.433-0400 [jsTest] New session started with sessionID: { "id" : UUID("5c6b2b51-4daa-4b01-98d8-ec4db4b4ba86") } and options: { "causalConsistency" : false }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.433-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.433-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.433-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.461-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.461-0400 "replSetInitiate" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.461-0400 "_id" : "configsvr_failover_repro-rs0",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.461-0400 "protocolVersion" : 1,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.462-0400 "members" : [
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.462-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.462-0400 "_id" : 0,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.462-0400 "host" : "Jasons-MacBook-Pro.local:20020"
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.462-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.462-0400 ]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.462-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.462-0400 }
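The `replSetInitiate` document the shell prints above can be reproduced programmatically. A minimal sketch in Python mirroring that document (the hostname is the one from this log; the PyMongo call in the trailing comment assumes a reachable uninitialized replica-set node):

```python
# Mirror of the replSetInitiate document shown in the log above.
host = "Jasons-MacBook-Pro.local:20020"
replset_initiate = {
    "replSetInitiate": {
        "_id": "configsvr_failover_repro-rs0",
        "protocolVersion": 1,
        "members": [
            {"_id": 0, "host": host},
        ],
    }
}
# With PyMongo this would be issued as an admin command, e.g.:
#   client.admin.command(replset_initiate)  # requires a running --replSet mongod
```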
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.470-0400 d20020| 2019-07-25T18:24:50.470-0400 I REPL [conn1] replSetInitiate admin command received from client
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.473-0400 d20020| 2019-07-25T18:24:50.473-0400 I REPL [conn1] replSetInitiate config object with 1 members parses ok
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.473-0400 d20020| 2019-07-25T18:24:50.473-0400 I REPL [conn1] ******
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.473-0400 d20020| 2019-07-25T18:24:50.473-0400 I REPL [conn1] creating replication oplog of size: 16MB...
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.473-0400 d20020| 2019-07-25T18:24:50.473-0400 I STORAGE [conn1] createCollection: local.oplog.rs with generated UUID: 08673422-206a-444f-b25a-f5d97d203ce4 and options: { capped: true, size: 16777216, autoIndexId: false }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.502-0400 d20020| 2019-07-25T18:24:50.502-0400 I STORAGE [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.502-0400 d20020| 2019-07-25T18:24:50.502-0400 I STORAGE [conn1] Scanning the oplog to determine where to place markers for truncation
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.695-0400 d20020| 2019-07-25T18:24:50.694-0400 I REPL [conn1] ******
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.695-0400 d20020| 2019-07-25T18:24:50.695-0400 I STORAGE [conn1] createCollection: local.system.replset with generated UUID: e035716a-4d5c-43a4-9f25-4a1cadba3ea8 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.746-0400 d20020| 2019-07-25T18:24:50.746-0400 I INDEX [conn1] index build: done building index _id_ on ns local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.757-0400 d20020| 2019-07-25T18:24:50.757-0400 I STORAGE [conn1] createCollection: admin.system.version with provided UUID: bc985b18-dc55-4ec5-9923-34d0a77fadef and options: { uuid: UUID("bc985b18-dc55-4ec5-9923-34d0a77fadef") }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.824-0400 d20020| 2019-07-25T18:24:50.823-0400 I INDEX [conn1] index build: done building index _id_ on ns admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.825-0400 d20020| 2019-07-25T18:24:50.824-0400 I COMMAND [conn1] setting featureCompatibilityVersion to 4.0
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.826-0400 d20020| 2019-07-25T18:24:50.826-0400 I REPL [conn1] New replica set config in use: { _id: "configsvr_failover_repro-rs0", version: 1, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "Jasons-MacBook-Pro.local:20020", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5d3a2c322d71daf4c4e5f00b') } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.826-0400 d20020| 2019-07-25T18:24:50.826-0400 I REPL [conn1] This node is Jasons-MacBook-Pro.local:20020 in the config
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.826-0400 d20020| 2019-07-25T18:24:50.826-0400 I REPL [conn1] transition to STARTUP2 from STARTUP
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.826-0400 d20020| 2019-07-25T18:24:50.826-0400 I REPL [conn1] Starting replication storage threads
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.827-0400 d20020| 2019-07-25T18:24:50.827-0400 I REPL [conn1] transition to RECOVERING from STARTUP2
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.827-0400 d20020| 2019-07-25T18:24:50.827-0400 I REPL [conn1] Starting replication fetcher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.827-0400 d20020| 2019-07-25T18:24:50.827-0400 I REPL [conn1] Starting replication applier thread
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.828-0400 d20020| 2019-07-25T18:24:50.827-0400 I REPL [conn1] Starting replication reporter thread
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.828-0400 d20020| 2019-07-25T18:24:50.827-0400 I REPL [rsSync-0] Starting oplog application
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.828-0400 d20020| 2019-07-25T18:24:50.827-0400 I COMMAND [conn1] command local.system.replset appName: "MongoDB Shell" command: replSetInitiate { replSetInitiate: { _id: "configsvr_failover_repro-rs0", protocolVersion: 1.0, members: [ { _id: 0.0, host: "Jasons-MacBook-Pro.local:20020" } ] }, lsid: { id: UUID("5c6b2b51-4daa-4b01-98d8-ec4db4b4ba86") }, $clusterTime: { clusterTime: Timestamp(0, 0), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:163 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 10 } }, ReplicationStateTransition: { acquireCount: { w: 10 } }, Global: { acquireCount: { r: 3, w: 5, W: 2 } }, Database: { acquireCount: { r: 2, w: 2, W: 3 } }, Collection: { acquireCount: { r: 2, w: 2 } }, Mutex: { acquireCount: { r: 9 } }, oplog: { acquireCount: { r: 1, w: 1 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 357ms
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.829-0400 d20020| 2019-07-25T18:24:50.829-0400 I REPL [rsSync-0] transition to SECONDARY from RECOVERING
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.829-0400 d20020| 2019-07-25T18:24:50.829-0400 I ELECTION [rsSync-0] conducting a dry run election to see if we could be elected. current term: 0
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.830-0400 d20020| 2019-07-25T18:24:50.829-0400 I ELECTION [replexec-0] dry election run succeeded, running for election in term 1
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.839-0400 d20020| 2019-07-25T18:24:50.839-0400 I ELECTION [replexec-0] election succeeded, assuming primary role in term 1
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.839-0400 d20020| 2019-07-25T18:24:50.839-0400 I REPL [replexec-0] transition to PRIMARY from SECONDARY
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.839-0400 d20020| 2019-07-25T18:24:50.839-0400 I REPL [replexec-0] Resetting sync source to empty, which was :27017
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.839-0400 d20020| 2019-07-25T18:24:50.839-0400 I REPL [replexec-0] Entering primary catch-up mode.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.839-0400 d20020| 2019-07-25T18:24:50.839-0400 I REPL [replexec-0] Exited primary catch-up mode.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:50.839-0400 d20020| 2019-07-25T18:24:50.839-0400 I REPL [replexec-0] Stopping replication producer
[js_test:configsvr_failover_repro] 2019-07-25T18:24:51.831-0400 d20020| 2019-07-25T18:24:51.830-0400 I REPL [RstlKillOpThread] Starting to kill user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:24:51.831-0400 d20020| 2019-07-25T18:24:51.831-0400 I REPL [RstlKillOpThread] Stopped killing user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:24:52.837-0400 d20020| 2019-07-25T18:24:52.837-0400 I REPL [RstlKillOpThread] Starting to kill user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:24:52.838-0400 d20020| 2019-07-25T18:24:52.837-0400 I REPL [RstlKillOpThread] Stopped killing user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:24:52.839-0400 d20020| 2019-07-25T18:24:52.838-0400 I SHARDING [rsSync-0] Marking collection config.transactions as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:52.839-0400 d20020| 2019-07-25T18:24:52.839-0400 I STORAGE [rsSync-0] createCollection: config.transactions with generated UUID: f9f26136-f846-44b6-bee2-23bf19357d18 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:52.904-0400 d20020| 2019-07-25T18:24:52.904-0400 I INDEX [rsSync-0] index build: done building index _id_ on ns config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:24:52.907-0400 d20020| 2019-07-25T18:24:52.907-0400 I REPL [rsSync-0] transition to primary complete; database writes are now permitted
[js_test:configsvr_failover_repro] 2019-07-25T18:24:52.991-0400 d20020| 2019-07-25T18:24:52.991-0400 I STORAGE [WTJournalFlusher] Triggering the first stable checkpoint. Initial Data: Timestamp(1564093490, 1) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1564093492, 2)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.041-0400 AwaitNodesAgreeOnPrimary: Waiting for nodes to agree on any primary.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.051-0400 AwaitNodesAgreeOnPrimary: Nodes agreed on primary Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.062-0400 Set shouldWaitForKeys from RS options: false
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.073-0400 AwaitLastStableRecoveryTimestamp: Beginning for [ "Jasons-MacBook-Pro.local:20020" ]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.086-0400 AwaitNodesAgreeOnPrimary: Waiting for nodes to agree on any primary.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.096-0400 AwaitNodesAgreeOnPrimary: Nodes agreed on primary Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.107-0400 AwaitLastStableRecoveryTimestamp: ensuring the commit point advances for [ "Jasons-MacBook-Pro.local:20020" ]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.145-0400 AwaitLastStableRecoveryTimestamp: Waiting for stable recovery timestamps for [ "Jasons-MacBook-Pro.local:20020" ]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.158-0400 AwaitLastStableRecoveryTimestamp: A stable recovery timestamp has successfully established on [ "Jasons-MacBook-Pro.local:20020" ]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.192-0400 d20020| 2019-07-25T18:24:53.192-0400 I SHARDING [conn1] Marking collection admin.foo as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.192-0400 d20020| 2019-07-25T18:24:53.192-0400 I STORAGE [conn1] createCollection: admin.foo with generated UUID: ccb6f54c-06bc-4983-8255-a57c7806c0d8 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.250-0400 d20020| 2019-07-25T18:24:53.250-0400 I INDEX [conn1] index build: done building index _id_ on ns admin.foo
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.268-0400 2019-07-25T18:24:53.268-0400 I NETWORK [js] Starting new replica set monitor for configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.269-0400 2019-07-25T18:24:53.269-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.270-0400 d20020| 2019-07-25T18:24:53.270-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49462 #2 (2 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.271-0400 d20020| 2019-07-25T18:24:53.271-0400 I NETWORK [conn2] received client metadata from 127.0.0.1:49462 conn2: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.272-0400 2019-07-25T18:24:53.272-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for configsvr_failover_repro-rs0 is configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.274-0400 Starting new replica set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.275-0400 ReplSetTest starting set
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.275-0400 ReplSetTest n is : 0
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "useHostName" : true,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "oplogSize" : 40,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "keyFile" : undefined,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "port" : 20021,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "replSet" : "configsvr_failover_repro-configRS",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "dbpath" : "$set-$node",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "pathOpts" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "testName" : "configsvr_failover_repro",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "node" : 0,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "set" : "configsvr_failover_repro-configRS"
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "journal" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "configsvr" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.283-0400 "storageEngine" : "wiredTiger",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.284-0400 "restart" : undefined,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.284-0400 "setParameter" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.284-0400 "writePeriodicNoops" : false,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.284-0400 "numInitialSyncConnectAttempts" : 60
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.284-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.284-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.284-0400 ReplSetTest Starting....
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.298-0400 Resetting db path '/data/db/job0/mongorunner/configsvr_failover_repro-configRS-0'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.336-0400 2019-07-25T18:24:53.336-0400 I - [js] shell: started program (sh2746): /Users/jason.zhang/mongodb/mongo/mongod --oplogSize 40 --port 20021 --replSet configsvr_failover_repro-configRS --dbpath /data/db/job0/mongorunner/configsvr_failover_repro-configRS-0 --journal --configsvr --storageEngine wiredTiger --setParameter writePeriodicNoops=false --setParameter numInitialSyncConnectAttempts=60 --bind_ip 0.0.0.0 --setParameter enableTestCommands=1 --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter orphanCleanupDelaySecs=1 --enableMajorityReadConcern true --setParameter logComponentVerbosity={"replication":{"rollback":2},"transaction":4}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.480-0400 c20021| 2019-07-25T18:24:53.479-0400 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.488-0400 c20021| 2019-07-25T18:24:53.488-0400 I CONTROL [initandlisten] MongoDB starting : pid=2746 port=20021 dbpath=/data/db/job0/mongorunner/configsvr_failover_repro-configRS-0 64-bit host=Jasons-MacBook-Pro.local
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.488-0400 c20021| 2019-07-25T18:24:53.488-0400 I CONTROL [initandlisten] DEBUG build (which is slower)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.488-0400 c20021| 2019-07-25T18:24:53.488-0400 I CONTROL [initandlisten] db version v4.3.0-703-g917d338
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.488-0400 c20021| 2019-07-25T18:24:53.488-0400 I CONTROL [initandlisten] git version: 917d338c4bf52dc8dce2c0e585a676385e81ed1c
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.488-0400 c20021| 2019-07-25T18:24:53.488-0400 I CONTROL [initandlisten] allocator: system
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.488-0400 c20021| 2019-07-25T18:24:53.488-0400 I CONTROL [initandlisten] modules: enterprise ninja
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.488-0400 c20021| 2019-07-25T18:24:53.488-0400 I CONTROL [initandlisten] build environment:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.488-0400 c20021| 2019-07-25T18:24:53.488-0400 I CONTROL [initandlisten] distarch: x86_64
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.488-0400 c20021| 2019-07-25T18:24:53.488-0400 I CONTROL [initandlisten] target_arch: x86_64
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.488-0400 c20021| 2019-07-25T18:24:53.488-0400 I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 20021 }, replication: { enableMajorityReadConcern: true, oplogSizeMB: 40, replSet: "configsvr_failover_repro-configRS" }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{"replication":{"rollback":2},"transaction":4}", numInitialSyncConnectAttempts: "60", orphanCleanupDelaySecs: "1", transactionLifetimeLimitSeconds: "86400", writePeriodicNoops: "false" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db/job0/mongorunner/configsvr_failover_repro-configRS-0", engine: "wiredTiger", journal: { enabled: true } } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:53.489-0400 c20021| 2019-07-25T18:24:53.489-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7680M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],debug_mode=(table_logging=true),
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.024-0400 c20021| 2019-07-25T18:24:54.024-0400 I STORAGE [initandlisten] WiredTiger message [1564093494:24277][2746:0x1290945c0], txn-recover: Set global recovery timestamp: (0,0)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.069-0400 c20021| 2019-07-25T18:24:54.069-0400 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.121-0400 c20021| 2019-07-25T18:24:54.121-0400 I STORAGE [initandlisten] Timestamp monitor starting
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.122-0400 c20021| 2019-07-25T18:24:54.122-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.122-0400 c20021| 2019-07-25T18:24:54.122-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (4.3.0-703-g917d338) of MongoDB.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.122-0400 c20021| 2019-07-25T18:24:54.122-0400 I CONTROL [initandlisten] ** Not recommended for production.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.122-0400 c20021| 2019-07-25T18:24:54.122-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.122-0400 c20021| 2019-07-25T18:24:54.122-0400 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.122-0400 c20021| 2019-07-25T18:24:54.122-0400 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.123-0400 c20021| 2019-07-25T18:24:54.122-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.127-0400 c20021| 2019-07-25T18:24:54.127-0400 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.127-0400 c20021| 2019-07-25T18:24:54.127-0400 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.128-0400 c20021| 2019-07-25T18:24:54.128-0400 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.129-0400 c20021| 2019-07-25T18:24:54.129-0400 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.131-0400 c20021| 2019-07-25T18:24:54.131-0400 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 84420a9d-4c68-4563-849d-b250fd14e9f7 and options: { capped: true, size: 10485760 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.179-0400 c20021| 2019-07-25T18:24:54.179-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.180-0400 c20021| 2019-07-25T18:24:54.180-0400 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.180-0400 c20021| 2019-07-25T18:24:54.180-0400 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/job0/mongorunner/configsvr_failover_repro-configRS-0/diagnostic.data'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.183-0400 c20021| 2019-07-25T18:24:54.183-0400 I SHARDING [thread1] creating distributed lock ping thread for process ConfigServer (sleeping for 30000ms)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.183-0400 c20021| 2019-07-25T18:24:54.183-0400 I SHARDING [shard-registry-reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.184-0400 c20021| 2019-07-25T18:24:54.184-0400 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 96b5a123-08a3-4ab4-8639-2e342910c9d3 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.234-0400 c20021| 2019-07-25T18:24:54.234-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.235-0400 c20021| 2019-07-25T18:24:54.235-0400 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: e35ccdb9-1979-4a0e-afcc-2ad0c80ef4a3 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.285-0400 c20021| 2019-07-25T18:24:54.284-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.286-0400 c20021| 2019-07-25T18:24:54.286-0400 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.287-0400 c20021| 2019-07-25T18:24:54.287-0400 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: f6e92dc2-1307-4a6a-8605-87b9037fa27b and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.334-0400 c20021| 2019-07-25T18:24:54.334-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.335-0400 c20021| 2019-07-25T18:24:54.335-0400 I SHARDING [initandlisten] Marking collection local.replset.election as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.336-0400 c20021| 2019-07-25T18:24:54.336-0400 I REPL [initandlisten] Did not find local initialized voted for document at startup.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.336-0400 c20021| 2019-07-25T18:24:54.336-0400 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.337-0400 c20021| 2019-07-25T18:24:54.337-0400 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: 0f09bfc1-1827-4436-b3ef-50bc9cb16e6c and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.383-0400 c20021| 2019-07-25T18:24:54.383-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.384-0400 c20021| 2019-07-25T18:24:54.384-0400 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.385-0400 c20021| 2019-07-25T18:24:54.384-0400 I REPL [initandlisten] Initialized the rollback ID to 1
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.385-0400 c20021| 2019-07-25T18:24:54.385-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.386-0400 c20021| 2019-07-25T18:24:54.386-0400 I NETWORK [initandlisten] Listening on /tmp/mongodb-20021.sock
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.386-0400 c20021| 2019-07-25T18:24:54.386-0400 I NETWORK [initandlisten] Listening on 0.0.0.0
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.387-0400 c20021| 2019-07-25T18:24:54.386-0400 I NETWORK [initandlisten] waiting for connections on port 20021
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.655-0400 c20021| 2019-07-25T18:24:54.654-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49464 #1 (1 connection now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.655-0400 c20021| 2019-07-25T18:24:54.655-0400 I NETWORK [conn1] received client metadata from 127.0.0.1:49464 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.671-0400 [ connection to Jasons-MacBook-Pro.local:20021 ]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.672-0400 ReplSetTest n is : 1
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.678-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.678-0400 "useHostName" : true,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "oplogSize" : 40,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "keyFile" : undefined,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "port" : 20022,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "replSet" : "configsvr_failover_repro-configRS",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "dbpath" : "$set-$node",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "pathOpts" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "testName" : "configsvr_failover_repro",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "node" : 1,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "set" : "configsvr_failover_repro-configRS"
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "journal" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "configsvr" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "storageEngine" : "wiredTiger",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "restart" : undefined,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "setParameter" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "writePeriodicNoops" : false,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 "numInitialSyncConnectAttempts" : 60
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.679-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.680-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.680-0400 ReplSetTest Starting....
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.690-0400 Resetting db path '/data/db/job0/mongorunner/configsvr_failover_repro-configRS-1'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.727-0400 2019-07-25T18:24:54.727-0400 I - [js] shell: started program (sh2747): /Users/jason.zhang/mongodb/mongo/mongod --oplogSize 40 --port 20022 --replSet configsvr_failover_repro-configRS --dbpath /data/db/job0/mongorunner/configsvr_failover_repro-configRS-1 --journal --configsvr --storageEngine wiredTiger --setParameter writePeriodicNoops=false --setParameter numInitialSyncConnectAttempts=60 --bind_ip 0.0.0.0 --setParameter enableTestCommands=1 --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter orphanCleanupDelaySecs=1 --enableMajorityReadConcern true --setParameter logComponentVerbosity={"replication":{"rollback":2},"transaction":4}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.842-0400 c20022| 2019-07-25T18:24:54.841-0400 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.886-0400 c20022| 2019-07-25T18:24:54.886-0400 I CONTROL [initandlisten] MongoDB starting : pid=2747 port=20022 dbpath=/data/db/job0/mongorunner/configsvr_failover_repro-configRS-1 64-bit host=Jasons-MacBook-Pro.local
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.886-0400 c20022| 2019-07-25T18:24:54.886-0400 I CONTROL [initandlisten] DEBUG build (which is slower)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.886-0400 c20022| 2019-07-25T18:24:54.886-0400 I CONTROL [initandlisten] db version v4.3.0-703-g917d338
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.886-0400 c20022| 2019-07-25T18:24:54.886-0400 I CONTROL [initandlisten] git version: 917d338c4bf52dc8dce2c0e585a676385e81ed1c
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.887-0400 c20022| 2019-07-25T18:24:54.886-0400 I CONTROL [initandlisten] allocator: system
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.887-0400 c20022| 2019-07-25T18:24:54.886-0400 I CONTROL [initandlisten] modules: enterprise ninja
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.887-0400 c20022| 2019-07-25T18:24:54.886-0400 I CONTROL [initandlisten] build environment:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.887-0400 c20022| 2019-07-25T18:24:54.886-0400 I CONTROL [initandlisten] distarch: x86_64
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.887-0400 c20022| 2019-07-25T18:24:54.886-0400 I CONTROL [initandlisten] target_arch: x86_64
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.887-0400 c20022| 2019-07-25T18:24:54.886-0400 I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 20022 }, replication: { enableMajorityReadConcern: true, oplogSizeMB: 40, replSet: "configsvr_failover_repro-configRS" }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{"replication":{"rollback":2},"transaction":4}", numInitialSyncConnectAttempts: "60", orphanCleanupDelaySecs: "1", transactionLifetimeLimitSeconds: "86400", writePeriodicNoops: "false" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db/job0/mongorunner/configsvr_failover_repro-configRS-1", engine: "wiredTiger", journal: { enabled: true } } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:54.887-0400 c20022| 2019-07-25T18:24:54.887-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7680M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],debug_mode=(table_logging=true),
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.008-0400 c20021| 2019-07-25T18:24:55.008-0400 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.415-0400 c20022| 2019-07-25T18:24:55.415-0400 I STORAGE [initandlisten] WiredTiger message [1564093495:415068][2747:0x12e8775c0], txn-recover: Set global recovery timestamp: (0,0)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.460-0400 c20022| 2019-07-25T18:24:55.460-0400 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.546-0400 c20022| 2019-07-25T18:24:55.546-0400 I STORAGE [initandlisten] Timestamp monitor starting
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.551-0400 c20022| 2019-07-25T18:24:55.550-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.551-0400 c20022| 2019-07-25T18:24:55.550-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (4.3.0-703-g917d338) of MongoDB.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.551-0400 c20022| 2019-07-25T18:24:55.550-0400 I CONTROL [initandlisten] ** Not recommended for production.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.551-0400 c20022| 2019-07-25T18:24:55.550-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.551-0400 c20022| 2019-07-25T18:24:55.551-0400 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.551-0400 c20022| 2019-07-25T18:24:55.551-0400 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.551-0400 c20022| 2019-07-25T18:24:55.551-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.556-0400 c20022| 2019-07-25T18:24:55.556-0400 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.556-0400 c20022| 2019-07-25T18:24:55.556-0400 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.557-0400 c20022| 2019-07-25T18:24:55.557-0400 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.558-0400 c20022| 2019-07-25T18:24:55.558-0400 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.560-0400 c20022| 2019-07-25T18:24:55.560-0400 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 4d892c1c-600f-4312-98d1-33586fe9f3ec and options: { capped: true, size: 10485760 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.609-0400 c20022| 2019-07-25T18:24:55.608-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.610-0400 c20022| 2019-07-25T18:24:55.610-0400 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.610-0400 c20022| 2019-07-25T18:24:55.610-0400 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/job0/mongorunner/configsvr_failover_repro-configRS-1/diagnostic.data'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.614-0400 c20022| 2019-07-25T18:24:55.613-0400 I SHARDING [thread1] creating distributed lock ping thread for process ConfigServer (sleeping for 30000ms)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.614-0400 c20022| 2019-07-25T18:24:55.613-0400 I SHARDING [shard-registry-reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.614-0400 c20022| 2019-07-25T18:24:55.614-0400 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 6ffcae3b-70ce-43e9-bb8d-7c62c9daa35c and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.664-0400 c20022| 2019-07-25T18:24:55.663-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.664-0400 c20022| 2019-07-25T18:24:55.664-0400 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: 5a4f9fe6-2cc5-49a2-ac03-9e5410450c17 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.714-0400 c20022| 2019-07-25T18:24:55.714-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.716-0400 c20022| 2019-07-25T18:24:55.716-0400 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.717-0400 c20022| 2019-07-25T18:24:55.716-0400 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: 0a8ecaac-2b80-4d37-bf64-ea6fbd07a116 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.766-0400 c20022| 2019-07-25T18:24:55.765-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.766-0400 c20022| 2019-07-25T18:24:55.766-0400 I SHARDING [initandlisten] Marking collection local.replset.election as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.768-0400 c20022| 2019-07-25T18:24:55.768-0400 I REPL [initandlisten] Did not find local initialized voted for document at startup.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.768-0400 c20022| 2019-07-25T18:24:55.768-0400 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.768-0400 c20022| 2019-07-25T18:24:55.768-0400 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: 43a8c4fd-c300-4cd6-bbb7-772dcc208de2 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.821-0400 c20022| 2019-07-25T18:24:55.821-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.822-0400 c20022| 2019-07-25T18:24:55.822-0400 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.823-0400 c20022| 2019-07-25T18:24:55.822-0400 I REPL [initandlisten] Initialized the rollback ID to 1
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.823-0400 c20022| 2019-07-25T18:24:55.823-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.824-0400 c20022| 2019-07-25T18:24:55.824-0400 I NETWORK [initandlisten] Listening on /tmp/mongodb-20022.sock
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.824-0400 c20022| 2019-07-25T18:24:55.824-0400 I NETWORK [initandlisten] Listening on 0.0.0.0
[js_test:configsvr_failover_repro] 2019-07-25T18:24:55.824-0400 c20022| 2019-07-25T18:24:55.824-0400 I NETWORK [initandlisten] waiting for connections on port 20022
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.013-0400 c20022| 2019-07-25T18:24:56.013-0400 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.043-0400 c20022| 2019-07-25T18:24:56.043-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49466 #1 (1 connection now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.043-0400 c20022| 2019-07-25T18:24:56.043-0400 I NETWORK [conn1] received client metadata from 127.0.0.1:49466 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.062-0400 [
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.062-0400 connection to Jasons-MacBook-Pro.local:20021,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.062-0400 connection to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.062-0400 ]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.062-0400 ReplSetTest n is : 2
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 "useHostName" : true,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 "oplogSize" : 40,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 "keyFile" : undefined,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 "port" : 20023,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 "replSet" : "configsvr_failover_repro-configRS",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 "dbpath" : "$set-$node",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 "pathOpts" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 "testName" : "configsvr_failover_repro",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 "node" : 2,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 "set" : "configsvr_failover_repro-configRS"
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.069-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.070-0400 "journal" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.070-0400 "configsvr" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.070-0400 "storageEngine" : "wiredTiger",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.070-0400 "restart" : undefined,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.070-0400 "setParameter" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.070-0400 "writePeriodicNoops" : false,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.070-0400 "numInitialSyncConnectAttempts" : 60
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.070-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.070-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.070-0400 ReplSetTest Starting....
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.081-0400 Resetting db path '/data/db/job0/mongorunner/configsvr_failover_repro-configRS-2'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.120-0400 2019-07-25T18:24:56.120-0400 I - [js] shell: started program (sh2748): /Users/jason.zhang/mongodb/mongo/mongod --oplogSize 40 --port 20023 --replSet configsvr_failover_repro-configRS --dbpath /data/db/job0/mongorunner/configsvr_failover_repro-configRS-2 --journal --configsvr --storageEngine wiredTiger --setParameter writePeriodicNoops=false --setParameter numInitialSyncConnectAttempts=60 --bind_ip 0.0.0.0 --setParameter enableTestCommands=1 --setParameter disableLogicalSessionCacheRefresh=true --setParameter transactionLifetimeLimitSeconds=86400 --setParameter orphanCleanupDelaySecs=1 --enableMajorityReadConcern true --setParameter logComponentVerbosity={"replication":{"rollback":2},"transaction":4}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.266-0400 c20023| 2019-07-25T18:24:56.265-0400 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.273-0400 c20023| 2019-07-25T18:24:56.273-0400 I CONTROL [initandlisten] MongoDB starting : pid=2748 port=20023 dbpath=/data/db/job0/mongorunner/configsvr_failover_repro-configRS-2 64-bit host=Jasons-MacBook-Pro.local
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.273-0400 c20023| 2019-07-25T18:24:56.273-0400 I CONTROL [initandlisten] DEBUG build (which is slower)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.273-0400 c20023| 2019-07-25T18:24:56.273-0400 I CONTROL [initandlisten] db version v4.3.0-703-g917d338
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.273-0400 c20023| 2019-07-25T18:24:56.273-0400 I CONTROL [initandlisten] git version: 917d338c4bf52dc8dce2c0e585a676385e81ed1c
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.273-0400 c20023| 2019-07-25T18:24:56.273-0400 I CONTROL [initandlisten] allocator: system
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.274-0400 c20023| 2019-07-25T18:24:56.273-0400 I CONTROL [initandlisten] modules: enterprise ninja
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.274-0400 c20023| 2019-07-25T18:24:56.273-0400 I CONTROL [initandlisten] build environment:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.274-0400 c20023| 2019-07-25T18:24:56.273-0400 I CONTROL [initandlisten] distarch: x86_64
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.274-0400 c20023| 2019-07-25T18:24:56.273-0400 I CONTROL [initandlisten] target_arch: x86_64
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.274-0400 c20023| 2019-07-25T18:24:56.273-0400 I CONTROL [initandlisten] options: { net: { bindIp: "0.0.0.0", port: 20023 }, replication: { enableMajorityReadConcern: true, oplogSizeMB: 40, replSet: "configsvr_failover_repro-configRS" }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{"replication":{"rollback":2},"transaction":4}", numInitialSyncConnectAttempts: "60", orphanCleanupDelaySecs: "1", transactionLifetimeLimitSeconds: "86400", writePeriodicNoops: "false" }, sharding: { clusterRole: "configsvr" }, storage: { dbPath: "/data/db/job0/mongorunner/configsvr_failover_repro-configRS-2", engine: "wiredTiger", journal: { enabled: true } } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.274-0400 c20023| 2019-07-25T18:24:56.274-0400 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7680M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],debug_mode=(table_logging=true),
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.806-0400 c20023| 2019-07-25T18:24:56.806-0400 I STORAGE [initandlisten] WiredTiger message [1564093496:806176][2748:0x125bb85c0], txn-recover: Set global recovery timestamp: (0,0)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.854-0400 c20023| 2019-07-25T18:24:56.854-0400 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.926-0400 c20023| 2019-07-25T18:24:56.926-0400 I STORAGE [initandlisten] Timestamp monitor starting
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.930-0400 c20023| 2019-07-25T18:24:56.930-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.930-0400 c20023| 2019-07-25T18:24:56.930-0400 I CONTROL [initandlisten] ** NOTE: This is a development version (4.3.0-703-g917d338) of MongoDB.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.931-0400 c20023| 2019-07-25T18:24:56.930-0400 I CONTROL [initandlisten] ** Not recommended for production.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.931-0400 c20023| 2019-07-25T18:24:56.930-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.931-0400 c20023| 2019-07-25T18:24:56.930-0400 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.931-0400 c20023| 2019-07-25T18:24:56.930-0400 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.931-0400 c20023| 2019-07-25T18:24:56.930-0400 I CONTROL [initandlisten]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.936-0400 c20023| 2019-07-25T18:24:56.936-0400 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.936-0400 c20023| 2019-07-25T18:24:56.936-0400 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.937-0400 c20023| 2019-07-25T18:24:56.937-0400 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.938-0400 c20023| 2019-07-25T18:24:56.938-0400 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.940-0400 c20023| 2019-07-25T18:24:56.940-0400 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: 785c5cbb-6c24-4c5f-8954-dab243d20759 and options: { capped: true, size: 10485760 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.990-0400 c20023| 2019-07-25T18:24:56.990-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.991-0400 c20023| 2019-07-25T18:24:56.991-0400 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.992-0400 c20023| 2019-07-25T18:24:56.992-0400 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/job0/mongorunner/configsvr_failover_repro-configRS-2/diagnostic.data'
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.995-0400 c20023| 2019-07-25T18:24:56.995-0400 I SHARDING [thread1] creating distributed lock ping thread for process ConfigServer (sleeping for 30000ms)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.995-0400 c20023| 2019-07-25T18:24:56.995-0400 I SHARDING [shard-registry-reload] Periodic reload of shard registry failed :: caused by :: ReadConcernMajorityNotAvailableYet: could not get updated shard list from config server :: caused by :: Read concern majority reads are currently not possible.; will retry after 30s
[js_test:configsvr_failover_repro] 2019-07-25T18:24:56.996-0400 c20023| 2019-07-25T18:24:56.996-0400 I STORAGE [initandlisten] createCollection: local.replset.oplogTruncateAfterPoint with generated UUID: 80f29b0c-4a5a-4fdb-871a-4f092019c759 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.009-0400 c20023| 2019-07-25T18:24:57.009-0400 W REPL [ftdc] Rollback ID is not initialized yet.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.044-0400 c20023| 2019-07-25T18:24:57.044-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.045-0400 c20023| 2019-07-25T18:24:57.045-0400 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.045-0400 c20023| 2019-07-25T18:24:57.045-0400 I STORAGE [initandlisten] createCollection: local.replset.minvalid with generated UUID: 5c18f3d3-fafa-400b-93c0-2cebd245146d and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.094-0400 c20023| 2019-07-25T18:24:57.094-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.096-0400 c20023| 2019-07-25T18:24:57.095-0400 I SHARDING [initandlisten] Marking collection local.replset.minvalid as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.096-0400 c20023| 2019-07-25T18:24:57.096-0400 I STORAGE [initandlisten] createCollection: local.replset.election with generated UUID: 33c7a131-0d93-4e1c-94c1-0ae885978575 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.147-0400 c20023| 2019-07-25T18:24:57.147-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.148-0400 c20023| 2019-07-25T18:24:57.148-0400 I SHARDING [initandlisten] Marking collection local.replset.election as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.149-0400 c20023| 2019-07-25T18:24:57.149-0400 I REPL [initandlisten] Did not find local initialized voted for document at startup.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.149-0400 c20023| 2019-07-25T18:24:57.149-0400 I REPL [initandlisten] Did not find local Rollback ID document at startup. Creating one.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.150-0400 c20023| 2019-07-25T18:24:57.149-0400 I STORAGE [initandlisten] createCollection: local.system.rollback.id with generated UUID: 2ca40ffd-cdea-4e90-a3d1-9170709b257f and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.197-0400 c20023| 2019-07-25T18:24:57.196-0400 I INDEX [initandlisten] index build: done building index _id_ on ns local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.198-0400 c20023| 2019-07-25T18:24:57.198-0400 I SHARDING [initandlisten] Marking collection local.system.rollback.id as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.198-0400 c20023| 2019-07-25T18:24:57.198-0400 I REPL [initandlisten] Initialized the rollback ID to 1
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.198-0400 c20023| 2019-07-25T18:24:57.198-0400 I REPL [initandlisten] Did not find local replica set configuration document at startup; NoMatchingDocument: Did not find replica set configuration document in local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.199-0400 c20023| 2019-07-25T18:24:57.199-0400 I NETWORK [initandlisten] Listening on /tmp/mongodb-20023.sock
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.199-0400 c20023| 2019-07-25T18:24:57.199-0400 I NETWORK [initandlisten] Listening on 0.0.0.0
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.199-0400 c20023| 2019-07-25T18:24:57.199-0400 I NETWORK [initandlisten] waiting for connections on port 20023
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.432-0400 c20023| 2019-07-25T18:24:57.432-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49468 #1 (1 connection now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.433-0400 c20023| 2019-07-25T18:24:57.433-0400 I NETWORK [conn1] received client metadata from 127.0.0.1:49468 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.449-0400 [
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.449-0400 connection to Jasons-MacBook-Pro.local:20021,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.449-0400 connection to Jasons-MacBook-Pro.local:20022,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.449-0400 connection to Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.449-0400 ]
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.453-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.453-0400 "replSetInitiate" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.453-0400 "_id" : "configsvr_failover_repro-configRS",
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.453-0400 "protocolVersion" : 1,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.453-0400 "members" : [
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.453-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.453-0400 "_id" : 0,
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.453-0400 "host" : "Jasons-MacBook-Pro.local:20021"
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.453-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.453-0400 ],
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.453-0400 "configsvr" : true
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.454-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.454-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.456-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.456-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.457-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.457-0400 [jsTest] New session started with sessionID: { "id" : UUID("9a454f36-648a-450a-9eb9-b85dea2fbf25") } and options: { "causalConsistency" : false }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.457-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.457-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.457-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.462-0400 c20021| 2019-07-25T18:24:57.462-0400 I REPL [conn1] replSetInitiate admin command received from client
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.463-0400 c20021| 2019-07-25T18:24:57.463-0400 I REPL [conn1] replSetInitiate config object with 1 members parses ok
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.463-0400 c20021| 2019-07-25T18:24:57.463-0400 I REPL [conn1] ******
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.463-0400 c20021| 2019-07-25T18:24:57.463-0400 I REPL [conn1] creating replication oplog of size: 40MB...
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.463-0400 c20021| 2019-07-25T18:24:57.463-0400 I STORAGE [conn1] createCollection: local.oplog.rs with generated UUID: 4897c211-3daa-4388-9897-4f5bcb131a4b and options: { capped: true, size: 41943040, autoIndexId: false }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.500-0400 c20021| 2019-07-25T18:24:57.500-0400 I STORAGE [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.500-0400 c20021| 2019-07-25T18:24:57.500-0400 I STORAGE [conn1] Scanning the oplog to determine where to place markers for truncation
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.703-0400 c20021| 2019-07-25T18:24:57.703-0400 I REPL [conn1] ******
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.704-0400 c20021| 2019-07-25T18:24:57.704-0400 I STORAGE [conn1] createCollection: local.system.replset with generated UUID: dd5952c4-3c13-4ad9-bf30-e86a595e836e and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.754-0400 c20021| 2019-07-25T18:24:57.754-0400 I INDEX [conn1] index build: done building index _id_ on ns local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.766-0400 c20021| 2019-07-25T18:24:57.766-0400 I STORAGE [conn1] createCollection: admin.system.version with provided UUID: 9eb89103-fb3d-4038-bb54-c402876ca16e and options: { uuid: UUID("9eb89103-fb3d-4038-bb54-c402876ca16e") }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.816-0400 c20021| 2019-07-25T18:24:57.815-0400 I INDEX [conn1] index build: done building index _id_ on ns admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.816-0400 c20021| 2019-07-25T18:24:57.816-0400 I COMMAND [conn1] setting featureCompatibilityVersion to 4.2
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.816-0400 c20021| 2019-07-25T18:24:57.816-0400 I NETWORK [conn1] Skip closing connection for connection # 1
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.817-0400 c20021| 2019-07-25T18:24:57.817-0400 I REPL [conn1] New replica set config in use: { _id: "configsvr_failover_repro-configRS", version: 1, configsvr: true, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "Jasons-MacBook-Pro.local:20021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5d3a2c399cfa09cae7a79750') } }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.818-0400 c20021| 2019-07-25T18:24:57.817-0400 I REPL [conn1] This node is Jasons-MacBook-Pro.local:20021 in the config
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.818-0400 c20021| 2019-07-25T18:24:57.817-0400 I REPL [conn1] transition to STARTUP2 from STARTUP
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.818-0400 c20021| 2019-07-25T18:24:57.817-0400 I REPL [conn1] Starting replication storage threads
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.820-0400 c20021| 2019-07-25T18:24:57.819-0400 I REPL [conn1] transition to RECOVERING from STARTUP2
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.820-0400 c20021| 2019-07-25T18:24:57.820-0400 I REPL [conn1] Starting replication fetcher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.820-0400 c20021| 2019-07-25T18:24:57.820-0400 I REPL [conn1] Starting replication applier thread
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.821-0400 c20021| 2019-07-25T18:24:57.820-0400 I REPL [conn1] Starting replication reporter thread
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.822-0400 c20021| 2019-07-25T18:24:57.821-0400 I REPL [rsSync-0] Starting oplog application
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.822-0400 c20021| 2019-07-25T18:24:57.821-0400 I COMMAND [conn1] command local.system.replset appName: "MongoDB Shell" command: replSetInitiate { replSetInitiate: { _id: "configsvr_failover_repro-configRS", protocolVersion: 1.0, members: [ { _id: 0.0, host: "Jasons-MacBook-Pro.local:20021" } ], configsvr: true }, lsid: { id: UUID("9a454f36-648a-450a-9eb9-b85dea2fbf25") }, $db: "admin" } numYields:0 reslen:252 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 10 } }, ReplicationStateTransition: { acquireCount: { w: 10 } }, Global: { acquireCount: { r: 3, w: 5, W: 2 } }, Database: { acquireCount: { r: 2, w: 2, W: 3 } }, Collection: { acquireCount: { r: 2, w: 2 } }, Mutex: { acquireCount: { r: 9 } }, oplog: { acquireCount: { r: 1, w: 1 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 358ms
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.824-0400 c20021| 2019-07-25T18:24:57.824-0400 I REPL [rsSync-0] transition to SECONDARY from RECOVERING
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.824-0400 c20021| 2019-07-25T18:24:57.824-0400 I ELECTION [rsSync-0] conducting a dry run election to see if we could be elected. current term: 0
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.824-0400 c20021| 2019-07-25T18:24:57.824-0400 I ELECTION [replexec-0] dry election run succeeded, running for election in term 1
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.833-0400 c20021| 2019-07-25T18:24:57.833-0400 I ELECTION [replexec-0] election succeeded, assuming primary role in term 1
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.834-0400 c20021| 2019-07-25T18:24:57.833-0400 I REPL [replexec-0] transition to PRIMARY from SECONDARY
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.834-0400 c20021| 2019-07-25T18:24:57.833-0400 I REPL [replexec-0] Resetting sync source to empty, which was :27017
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.834-0400 c20021| 2019-07-25T18:24:57.833-0400 I REPL [replexec-0] Entering primary catch-up mode.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.834-0400 c20021| 2019-07-25T18:24:57.833-0400 I REPL [replexec-0] Exited primary catch-up mode.
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.834-0400 c20021| 2019-07-25T18:24:57.833-0400 I REPL [replexec-0] Stopping replication producer
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.839-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.839-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.839-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.839-0400 [jsTest] New session started with sessionID: { "id" : UUID("44321189-0df7-4024-bb83-362f10fea9c6") } and options: { "causalConsistency" : false }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.839-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.839-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.839-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.850-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.850-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.850-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.850-0400 [jsTest] New session started with sessionID: { "id" : UUID("b5665848-5267-4c4e-b6b4-e996ea0e3245") } and options: { "causalConsistency" : false }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.850-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.850-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:57.850-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:24:58.829-0400 c20021| 2019-07-25T18:24:58.829-0400 I REPL [RstlKillOpThread] Starting to kill user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:24:58.830-0400 c20021| 2019-07-25T18:24:58.829-0400 I REPL [RstlKillOpThread] Stopped killing user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:24:59.836-0400 c20021| 2019-07-25T18:24:59.836-0400 I REPL [RstlKillOpThread] Starting to kill user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:24:59.836-0400 c20021| 2019-07-25T18:24:59.836-0400 I REPL [RstlKillOpThread] Stopped killing user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:24:59.837-0400 c20021| 2019-07-25T18:24:59.837-0400 I SHARDING [rsSync-0] Marking collection config.transactions as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:24:59.838-0400 c20021| 2019-07-25T18:24:59.838-0400 I STORAGE [rsSync-0] createCollection: config.transactions with generated UUID: 2ff387c1-0957-46b7-b825-992aba2ed063 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:24:59.894-0400 c20021| 2019-07-25T18:24:59.894-0400 I INDEX [rsSync-0] index build: done building index _id_ on ns config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:24:59.897-0400 c20021| 2019-07-25T18:24:59.897-0400 I STORAGE [rsSync-0] createCollection: config.chunks with provided UUID: 63c02d1c-5493-42cd-9595-17fe7298418c and options: { uuid: UUID("63c02d1c-5493-42cd-9595-17fe7298418c") }
[js_test:configsvr_failover_repro] 2019-07-25T18:24:59.945-0400 c20021| 2019-07-25T18:24:59.944-0400 I INDEX [rsSync-0] index build: done building index _id_ on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:24:59.947-0400 c20021| 2019-07-25T18:24:59.946-0400 I INDEX [rsSync-0] Registering index build: b9f96242-02d1-409f-84df-aeee92c52117
[js_test:configsvr_failover_repro] 2019-07-25T18:24:59.949-0400 c20021| 2019-07-25T18:24:59.948-0400 I SHARDING [rsSync-0] Marking collection config.chunks as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.025-0400 c20021| 2019-07-25T18:25:00.025-0400 I INDEX [rsSync-0] index build: starting on config.chunks properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.025-0400 c20021| 2019-07-25T18:25:00.025-0400 I INDEX [rsSync-0] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.025-0400 c20021| 2019-07-25T18:25:00.025-0400 I STORAGE [rsSync-0] Index build initialized: b9f96242-02d1-409f-84df-aeee92c52117: config.chunks (63c02d1c-5493-42cd-9595-17fe7298418c ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.025-0400 c20021| 2019-07-25T18:25:00.025-0400 I STORAGE [rsSync-0] Running index build on current thread because we are transitioning between replication states: b9f96242-02d1-409f-84df-aeee92c52117
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.026-0400 c20021| 2019-07-25T18:25:00.026-0400 I INDEX [rsSync-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.026-0400 c20021| 2019-07-25T18:25:00.026-0400 I INDEX [rsSync-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.043-0400 c20021| 2019-07-25T18:25:00.042-0400 I INDEX [rsSync-0] index build: done building index ns_1_min_1 on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.068-0400 c20021| 2019-07-25T18:25:00.068-0400 I STORAGE [rsSync-0] Index build completed successfully: b9f96242-02d1-409f-84df-aeee92c52117: config.chunks ( 63c02d1c-5493-42cd-9595-17fe7298418c ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.068-0400 c20021| 2019-07-25T18:25:00.068-0400 I INDEX [rsSync-0] Waiting for index build to complete: b9f96242-02d1-409f-84df-aeee92c52117
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.068-0400 c20021| 2019-07-25T18:25:00.068-0400 I INDEX [rsSync-0] Index build completed: b9f96242-02d1-409f-84df-aeee92c52117
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.068-0400 c20021| 2019-07-25T18:25:00.068-0400 I COMMAND [rsSync-0] command config.chunks command: createIndexes { createIndexes: "chunks", indexes: [ { ns: "config.chunks", v: 2, name: "ns_1_min_1", key: { ns: 1, min: 1 }, unique: true } ], $db: "config" } numYields:0 reslen:366 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 5 } }, Global: { acquireCount: { r: 1, w: 4 } }, Database: { acquireCount: { r: 1, w: 4 } }, Collection: { acquireCount: { r: 4, w: 1, R: 1, W: 4 } }, Mutex: { acquireCount: { r: 6 } } } storage:{} protocol:op_msg 171ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.069-0400 c20021| 2019-07-25T18:25:00.069-0400 I INDEX [rsSync-0] Registering index build: aadf60f7-8c9d-4b13-90aa-430921f54030
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.151-0400 c20021| 2019-07-25T18:25:00.151-0400 I INDEX [rsSync-0] index build: starting on config.chunks properties: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.152-0400 c20021| 2019-07-25T18:25:00.151-0400 I INDEX [rsSync-0] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.152-0400 c20021| 2019-07-25T18:25:00.152-0400 I STORAGE [rsSync-0] Index build initialized: aadf60f7-8c9d-4b13-90aa-430921f54030: config.chunks (63c02d1c-5493-42cd-9595-17fe7298418c ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.152-0400 c20021| 2019-07-25T18:25:00.152-0400 I STORAGE [rsSync-0] Running index build on current thread because we are transitioning between replication states: aadf60f7-8c9d-4b13-90aa-430921f54030
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.152-0400 c20021| 2019-07-25T18:25:00.152-0400 I INDEX [rsSync-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.153-0400 c20021| 2019-07-25T18:25:00.153-0400 I INDEX [rsSync-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.166-0400 c20021| 2019-07-25T18:25:00.166-0400 I INDEX [rsSync-0] index build: done building index ns_1_shard_1_min_1 on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.198-0400 c20021| 2019-07-25T18:25:00.198-0400 I STORAGE [rsSync-0] Index build completed successfully: aadf60f7-8c9d-4b13-90aa-430921f54030: config.chunks ( 63c02d1c-5493-42cd-9595-17fe7298418c ). Index specs built: 1. Indexes in catalog before build: 2. Indexes in catalog after build: 3
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.198-0400 c20021| 2019-07-25T18:25:00.198-0400 I INDEX [rsSync-0] Waiting for index build to complete: aadf60f7-8c9d-4b13-90aa-430921f54030
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.198-0400 c20021| 2019-07-25T18:25:00.198-0400 I INDEX [rsSync-0] Index build completed: aadf60f7-8c9d-4b13-90aa-430921f54030
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.198-0400 c20021| 2019-07-25T18:25:00.198-0400 I COMMAND [rsSync-0] command config.chunks command: createIndexes { createIndexes: "chunks", indexes: [ { ns: "config.chunks", v: 2, name: "ns_1_shard_1_min_1", key: { ns: 1, shard: 1, min: 1 }, unique: true } ], $db: "config" } numYields:0 reslen:366 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 5 } }, Global: { acquireCount: { r: 1, w: 4 } }, Database: { acquireCount: { r: 1, w: 4 } }, Collection: { acquireCount: { r: 2, w: 1, R: 1, W: 4 } }, Mutex: { acquireCount: { r: 6 } } } storage:{} protocol:op_msg 129ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.200-0400 c20021| 2019-07-25T18:25:00.200-0400 I INDEX [rsSync-0] Registering index build: 33c95270-bb39-4c4e-8f67-a0d7b0fa39ff
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.288-0400 c20021| 2019-07-25T18:25:00.288-0400 I INDEX [rsSync-0] index build: starting on config.chunks properties: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.288-0400 c20021| 2019-07-25T18:25:00.288-0400 I INDEX [rsSync-0] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.289-0400 c20021| 2019-07-25T18:25:00.289-0400 I STORAGE [rsSync-0] Index build initialized: 33c95270-bb39-4c4e-8f67-a0d7b0fa39ff: config.chunks (63c02d1c-5493-42cd-9595-17fe7298418c ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.289-0400 c20021| 2019-07-25T18:25:00.289-0400 I STORAGE [rsSync-0] Running index build on current thread because we are transitioning between replication states: 33c95270-bb39-4c4e-8f67-a0d7b0fa39ff
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.289-0400 c20021| 2019-07-25T18:25:00.289-0400 I INDEX [rsSync-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.292-0400 c20021| 2019-07-25T18:25:00.291-0400 I INDEX [rsSync-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.302-0400 c20021| 2019-07-25T18:25:00.302-0400 I INDEX [rsSync-0] index build: done building index ns_1_lastmod_1 on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.336-0400 c20021| 2019-07-25T18:25:00.336-0400 I STORAGE [rsSync-0] Index build completed successfully: 33c95270-bb39-4c4e-8f67-a0d7b0fa39ff: config.chunks ( 63c02d1c-5493-42cd-9595-17fe7298418c ). Index specs built: 1. Indexes in catalog before build: 3. Indexes in catalog after build: 4
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.337-0400 c20021| 2019-07-25T18:25:00.336-0400 I INDEX [rsSync-0] Waiting for index build to complete: 33c95270-bb39-4c4e-8f67-a0d7b0fa39ff
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.337-0400 c20021| 2019-07-25T18:25:00.336-0400 I INDEX [rsSync-0] Index build completed: 33c95270-bb39-4c4e-8f67-a0d7b0fa39ff
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.337-0400 c20021| 2019-07-25T18:25:00.337-0400 I COMMAND [rsSync-0] command config.chunks command: createIndexes { createIndexes: "chunks", indexes: [ { ns: "config.chunks", v: 2, name: "ns_1_lastmod_1", key: { ns: 1, lastmod: 1 }, unique: true } ], $db: "config" } numYields:0 reslen:366 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 5 } }, Global: { acquireCount: { r: 1, w: 4 } }, Database: { acquireCount: { r: 1, w: 4 } }, Collection: { acquireCount: { r: 2, w: 1, R: 1, W: 4 } }, Mutex: { acquireCount: { r: 6 } } } storage:{} protocol:op_msg 137ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.339-0400 c20021| 2019-07-25T18:25:00.339-0400 I STORAGE [rsSync-0] createCollection: config.migrations with provided UUID: 91fc80cd-1974-4835-96e0-c0c276b056ee and options: { uuid: UUID("91fc80cd-1974-4835-96e0-c0c276b056ee") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.385-0400 c20021| 2019-07-25T18:25:00.385-0400 I INDEX [rsSync-0] index build: done building index _id_ on ns config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.386-0400 c20021| 2019-07-25T18:25:00.386-0400 I INDEX [rsSync-0] Registering index build: e91b14d3-aeab-4e75-bc03-17217ec78c6e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.387-0400 c20021| 2019-07-25T18:25:00.386-0400 I SHARDING [rsSync-0] Marking collection config.migrations as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.472-0400 c20021| 2019-07-25T18:25:00.472-0400 I INDEX [rsSync-0] index build: starting on config.migrations properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.472-0400 c20021| 2019-07-25T18:25:00.472-0400 I INDEX [rsSync-0] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.472-0400 c20021| 2019-07-25T18:25:00.472-0400 I STORAGE [rsSync-0] Index build initialized: e91b14d3-aeab-4e75-bc03-17217ec78c6e: config.migrations (91fc80cd-1974-4835-96e0-c0c276b056ee ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.473-0400 c20021| 2019-07-25T18:25:00.472-0400 I STORAGE [rsSync-0] Running index build on current thread because we are transitioning between replication states: e91b14d3-aeab-4e75-bc03-17217ec78c6e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.473-0400 c20021| 2019-07-25T18:25:00.473-0400 I INDEX [rsSync-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.474-0400 c20021| 2019-07-25T18:25:00.473-0400 I INDEX [rsSync-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.502-0400 c20021| 2019-07-25T18:25:00.502-0400 I INDEX [rsSync-0] index build: done building index ns_1_min_1 on ns config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.526-0400 c20021| 2019-07-25T18:25:00.526-0400 I STORAGE [rsSync-0] Index build completed successfully: e91b14d3-aeab-4e75-bc03-17217ec78c6e: config.migrations ( 91fc80cd-1974-4835-96e0-c0c276b056ee ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.526-0400 c20021| 2019-07-25T18:25:00.526-0400 I INDEX [rsSync-0] Waiting for index build to complete: e91b14d3-aeab-4e75-bc03-17217ec78c6e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.526-0400 c20021| 2019-07-25T18:25:00.526-0400 I INDEX [rsSync-0] Index build completed: e91b14d3-aeab-4e75-bc03-17217ec78c6e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.526-0400 c20021| 2019-07-25T18:25:00.526-0400 I COMMAND [rsSync-0] command config.migrations command: createIndexes { createIndexes: "migrations", indexes: [ { ns: "config.migrations", v: 2, name: "ns_1_min_1", key: { ns: 1, min: 1 }, unique: true } ], $db: "config" } numYields:0 reslen:366 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 5 } }, Global: { acquireCount: { r: 1, w: 4 } }, Database: { acquireCount: { r: 1, w: 4 } }, Collection: { acquireCount: { r: 4, w: 1, R: 1, W: 4 } }, Mutex: { acquireCount: { r: 6 } } } storage:{} protocol:op_msg 188ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.527-0400 c20021| 2019-07-25T18:25:00.527-0400 I STORAGE [rsSync-0] createCollection: config.shards with provided UUID: 9dc58f2f-04de-441a-b6d7-36d58adac3fa and options: { uuid: UUID("9dc58f2f-04de-441a-b6d7-36d58adac3fa") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.573-0400 c20021| 2019-07-25T18:25:00.573-0400 I INDEX [rsSync-0] index build: done building index _id_ on ns config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.574-0400 c20021| 2019-07-25T18:25:00.574-0400 I INDEX [rsSync-0] Registering index build: 0f774b46-00ab-4b2e-9007-ddd82805876b
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.574-0400 c20021| 2019-07-25T18:25:00.574-0400 I SHARDING [rsSync-0] Marking collection config.shards as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.656-0400 c20021| 2019-07-25T18:25:00.655-0400 I INDEX [rsSync-0] index build: starting on config.shards properties: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.656-0400 c20021| 2019-07-25T18:25:00.655-0400 I INDEX [rsSync-0] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.656-0400 c20021| 2019-07-25T18:25:00.656-0400 I STORAGE [rsSync-0] Index build initialized: 0f774b46-00ab-4b2e-9007-ddd82805876b: config.shards (9dc58f2f-04de-441a-b6d7-36d58adac3fa ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.656-0400 c20021| 2019-07-25T18:25:00.656-0400 I STORAGE [rsSync-0] Running index build on current thread because we are transitioning between replication states: 0f774b46-00ab-4b2e-9007-ddd82805876b
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.656-0400 c20021| 2019-07-25T18:25:00.656-0400 I INDEX [rsSync-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.657-0400 c20021| 2019-07-25T18:25:00.656-0400 I INDEX [rsSync-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.671-0400 c20021| 2019-07-25T18:25:00.671-0400 I INDEX [rsSync-0] index build: done building index host_1 on ns config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.706-0400 c20021| 2019-07-25T18:25:00.706-0400 I STORAGE [rsSync-0] Index build completed successfully: 0f774b46-00ab-4b2e-9007-ddd82805876b: config.shards ( 9dc58f2f-04de-441a-b6d7-36d58adac3fa ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.706-0400 c20021| 2019-07-25T18:25:00.706-0400 I INDEX [rsSync-0] Waiting for index build to complete: 0f774b46-00ab-4b2e-9007-ddd82805876b
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.707-0400 c20021| 2019-07-25T18:25:00.706-0400 I INDEX [rsSync-0] Index build completed: 0f774b46-00ab-4b2e-9007-ddd82805876b
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.707-0400 c20021| 2019-07-25T18:25:00.707-0400 I COMMAND [rsSync-0] command config.shards command: createIndexes { createIndexes: "shards", indexes: [ { ns: "config.shards", v: 2, name: "host_1", key: { host: 1 }, unique: true } ], $db: "config" } numYields:0 reslen:366 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 5 } }, Global: { acquireCount: { r: 1, w: 4 } }, Database: { acquireCount: { r: 1, w: 4 } }, Collection: { acquireCount: { r: 4, w: 1, R: 1, W: 4 } }, Mutex: { acquireCount: { r: 6 } } } storage:{} protocol:op_msg 179ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.707-0400 c20021| 2019-07-25T18:25:00.707-0400 I STORAGE [rsSync-0] createCollection: config.locks with provided UUID: dd929b42-c13c-4682-8066-ef80c2666228 and options: { uuid: UUID("dd929b42-c13c-4682-8066-ef80c2666228") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.771-0400 c20021| 2019-07-25T18:25:00.770-0400 I INDEX [rsSync-0] index build: done building index _id_ on ns config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.771-0400 c20021| 2019-07-25T18:25:00.771-0400 I INDEX [rsSync-0] Registering index build: 8695ca2b-8cc7-4d01-8a33-a124c8cf0396
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.822-0400 c20021| 2019-07-25T18:25:00.822-0400 I INDEX [rsSync-0] index build: starting on config.locks properties: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.822-0400 c20021| 2019-07-25T18:25:00.822-0400 I INDEX [rsSync-0] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.822-0400 c20021| 2019-07-25T18:25:00.822-0400 I STORAGE [rsSync-0] Index build initialized: 8695ca2b-8cc7-4d01-8a33-a124c8cf0396: config.locks (dd929b42-c13c-4682-8066-ef80c2666228 ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.823-0400 c20021| 2019-07-25T18:25:00.822-0400 I STORAGE [rsSync-0] Running index build on current thread because we are transitioning between replication states: 8695ca2b-8cc7-4d01-8a33-a124c8cf0396
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.823-0400 c20021| 2019-07-25T18:25:00.823-0400 I INDEX [rsSync-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.823-0400 c20021| 2019-07-25T18:25:00.823-0400 I INDEX [rsSync-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.838-0400 c20021| 2019-07-25T18:25:00.838-0400 I INDEX [rsSync-0] index build: done building index ts_1 on ns config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.850-0400 c20021| 2019-07-25T18:25:00.850-0400 I STORAGE [rsSync-0] Index build completed successfully: 8695ca2b-8cc7-4d01-8a33-a124c8cf0396: config.locks ( dd929b42-c13c-4682-8066-ef80c2666228 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.850-0400 c20021| 2019-07-25T18:25:00.850-0400 I INDEX [rsSync-0] Waiting for index build to complete: 8695ca2b-8cc7-4d01-8a33-a124c8cf0396
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.850-0400 c20021| 2019-07-25T18:25:00.850-0400 I INDEX [rsSync-0] Index build completed: 8695ca2b-8cc7-4d01-8a33-a124c8cf0396
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.851-0400 c20021| 2019-07-25T18:25:00.850-0400 I COMMAND [rsSync-0] command config.locks command: createIndexes { createIndexes: "locks", indexes: [ { ns: "config.locks", v: 2, name: "ts_1", key: { ts: 1 }, unique: false } ], $db: "config" } numYields:0 reslen:366 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 5 } }, Global: { acquireCount: { r: 1, w: 4 } }, Database: { acquireCount: { r: 1, w: 4 } }, Collection: { acquireCount: { r: 4, w: 1, R: 1, W: 4 } }, Mutex: { acquireCount: { r: 6 } } } storage:{} protocol:op_msg 143ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.851-0400 c20021| 2019-07-25T18:25:00.851-0400 I INDEX [rsSync-0] Registering index build: 85611439-6f6b-41a0-98ff-a1e21bd31560
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.900-0400 c20021| 2019-07-25T18:25:00.899-0400 I INDEX [rsSync-0] index build: starting on config.locks properties: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.900-0400 c20021| 2019-07-25T18:25:00.899-0400 I INDEX [rsSync-0] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.900-0400 c20021| 2019-07-25T18:25:00.900-0400 I STORAGE [rsSync-0] Index build initialized: 85611439-6f6b-41a0-98ff-a1e21bd31560: config.locks (dd929b42-c13c-4682-8066-ef80c2666228 ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.901-0400 c20021| 2019-07-25T18:25:00.900-0400 I STORAGE [rsSync-0] Running index build on current thread because we are transitioning between replication states: 85611439-6f6b-41a0-98ff-a1e21bd31560
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.901-0400 c20021| 2019-07-25T18:25:00.901-0400 I INDEX [rsSync-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.903-0400 c20021| 2019-07-25T18:25:00.903-0400 I INDEX [rsSync-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.917-0400 c20021| 2019-07-25T18:25:00.917-0400 I INDEX [rsSync-0] index build: done building index state_1_process_1 on ns config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.928-0400 c20021| 2019-07-25T18:25:00.928-0400 I STORAGE [rsSync-0] Index build completed successfully: 85611439-6f6b-41a0-98ff-a1e21bd31560: config.locks ( dd929b42-c13c-4682-8066-ef80c2666228 ). Index specs built: 1. Indexes in catalog before build: 2. Indexes in catalog after build: 3
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.928-0400 c20021| 2019-07-25T18:25:00.928-0400 I INDEX [rsSync-0] Waiting for index build to complete: 85611439-6f6b-41a0-98ff-a1e21bd31560
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.929-0400 c20021| 2019-07-25T18:25:00.928-0400 I INDEX [rsSync-0] Index build completed: 85611439-6f6b-41a0-98ff-a1e21bd31560
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.929-0400 c20021| 2019-07-25T18:25:00.929-0400 I STORAGE [rsSync-0] createCollection: config.lockpings with provided UUID: dd0672e8-19c6-432b-9b6a-d21b02c0bf6e and options: { uuid: UUID("dd0672e8-19c6-432b-9b6a-d21b02c0bf6e") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.977-0400 c20021| 2019-07-25T18:25:00.977-0400 I INDEX [rsSync-0] index build: done building index _id_ on ns config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:00.977-0400 c20021| 2019-07-25T18:25:00.977-0400 I INDEX [rsSync-0] Registering index build: 2ff72ab2-3854-4ac0-b1fc-251c8cc209a6
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.038-0400 c20021| 2019-07-25T18:25:01.038-0400 I INDEX [rsSync-0] index build: starting on config.lockpings properties: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.038-0400 c20021| 2019-07-25T18:25:01.038-0400 I INDEX [rsSync-0] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.038-0400 c20021| 2019-07-25T18:25:01.038-0400 I STORAGE [rsSync-0] Index build initialized: 2ff72ab2-3854-4ac0-b1fc-251c8cc209a6: config.lockpings (dd0672e8-19c6-432b-9b6a-d21b02c0bf6e ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.038-0400 c20021| 2019-07-25T18:25:01.038-0400 I STORAGE [rsSync-0] Running index build on current thread because we are transitioning between replication states: 2ff72ab2-3854-4ac0-b1fc-251c8cc209a6
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.039-0400 c20021| 2019-07-25T18:25:01.039-0400 I INDEX [rsSync-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.041-0400 c20021| 2019-07-25T18:25:01.041-0400 I INDEX [rsSync-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.055-0400 c20021| 2019-07-25T18:25:01.055-0400 I INDEX [rsSync-0] index build: done building index ping_1 on ns config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.076-0400 c20021| 2019-07-25T18:25:01.076-0400 I STORAGE [rsSync-0] Index build completed successfully: 2ff72ab2-3854-4ac0-b1fc-251c8cc209a6: config.lockpings ( dd0672e8-19c6-432b-9b6a-d21b02c0bf6e ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.076-0400 c20021| 2019-07-25T18:25:01.076-0400 I INDEX [rsSync-0] Waiting for index build to complete: 2ff72ab2-3854-4ac0-b1fc-251c8cc209a6
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.076-0400 c20021| 2019-07-25T18:25:01.076-0400 I INDEX [rsSync-0] Index build completed: 2ff72ab2-3854-4ac0-b1fc-251c8cc209a6
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.077-0400 c20021| 2019-07-25T18:25:01.077-0400 I COMMAND [rsSync-0] command config.lockpings command: createIndexes { createIndexes: "lockpings", indexes: [ { ns: "config.lockpings", v: 2, name: "ping_1", key: { ping: 1 }, unique: false } ], $db: "config" } numYields:0 reslen:366 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 5 } }, Global: { acquireCount: { r: 1, w: 4 } }, Database: { acquireCount: { r: 1, w: 4 } }, Collection: { acquireCount: { r: 4, w: 1, R: 1, W: 4 } }, Mutex: { acquireCount: { r: 6 } } } storage:{} protocol:op_msg 147ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.081-0400 c20021| 2019-07-25T18:25:01.081-0400 I STORAGE [rsSync-0] createCollection: config.tags with provided UUID: f1867b25-f9fb-445f-8bca-c3b4a21b38ee and options: { uuid: UUID("f1867b25-f9fb-445f-8bca-c3b4a21b38ee") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.131-0400 c20021| 2019-07-25T18:25:01.131-0400 I INDEX [rsSync-0] index build: done building index _id_ on ns config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.132-0400 c20021| 2019-07-25T18:25:01.132-0400 I INDEX [rsSync-0] Registering index build: 28415f44-75ae-4577-852e-d93485f61552
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.132-0400 c20021| 2019-07-25T18:25:01.132-0400 I SHARDING [rsSync-0] Marking collection config.tags as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.219-0400 c20021| 2019-07-25T18:25:01.219-0400 I INDEX [rsSync-0] index build: starting on config.tags properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.220-0400 c20021| 2019-07-25T18:25:01.219-0400 I INDEX [rsSync-0] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.220-0400 c20021| 2019-07-25T18:25:01.220-0400 I STORAGE [rsSync-0] Index build initialized: 28415f44-75ae-4577-852e-d93485f61552: config.tags (f1867b25-f9fb-445f-8bca-c3b4a21b38ee ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.220-0400 c20021| 2019-07-25T18:25:01.220-0400 I STORAGE [rsSync-0] Running index build on current thread because we are transitioning between replication states: 28415f44-75ae-4577-852e-d93485f61552
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.220-0400 c20021| 2019-07-25T18:25:01.220-0400 I INDEX [rsSync-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.221-0400 c20021| 2019-07-25T18:25:01.221-0400 I INDEX [rsSync-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.235-0400 c20021| 2019-07-25T18:25:01.235-0400 I INDEX [rsSync-0] index build: done building index ns_1_min_1 on ns config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.258-0400 c20021| 2019-07-25T18:25:01.258-0400 I STORAGE [rsSync-0] Index build completed successfully: 28415f44-75ae-4577-852e-d93485f61552: config.tags ( f1867b25-f9fb-445f-8bca-c3b4a21b38ee ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.258-0400 c20021| 2019-07-25T18:25:01.258-0400 I INDEX [rsSync-0] Waiting for index build to complete: 28415f44-75ae-4577-852e-d93485f61552
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.258-0400 c20021| 2019-07-25T18:25:01.258-0400 I INDEX [rsSync-0] Index build completed: 28415f44-75ae-4577-852e-d93485f61552
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.259-0400 c20021| 2019-07-25T18:25:01.259-0400 I COMMAND [rsSync-0] command config.tags command: createIndexes { createIndexes: "tags", indexes: [ { ns: "config.tags", v: 2, name: "ns_1_min_1", key: { ns: 1, min: 1 }, unique: true } ], $db: "config" } numYields:0 reslen:366 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 5 } }, Global: { acquireCount: { r: 1, w: 4 } }, Database: { acquireCount: { r: 1, w: 4 } }, Collection: { acquireCount: { r: 4, w: 1, R: 1, W: 4 } }, Mutex: { acquireCount: { r: 6 } } } storage:{} protocol:op_msg 178ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.260-0400 c20021| 2019-07-25T18:25:01.260-0400 I INDEX [rsSync-0] Registering index build: f0273b8a-a359-49ba-bd00-1c040408ee8e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.308-0400 c20021| 2019-07-25T18:25:01.308-0400 I INDEX [rsSync-0] index build: starting on config.tags properties: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.308-0400 c20021| 2019-07-25T18:25:01.308-0400 I INDEX [rsSync-0] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.308-0400 c20021| 2019-07-25T18:25:01.308-0400 I STORAGE [rsSync-0] Index build initialized: f0273b8a-a359-49ba-bd00-1c040408ee8e: config.tags (f1867b25-f9fb-445f-8bca-c3b4a21b38ee ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.309-0400 c20021| 2019-07-25T18:25:01.308-0400 I STORAGE [rsSync-0] Running index build on current thread because we are transitioning between replication states: f0273b8a-a359-49ba-bd00-1c040408ee8e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.309-0400 c20021| 2019-07-25T18:25:01.309-0400 I INDEX [rsSync-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.312-0400 c20021| 2019-07-25T18:25:01.311-0400 I INDEX [rsSync-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.324-0400 c20021| 2019-07-25T18:25:01.324-0400 I INDEX [rsSync-0] index build: done building index ns_1_tag_1 on ns config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.347-0400 c20021| 2019-07-25T18:25:01.347-0400 I STORAGE [rsSync-0] Index build completed successfully: f0273b8a-a359-49ba-bd00-1c040408ee8e: config.tags ( f1867b25-f9fb-445f-8bca-c3b4a21b38ee ). Index specs built: 1. Indexes in catalog before build: 2. Indexes in catalog after build: 3
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.347-0400 c20021| 2019-07-25T18:25:01.347-0400 I INDEX [rsSync-0] Waiting for index build to complete: f0273b8a-a359-49ba-bd00-1c040408ee8e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.347-0400 c20021| 2019-07-25T18:25:01.347-0400 I INDEX [rsSync-0] Index build completed: f0273b8a-a359-49ba-bd00-1c040408ee8e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.348-0400 c20021| 2019-07-25T18:25:01.348-0400 I SHARDING [rsSync-0] Marking collection config.version as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.349-0400 c20021| 2019-07-25T18:25:01.348-0400 I STORAGE [rsSync-0] createCollection: config.version with generated UUID: e2da88e1-afec-4a2a-9c9c-0b4b51073f63 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.411-0400 c20021| 2019-07-25T18:25:01.411-0400 I INDEX [rsSync-0] index build: done building index _id_ on ns config.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.414-0400 c20021| 2019-07-25T18:25:01.414-0400 I SHARDING [rsSync-0] Marking collection config.locks as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.416-0400 c20021| 2019-07-25T18:25:01.416-0400 I SHARDING [Balancer] CSRS balancer is starting
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.416-0400 c20021| 2019-07-25T18:25:01.416-0400 D3 TXN [TransactionCoordinator] Waiting for OpTime { ts: Timestamp(1564093501, 9), t: 1 } to become majority committed
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.417-0400 c20021| 2019-07-25T18:25:01.417-0400 I STORAGE [rsSync-0] Triggering the first stable checkpoint. Initial Data: Timestamp(1564093497, 1) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1564093501, 7)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.417-0400 c20021| 2019-07-25T18:25:01.417-0400 I REPL [rsSync-0] transition to primary complete; database writes are now permitted
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.419-0400 c20021| 2019-07-25T18:25:01.419-0400 I COMMAND [ftdc] serverStatus was very slow: { after basic: 0, after asserts: 0, after connections: 0, after electionMetrics: 0, after encryptionAtRest: 0, after extra_info: 0, after flowControl: 0, after globalLock: 0, after locks: 0, after logicalSessionRecordCache: 0, after network: 0, after opLatencies: 0, after opReadConcernCounters: 0, after opcounters: 0, after opcountersRepl: 0, after repl: 0, after security: 0, after shardingStatistics: 0, after storageEngine: 0, after trafficRecording: 0, after transactions: 0, after transportSecurity: 0, after twoPhaseCommitCoordinator: 0, after wiredTiger: 1417, at end: 1417 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.425-0400 c20021| 2019-07-25T18:25:01.424-0400 I SHARDING [Balancer] Marking collection config.settings as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.425-0400 c20021| 2019-07-25T18:25:01.424-0400 I SHARDING [TransactionCoordinator] Marking collection config.transaction_coordinators as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.425-0400 c20021| 2019-07-25T18:25:01.424-0400 I TXN [TransactionCoordinator] Need to resume coordinating commit for 0 transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.425-0400 c20021| 2019-07-25T18:25:01.425-0400 I TXN [TransactionCoordinator] Incoming coordinateCommit requests are now enabled
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.426-0400 c20021| 2019-07-25T18:25:01.425-0400 I SHARDING [monitoring-keys-for-HMAC] Marking collection admin.system.keys as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.426-0400 c20021| 2019-07-25T18:25:01.426-0400 I SHARDING [Balancer] CSRS balancer thread is recovering
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.427-0400 c20021| 2019-07-25T18:25:01.426-0400 I SHARDING [Balancer] CSRS balancer thread is recovered
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.429-0400 c20021| 2019-07-25T18:25:01.429-0400 I STORAGE [monitoring-keys-for-HMAC] createCollection: admin.system.keys with generated UUID: 7d5bfd11-2f9e-43fa-b296-05f895e4aea7 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.430-0400 c20021| 2019-07-25T18:25:01.430-0400 I SHARDING [Balancer] Marking collection config.collections as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.484-0400 c20021| 2019-07-25T18:25:01.483-0400 I INDEX [monitoring-keys-for-HMAC] index build: done building index _id_ on ns admin.system.keys
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.590-0400 Reconfiguring replica set to add in other nodes
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.593-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.593-0400 "replSetReconfig" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.593-0400 "_id" : "configsvr_failover_repro-configRS",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 "protocolVersion" : 1,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 "members" : [
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 "_id" : 0,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 "host" : "Jasons-MacBook-Pro.local:20021"
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 "_id" : 1,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 "host" : "Jasons-MacBook-Pro.local:20022"
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 "_id" : 2,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 "host" : "Jasons-MacBook-Pro.local:20023"
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 ],
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 "configsvr" : true,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 "settings" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 "version" : 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.594-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.599-0400 c20021| 2019-07-25T18:25:01.599-0400 I REPL [conn1] replSetReconfig admin command received from client; new config: { _id: "configsvr_failover_repro-configRS", protocolVersion: 1.0, members: [ { _id: 0.0, host: "Jasons-MacBook-Pro.local:20021" }, { _id: 1.0, host: "Jasons-MacBook-Pro.local:20022" }, { _id: 2.0, host: "Jasons-MacBook-Pro.local:20023" } ], configsvr: true, settings: {}, version: 2.0 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.601-0400 c20022| 2019-07-25T18:25:01.601-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49469 #2 (2 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.601-0400 c20022| 2019-07-25T18:25:01.601-0400 I NETWORK [conn2] end connection 127.0.0.1:49469 (1 connection now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.602-0400 c20023| 2019-07-25T18:25:01.602-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49470 #2 (2 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.602-0400 c20021| 2019-07-25T18:25:01.602-0400 I REPL [conn1] replSetReconfig config object with 3 members parses ok
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.602-0400 c20023| 2019-07-25T18:25:01.602-0400 I NETWORK [conn2] end connection 127.0.0.1:49470 (1 connection now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.603-0400 c20021| 2019-07-25T18:25:01.602-0400 I REPL [conn1] Scheduling remote command request for reconfig quorum check: RemoteCommand 1 -- target:Jasons-MacBook-Pro.local:20022 db:admin cmd:{ replSetHeartbeat: "configsvr_failover_repro-configRS", configVersion: 2, hbv: 1, from: "Jasons-MacBook-Pro.local:20021", fromId: 0, term: 1 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.603-0400 c20021| 2019-07-25T18:25:01.603-0400 I REPL [conn1] Scheduling remote command request for reconfig quorum check: RemoteCommand 2 -- target:Jasons-MacBook-Pro.local:20023 db:admin cmd:{ replSetHeartbeat: "configsvr_failover_repro-configRS", configVersion: 2, hbv: 1, from: "Jasons-MacBook-Pro.local:20021", fromId: 0, term: 1 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.603-0400 c20021| 2019-07-25T18:25:01.603-0400 I CONNPOOL [Replication] Connecting to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.603-0400 c20021| 2019-07-25T18:25:01.603-0400 I CONNPOOL [Replication] Connecting to Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.604-0400 c20022| 2019-07-25T18:25:01.604-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49471 #3 (2 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.605-0400 c20022| 2019-07-25T18:25:01.605-0400 I NETWORK [conn3] received client metadata from 127.0.0.1:49471 conn3: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.606-0400 c20023| 2019-07-25T18:25:01.605-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49472 #3 (2 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.606-0400 c20023| 2019-07-25T18:25:01.606-0400 I NETWORK [conn3] received client metadata from 127.0.0.1:49472 conn3: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.607-0400 c20022| 2019-07-25T18:25:01.607-0400 I CONNPOOL [Replication] Connecting to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.608-0400 c20023| 2019-07-25T18:25:01.608-0400 I CONNPOOL [Replication] Connecting to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.609-0400 c20021| 2019-07-25T18:25:01.609-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49474 #6 (2 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.610-0400 c20021| 2019-07-25T18:25:01.610-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49473 #7 (3 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.610-0400 c20021| 2019-07-25T18:25:01.610-0400 I NETWORK [conn6] received client metadata from 127.0.0.1:49474 conn6: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.610-0400 c20021| 2019-07-25T18:25:01.610-0400 I NETWORK [conn7] received client metadata from 127.0.0.1:49473 conn7: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.610-0400 c20021| 2019-07-25T18:25:01.610-0400 I REPL [conn1] New replica set config in use: { _id: "configsvr_failover_repro-configRS", version: 2, configsvr: true, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "Jasons-MacBook-Pro.local:20021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "Jasons-MacBook-Pro.local:20022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "Jasons-MacBook-Pro.local:20023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5d3a2c399cfa09cae7a79750') } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.610-0400 c20021| 2019-07-25T18:25:01.610-0400 I REPL [conn1] This node is Jasons-MacBook-Pro.local:20021 in the config
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.613-0400 c20021| 2019-07-25T18:25:01.612-0400 I REPL [replexec-1] Member Jasons-MacBook-Pro.local:20023 is now in state STARTUP
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.613-0400 c20021| 2019-07-25T18:25:01.613-0400 I REPL [replexec-0] Member Jasons-MacBook-Pro.local:20022 is now in state STARTUP
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.617-0400 c20021| 2019-07-25T18:25:01.616-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49475 #8 (4 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.617-0400 c20021| 2019-07-25T18:25:01.617-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49476 #9 (5 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.618-0400 c20021| 2019-07-25T18:25:01.618-0400 I NETWORK [conn9] end connection 127.0.0.1:49476 (4 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.618-0400 c20021| 2019-07-25T18:25:01.618-0400 I NETWORK [conn8] end connection 127.0.0.1:49475 (3 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.619-0400 c20022| 2019-07-25T18:25:01.619-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49477 #6 (3 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.619-0400 c20023| 2019-07-25T18:25:01.619-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49478 #7 (3 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.620-0400 c20022| 2019-07-25T18:25:01.619-0400 I NETWORK [conn6] end connection 127.0.0.1:49477 (2 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.620-0400 c20023| 2019-07-25T18:25:01.620-0400 I NETWORK [conn7] end connection 127.0.0.1:49478 (2 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.620-0400 c20023| 2019-07-25T18:25:01.620-0400 I STORAGE [replexec-1] createCollection: local.system.replset with generated UUID: f4eee805-b714-4aa7-bd0d-b8813350f86d and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.620-0400 c20022| 2019-07-25T18:25:01.620-0400 I STORAGE [replexec-0] createCollection: local.system.replset with generated UUID: 3e5f7f9f-70c8-4ba0-8948-7ef1822d7896 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.707-0400 c20023| 2019-07-25T18:25:01.707-0400 I INDEX [replexec-1] index build: done building index _id_ on ns local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.707-0400 c20022| 2019-07-25T18:25:01.707-0400 I INDEX [replexec-0] index build: done building index _id_ on ns local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.709-0400 c20023| 2019-07-25T18:25:01.709-0400 I REPL [replexec-1] New replica set config in use: { _id: "configsvr_failover_repro-configRS", version: 2, configsvr: true, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "Jasons-MacBook-Pro.local:20021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "Jasons-MacBook-Pro.local:20022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "Jasons-MacBook-Pro.local:20023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5d3a2c399cfa09cae7a79750') } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.709-0400 c20022| 2019-07-25T18:25:01.709-0400 I REPL [replexec-0] New replica set config in use: { _id: "configsvr_failover_repro-configRS", version: 2, configsvr: true, protocolVersion: 1, writeConcernMajorityJournalDefault: true, members: [ { _id: 0, host: "Jasons-MacBook-Pro.local:20021", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 1, host: "Jasons-MacBook-Pro.local:20022", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 }, { _id: 2, host: "Jasons-MacBook-Pro.local:20023", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, catchUpTimeoutMillis: -1, catchUpTakeoverDelayMillis: 30000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5d3a2c399cfa09cae7a79750') } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.709-0400 c20023| 2019-07-25T18:25:01.709-0400 I REPL [replexec-1] This node is Jasons-MacBook-Pro.local:20023 in the config
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.709-0400 c20022| 2019-07-25T18:25:01.709-0400 I REPL [replexec-0] This node is Jasons-MacBook-Pro.local:20022 in the config
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.709-0400 c20022| 2019-07-25T18:25:01.709-0400 I REPL [replexec-0] transition to STARTUP2 from STARTUP
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.709-0400 c20023| 2019-07-25T18:25:01.709-0400 I REPL [replexec-1] transition to STARTUP2 from STARTUP
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.710-0400 c20023| 2019-07-25T18:25:01.710-0400 I CONNPOOL [Replication] Connecting to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.711-0400 c20022| 2019-07-25T18:25:01.710-0400 I CONNPOOL [Replication] Connecting to Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.712-0400 c20022| 2019-07-25T18:25:01.712-0400 I REPL [replexec-2] Member Jasons-MacBook-Pro.local:20021 is now in state PRIMARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.712-0400 c20022| 2019-07-25T18:25:01.712-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49479 #8 (3 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.713-0400 c20023| 2019-07-25T18:25:01.712-0400 I REPL [replexec-2] Member Jasons-MacBook-Pro.local:20021 is now in state PRIMARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.713-0400 c20022| 2019-07-25T18:25:01.713-0400 I NETWORK [conn8] received client metadata from 127.0.0.1:49479 conn8: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.713-0400 c20023| 2019-07-25T18:25:01.713-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49480 #9 (3 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.714-0400 c20023| 2019-07-25T18:25:01.714-0400 I NETWORK [conn9] received client metadata from 127.0.0.1:49480 conn9: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.714-0400 c20022| 2019-07-25T18:25:01.714-0400 I REPL [replexec-0] Starting replication storage threads
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.715-0400 c20023| 2019-07-25T18:25:01.714-0400 I REPL [replexec-1] Starting replication storage threads
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.716-0400 c20023| 2019-07-25T18:25:01.715-0400 I REPL [replexec-0] Member Jasons-MacBook-Pro.local:20022 is now in state STARTUP2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.716-0400 c20022| 2019-07-25T18:25:01.716-0400 I REPL [replexec-3] Member Jasons-MacBook-Pro.local:20023 is now in state STARTUP2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.735-0400 c20023| 2019-07-25T18:25:01.735-0400 I STORAGE [replexec-1] createCollection: local.temp_oplog_buffer with generated UUID: 0d79bd15-d153-44dd-aa9e-8f642f964981 and options: { temp: true }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.745-0400 c20022| 2019-07-25T18:25:01.745-0400 I STORAGE [replexec-0] createCollection: local.temp_oplog_buffer with generated UUID: a63602a8-6717-4352-ab05-e5591717996a and options: { temp: true }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.803-0400 c20023| 2019-07-25T18:25:01.803-0400 I INDEX [replexec-1] index build: done building index _id_ on ns local.temp_oplog_buffer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.804-0400 c20022| 2019-07-25T18:25:01.803-0400 I INDEX [replexec-0] index build: done building index _id_ on ns local.temp_oplog_buffer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.804-0400 c20022| 2019-07-25T18:25:01.804-0400 I INITSYNC [replication-0] Starting initial sync (attempt 1 of 10)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.804-0400 c20023| 2019-07-25T18:25:01.804-0400 I INITSYNC [replication-0] Starting initial sync (attempt 1 of 10)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.805-0400 c20022| 2019-07-25T18:25:01.805-0400 I STORAGE [replication-0] Finishing collection drop for local.temp_oplog_buffer (a63602a8-6717-4352-ab05-e5591717996a).
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.805-0400 c20023| 2019-07-25T18:25:01.805-0400 I STORAGE [replication-0] Finishing collection drop for local.temp_oplog_buffer (0d79bd15-d153-44dd-aa9e-8f642f964981).
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.842-0400 c20022| 2019-07-25T18:25:01.841-0400 I STORAGE [replication-0] createCollection: local.temp_oplog_buffer with generated UUID: 765374e3-de77-43a8-91c8-d17b6c3d54bc and options: { temp: true }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.842-0400 c20023| 2019-07-25T18:25:01.841-0400 I STORAGE [replication-0] createCollection: local.temp_oplog_buffer with generated UUID: 77f3bee9-3688-4446-8487-148de8eeae40 and options: { temp: true }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.895-0400 c20022| 2019-07-25T18:25:01.895-0400 I INDEX [replication-0] index build: done building index _id_ on ns local.temp_oplog_buffer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.896-0400 c20022| 2019-07-25T18:25:01.896-0400 I REPL [replication-0] waiting for 1 pings from other members before syncing
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.915-0400 c20023| 2019-07-25T18:25:01.915-0400 I INDEX [replication-0] index build: done building index _id_ on ns local.temp_oplog_buffer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:01.916-0400 c20023| 2019-07-25T18:25:01.915-0400 I REPL [replication-0] waiting for 1 pings from other members before syncing
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.900-0400 c20022| 2019-07-25T18:25:02.900-0400 I REPL [replication-1] sync source candidate: Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.900-0400 c20022| 2019-07-25T18:25:02.900-0400 I INITSYNC [replication-1] Initial syncer oplog truncation finished in: 0ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.900-0400 c20022| 2019-07-25T18:25:02.900-0400 I REPL [replication-1] ******
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.900-0400 c20022| 2019-07-25T18:25:02.900-0400 I REPL [replication-1] creating replication oplog of size: 40MB...
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.900-0400 c20022| 2019-07-25T18:25:02.900-0400 I STORAGE [replication-1] createCollection: local.oplog.rs with generated UUID: 7f0ff0ec-c05e-440e-9aab-2bbf7acc65fd and options: { capped: true, size: 41943040, autoIndexId: false }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.921-0400 c20023| 2019-07-25T18:25:02.920-0400 I REPL [replication-1] sync source candidate: Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.921-0400 c20023| 2019-07-25T18:25:02.921-0400 I INITSYNC [replication-1] Initial syncer oplog truncation finished in: 0ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.921-0400 c20023| 2019-07-25T18:25:02.921-0400 I REPL [replication-1] ******
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.921-0400 c20023| 2019-07-25T18:25:02.921-0400 I REPL [replication-1] creating replication oplog of size: 40MB...
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.921-0400 c20023| 2019-07-25T18:25:02.921-0400 I STORAGE [replication-1] createCollection: local.oplog.rs with generated UUID: ef443948-cafd-4e05-ab9a-2309d9565d71 and options: { capped: true, size: 41943040, autoIndexId: false }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.927-0400 c20022| 2019-07-25T18:25:02.927-0400 I STORAGE [replication-1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.927-0400 c20022| 2019-07-25T18:25:02.927-0400 I STORAGE [replication-1] Scanning the oplog to determine where to place markers for truncation
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.960-0400 c20023| 2019-07-25T18:25:02.959-0400 I STORAGE [replication-1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
[js_test:configsvr_failover_repro] 2019-07-25T18:25:02.960-0400 c20023| 2019-07-25T18:25:02.959-0400 I STORAGE [replication-1] Scanning the oplog to determine where to place markers for truncation
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.248-0400 c20022| 2019-07-25T18:25:03.248-0400 I REPL [replication-1] ******
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.251-0400 c20022| 2019-07-25T18:25:03.251-0400 I REPL [replication-1] dropReplicatedDatabases - dropping 1 databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.251-0400 c20022| 2019-07-25T18:25:03.251-0400 I REPL [replication-1] dropReplicatedDatabases - dropped 1 databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.252-0400 c20022| 2019-07-25T18:25:03.251-0400 I CONNPOOL [RS] Connecting to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.253-0400 c20021| 2019-07-25T18:25:03.253-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49481 #10 (4 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.254-0400 c20021| 2019-07-25T18:25:03.253-0400 I NETWORK [conn10] received client metadata from 127.0.0.1:49481 conn10: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.262-0400 c20021| 2019-07-25T18:25:03.261-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49482 #11 (5 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.262-0400 c20021| 2019-07-25T18:25:03.262-0400 I NETWORK [conn11] received client metadata from 127.0.0.1:49482 conn11: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.264-0400 c20022| 2019-07-25T18:25:03.264-0400 I SHARDING [replication-0] Marking collection local.temp_oplog_buffer as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.264-0400 c20022| 2019-07-25T18:25:03.264-0400 I INITSYNC [replication-1] CollectionCloner::start called, on ns:admin.system.keys
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.267-0400 c20022| 2019-07-25T18:25:03.267-0400 I STORAGE [repl-writer-worker-0] createCollection: admin.system.keys with provided UUID: 7d5bfd11-2f9e-43fa-b296-05f895e4aea7 and options: { uuid: UUID("7d5bfd11-2f9e-43fa-b296-05f895e4aea7") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.398-0400 c20022| 2019-07-25T18:25:03.398-0400 I INDEX [repl-writer-worker-0] index build: starting on admin.system.keys properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.398-0400 c20022| 2019-07-25T18:25:03.398-0400 I INDEX [repl-writer-worker-0] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.399-0400 c20021| 2019-07-25T18:25:03.399-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49483 #12 (6 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.399-0400 c20021| 2019-07-25T18:25:03.399-0400 I NETWORK [conn12] received client metadata from 127.0.0.1:49483 conn12: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.402-0400 c20022| 2019-07-25T18:25:03.402-0400 I SHARDING [repl-writer-worker-13] Marking collection admin.system.keys as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.402-0400 c20022| 2019-07-25T18:25:03.402-0400 I INITSYNC [replication-0] CollectionCloner ns:admin.system.keys finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.402-0400 c20021| 2019-07-25T18:25:03.402-0400 I NETWORK [conn12] end connection 127.0.0.1:49483 (5 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.405-0400 c20023| 2019-07-25T18:25:03.405-0400 I REPL [replication-1] ******
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.406-0400 c20022| 2019-07-25T18:25:03.406-0400 I INDEX [replication-0] index build: inserted 2 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.407-0400 c20023| 2019-07-25T18:25:03.407-0400 I REPL [replication-1] dropReplicatedDatabases - dropping 1 databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.407-0400 c20023| 2019-07-25T18:25:03.407-0400 I REPL [replication-1] dropReplicatedDatabases - dropped 1 databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.407-0400 c20023| 2019-07-25T18:25:03.407-0400 I CONNPOOL [RS] Connecting to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.409-0400 c20021| 2019-07-25T18:25:03.409-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49484 #13 (6 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.410-0400 c20021| 2019-07-25T18:25:03.409-0400 I NETWORK [conn13] received client metadata from 127.0.0.1:49484 conn13: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.419-0400 c20021| 2019-07-25T18:25:03.419-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49485 #14 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.420-0400 c20021| 2019-07-25T18:25:03.420-0400 I NETWORK [conn14] received client metadata from 127.0.0.1:49485 conn14: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.421-0400 c20022| 2019-07-25T18:25:03.421-0400 I INDEX [replication-0] index build: done building index _id_ on ns admin.system.keys
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.422-0400 c20023| 2019-07-25T18:25:03.422-0400 I SHARDING [replication-0] Marking collection local.temp_oplog_buffer as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.422-0400 c20023| 2019-07-25T18:25:03.422-0400 I INITSYNC [replication-0] CollectionCloner::start called, on ns:admin.system.keys
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.425-0400 c20023| 2019-07-25T18:25:03.425-0400 I STORAGE [repl-writer-worker-7] createCollection: admin.system.keys with provided UUID: 7d5bfd11-2f9e-43fa-b296-05f895e4aea7 and options: { uuid: UUID("7d5bfd11-2f9e-43fa-b296-05f895e4aea7") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.467-0400 c20022| 2019-07-25T18:25:03.467-0400 I INITSYNC [replication-0] CollectionCloner::start called, on ns:admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.470-0400 c20022| 2019-07-25T18:25:03.470-0400 I STORAGE [repl-writer-worker-2] createCollection: admin.system.version with provided UUID: 9eb89103-fb3d-4038-bb54-c402876ca16e and options: { uuid: UUID("9eb89103-fb3d-4038-bb54-c402876ca16e") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.564-0400 c20023| 2019-07-25T18:25:03.564-0400 I INDEX [repl-writer-worker-7] index build: starting on admin.system.keys properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.keys" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.564-0400 c20023| 2019-07-25T18:25:03.564-0400 I INDEX [repl-writer-worker-7] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.566-0400 c20021| 2019-07-25T18:25:03.566-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49486 #15 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.566-0400 c20021| 2019-07-25T18:25:03.566-0400 I NETWORK [conn15] received client metadata from 127.0.0.1:49486 conn15: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.569-0400 c20023| 2019-07-25T18:25:03.569-0400 I SHARDING [repl-writer-worker-0] Marking collection admin.system.keys as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.569-0400 c20023| 2019-07-25T18:25:03.569-0400 I INITSYNC [replication-1] CollectionCloner ns:admin.system.keys finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.569-0400 c20021| 2019-07-25T18:25:03.569-0400 I NETWORK [conn15] end connection 127.0.0.1:49486 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.574-0400 c20023| 2019-07-25T18:25:03.574-0400 I INDEX [replication-1] index build: inserted 2 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.610-0400 c20022| 2019-07-25T18:25:03.610-0400 I INDEX [repl-writer-worker-2] index build: starting on admin.system.version properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.610-0400 c20022| 2019-07-25T18:25:03.610-0400 I INDEX [repl-writer-worker-2] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.611-0400 c20021| 2019-07-25T18:25:03.611-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49487 #16 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.613-0400 c20023| 2019-07-25T18:25:03.612-0400 I INDEX [replication-1] index build: done building index _id_ on ns admin.system.keys
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.613-0400 c20021| 2019-07-25T18:25:03.612-0400 I NETWORK [conn16] received client metadata from 127.0.0.1:49487 conn16: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.615-0400 c20021| 2019-07-25T18:25:03.615-0400 I REPL [replexec-1] Member Jasons-MacBook-Pro.local:20023 is now in state STARTUP2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.616-0400 c20021| 2019-07-25T18:25:03.615-0400 I REPL [replexec-0] Member Jasons-MacBook-Pro.local:20022 is now in state STARTUP2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.617-0400 c20022| 2019-07-25T18:25:03.616-0400 I COMMAND [repl-writer-worker-15] setting featureCompatibilityVersion to 4.2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.617-0400 c20022| 2019-07-25T18:25:03.616-0400 I NETWORK [repl-writer-worker-15] Skip closing connection for connection # 8
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.617-0400 c20022| 2019-07-25T18:25:03.617-0400 I NETWORK [repl-writer-worker-15] Skip closing connection for connection # 3
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.617-0400 c20022| 2019-07-25T18:25:03.617-0400 I NETWORK [repl-writer-worker-15] Skip closing connection for connection # 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.617-0400 c20022| 2019-07-25T18:25:03.617-0400 I INITSYNC [replication-1] CollectionCloner ns:admin.system.version finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.617-0400 c20021| 2019-07-25T18:25:03.617-0400 I NETWORK [conn16] end connection 127.0.0.1:49487 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.626-0400 c20022| 2019-07-25T18:25:03.626-0400 I INDEX [replication-1] index build: inserted 1 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.638-0400 c20023| 2019-07-25T18:25:03.638-0400 I INITSYNC [replication-1] CollectionCloner::start called, on ns:admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.641-0400 c20023| 2019-07-25T18:25:03.641-0400 I STORAGE [repl-writer-worker-15] createCollection: admin.system.version with provided UUID: 9eb89103-fb3d-4038-bb54-c402876ca16e and options: { uuid: UUID("9eb89103-fb3d-4038-bb54-c402876ca16e") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.649-0400 c20022| 2019-07-25T18:25:03.649-0400 I INDEX [replication-1] index build: done building index _id_ on ns admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.705-0400 c20022| 2019-07-25T18:25:03.705-0400 I INITSYNC [replication-0] CollectionCloner::start called, on ns:config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.710-0400 c20022| 2019-07-25T18:25:03.710-0400 I STORAGE [repl-writer-worker-4] createCollection: config.transactions with provided UUID: 2ff387c1-0957-46b7-b825-992aba2ed063 and options: { uuid: UUID("2ff387c1-0957-46b7-b825-992aba2ed063") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.765-0400 c20023| 2019-07-25T18:25:03.764-0400 I INDEX [repl-writer-worker-15] index build: starting on admin.system.version properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "admin.system.version" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.765-0400 c20023| 2019-07-25T18:25:03.764-0400 I INDEX [repl-writer-worker-15] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.766-0400 c20021| 2019-07-25T18:25:03.766-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49488 #17 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.766-0400 c20021| 2019-07-25T18:25:03.766-0400 I NETWORK [conn17] received client metadata from 127.0.0.1:49488 conn17: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.769-0400 c20023| 2019-07-25T18:25:03.769-0400 I COMMAND [repl-writer-worker-4] setting featureCompatibilityVersion to 4.2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.770-0400 c20023| 2019-07-25T18:25:03.769-0400 I NETWORK [repl-writer-worker-4] Skip closing connection for connection # 9
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.770-0400 c20023| 2019-07-25T18:25:03.769-0400 I NETWORK [repl-writer-worker-4] Skip closing connection for connection # 3
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.770-0400 c20023| 2019-07-25T18:25:03.769-0400 I NETWORK [repl-writer-worker-4] Skip closing connection for connection # 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.770-0400 c20023| 2019-07-25T18:25:03.769-0400 I INITSYNC [replication-0] CollectionCloner ns:admin.system.version finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.770-0400 c20021| 2019-07-25T18:25:03.770-0400 I NETWORK [conn17] end connection 127.0.0.1:49488 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.777-0400 c20023| 2019-07-25T18:25:03.776-0400 I INDEX [replication-0] index build: inserted 1 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.787-0400 c20023| 2019-07-25T18:25:03.787-0400 I INDEX [replication-0] index build: done building index _id_ on ns admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.822-0400 c20022| 2019-07-25T18:25:03.822-0400 I INDEX [repl-writer-worker-4] index build: starting on config.transactions properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.822-0400 c20022| 2019-07-25T18:25:03.822-0400 I INDEX [repl-writer-worker-4] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.823-0400 c20021| 2019-07-25T18:25:03.823-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49489 #18 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.824-0400 c20021| 2019-07-25T18:25:03.824-0400 I NETWORK [conn18] received client metadata from 127.0.0.1:49489 conn18: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.826-0400 c20022| 2019-07-25T18:25:03.826-0400 I INITSYNC [replication-1] CollectionCloner ns:config.transactions finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.826-0400 c20021| 2019-07-25T18:25:03.826-0400 I NETWORK [conn18] end connection 127.0.0.1:49489 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.827-0400 c20022| 2019-07-25T18:25:03.827-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.829-0400 c20023| 2019-07-25T18:25:03.829-0400 I INITSYNC [replication-1] CollectionCloner::start called, on ns:config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.836-0400 c20023| 2019-07-25T18:25:03.836-0400 I STORAGE [repl-writer-worker-5] createCollection: config.transactions with provided UUID: 2ff387c1-0957-46b7-b825-992aba2ed063 and options: { uuid: UUID("2ff387c1-0957-46b7-b825-992aba2ed063") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.836-0400 c20022| 2019-07-25T18:25:03.836-0400 I INDEX [replication-1] index build: done building index _id_ on ns config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.884-0400 c20022| 2019-07-25T18:25:03.884-0400 I INITSYNC [replication-1] CollectionCloner::start called, on ns:config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:03.888-0400 c20022| 2019-07-25T18:25:03.888-0400 I STORAGE [repl-writer-worker-3] createCollection: config.chunks with provided UUID: 63c02d1c-5493-42cd-9595-17fe7298418c and options: { uuid: UUID("63c02d1c-5493-42cd-9595-17fe7298418c") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.001-0400 c20023| 2019-07-25T18:25:04.001-0400 I INDEX [repl-writer-worker-5] index build: starting on config.transactions properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.transactions" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.001-0400 c20023| 2019-07-25T18:25:04.001-0400 I INDEX [repl-writer-worker-5] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.002-0400 c20021| 2019-07-25T18:25:04.002-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49490 #19 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.003-0400 c20021| 2019-07-25T18:25:04.003-0400 I NETWORK [conn19] received client metadata from 127.0.0.1:49490 conn19: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.012-0400 c20023| 2019-07-25T18:25:04.012-0400 I INITSYNC [replication-0] CollectionCloner ns:config.transactions finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.012-0400 c20021| 2019-07-25T18:25:04.012-0400 I NETWORK [conn19] end connection 127.0.0.1:49490 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.021-0400 c20023| 2019-07-25T18:25:04.021-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.032-0400 c20023| 2019-07-25T18:25:04.031-0400 I INDEX [replication-0] index build: done building index _id_ on ns config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.056-0400 c20022| 2019-07-25T18:25:04.055-0400 I INDEX [repl-writer-worker-3] index build: starting on config.chunks properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.056-0400 c20022| 2019-07-25T18:25:04.056-0400 I INDEX [repl-writer-worker-3] build may temporarily use up to 166 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.080-0400 c20023| 2019-07-25T18:25:04.080-0400 I INITSYNC [replication-0] CollectionCloner::start called, on ns:config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.084-0400 c20023| 2019-07-25T18:25:04.084-0400 I STORAGE [repl-writer-worker-6] createCollection: config.chunks with provided UUID: 63c02d1c-5493-42cd-9595-17fe7298418c and options: { uuid: UUID("63c02d1c-5493-42cd-9595-17fe7298418c") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.158-0400 c20022| 2019-07-25T18:25:04.158-0400 I INDEX [repl-writer-worker-3] index build: starting on config.chunks properties: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.159-0400 c20022| 2019-07-25T18:25:04.158-0400 I INDEX [repl-writer-worker-3] build may temporarily use up to 166 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.193-0400 c20023| 2019-07-25T18:25:04.193-0400 I INDEX [repl-writer-worker-6] index build: starting on config.chunks properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.chunks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.193-0400 c20023| 2019-07-25T18:25:04.193-0400 I INDEX [repl-writer-worker-6] build may temporarily use up to 166 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.248-0400 c20022| 2019-07-25T18:25:04.248-0400 I INDEX [repl-writer-worker-3] index build: starting on config.chunks properties: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.248-0400 c20022| 2019-07-25T18:25:04.248-0400 I INDEX [repl-writer-worker-3] build may temporarily use up to 166 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.271-0400 c20023| 2019-07-25T18:25:04.271-0400 I INDEX [repl-writer-worker-6] index build: starting on config.chunks properties: { v: 2, unique: true, key: { ns: 1, shard: 1, min: 1 }, name: "ns_1_shard_1_min_1", ns: "config.chunks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.271-0400 c20023| 2019-07-25T18:25:04.271-0400 I INDEX [repl-writer-worker-6] build may temporarily use up to 166 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.386-0400 c20022| 2019-07-25T18:25:04.386-0400 I INDEX [repl-writer-worker-3] index build: starting on config.chunks properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.386-0400 c20022| 2019-07-25T18:25:04.386-0400 I INDEX [repl-writer-worker-3] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.387-0400 c20021| 2019-07-25T18:25:04.387-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49491 #20 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.388-0400 c20021| 2019-07-25T18:25:04.388-0400 I NETWORK [conn20] received client metadata from 127.0.0.1:49491 conn20: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.390-0400 c20022| 2019-07-25T18:25:04.390-0400 I INITSYNC [replication-0] CollectionCloner ns:config.chunks finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.391-0400 c20021| 2019-07-25T18:25:04.390-0400 I NETWORK [conn20] end connection 127.0.0.1:49491 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.396-0400 c20022| 2019-07-25T18:25:04.396-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.397-0400 c20023| 2019-07-25T18:25:04.396-0400 I INDEX [repl-writer-worker-6] index build: starting on config.chunks properties: { v: 2, unique: true, key: { ns: 1, lastmod: 1 }, name: "ns_1_lastmod_1", ns: "config.chunks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.397-0400 c20023| 2019-07-25T18:25:04.396-0400 I INDEX [repl-writer-worker-6] build may temporarily use up to 166 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.428-0400 c20022| 2019-07-25T18:25:04.428-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.450-0400 c20022| 2019-07-25T18:25:04.450-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.475-0400 c20022| 2019-07-25T18:25:04.475-0400 I INDEX [replication-0] index build: done building index ns_1_min_1 on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.476-0400 c20022| 2019-07-25T18:25:04.476-0400 I INDEX [replication-0] index build: done building index ns_1_shard_1_min_1 on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.477-0400 c20022| 2019-07-25T18:25:04.476-0400 I INDEX [replication-0] index build: done building index ns_1_lastmod_1 on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.484-0400 c20022| 2019-07-25T18:25:04.484-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.485-0400 c20023| 2019-07-25T18:25:04.485-0400 I INDEX [repl-writer-worker-6] index build: starting on config.chunks properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.chunks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.485-0400 c20023| 2019-07-25T18:25:04.485-0400 I INDEX [repl-writer-worker-6] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.486-0400 c20021| 2019-07-25T18:25:04.486-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49492 #21 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.486-0400 c20021| 2019-07-25T18:25:04.486-0400 I NETWORK [conn21] received client metadata from 127.0.0.1:49492 conn21: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.488-0400 c20023| 2019-07-25T18:25:04.488-0400 I INITSYNC [replication-1] CollectionCloner ns:config.chunks finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.488-0400 c20021| 2019-07-25T18:25:04.488-0400 I NETWORK [conn21] end connection 127.0.0.1:49492 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.498-0400 c20023| 2019-07-25T18:25:04.498-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.498-0400 c20022| 2019-07-25T18:25:04.498-0400 I INDEX [replication-0] index build: done building index _id_ on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.507-0400 c20023| 2019-07-25T18:25:04.507-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.518-0400 c20023| 2019-07-25T18:25:04.518-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.540-0400 c20023| 2019-07-25T18:25:04.540-0400 I INDEX [replication-1] index build: done building index ns_1_min_1 on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.541-0400 c20023| 2019-07-25T18:25:04.541-0400 I INDEX [replication-1] index build: done building index ns_1_shard_1_min_1 on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.541-0400 c20023| 2019-07-25T18:25:04.541-0400 I INDEX [replication-1] index build: done building index ns_1_lastmod_1 on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.556-0400 c20023| 2019-07-25T18:25:04.555-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.576-0400 c20023| 2019-07-25T18:25:04.575-0400 I INDEX [replication-1] index build: done building index _id_ on ns config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.659-0400 c20022| 2019-07-25T18:25:04.659-0400 I INITSYNC [replication-0] CollectionCloner::start called, on ns:config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.662-0400 c20022| 2019-07-25T18:25:04.662-0400 I STORAGE [repl-writer-worker-6] createCollection: config.migrations with provided UUID: 91fc80cd-1974-4835-96e0-c0c276b056ee and options: { uuid: UUID("91fc80cd-1974-4835-96e0-c0c276b056ee") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.776-0400 c20023| 2019-07-25T18:25:04.775-0400 I INITSYNC [replication-1] CollectionCloner::start called, on ns:config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.780-0400 c20023| 2019-07-25T18:25:04.779-0400 I STORAGE [repl-writer-worker-8] createCollection: config.migrations with provided UUID: 91fc80cd-1974-4835-96e0-c0c276b056ee and options: { uuid: UUID("91fc80cd-1974-4835-96e0-c0c276b056ee") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.784-0400 c20022| 2019-07-25T18:25:04.784-0400 I INDEX [repl-writer-worker-6] index build: starting on config.migrations properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.784-0400 c20022| 2019-07-25T18:25:04.784-0400 I INDEX [repl-writer-worker-6] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.874-0400 c20022| 2019-07-25T18:25:04.874-0400 I INDEX [repl-writer-worker-6] index build: starting on config.migrations properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.874-0400 c20022| 2019-07-25T18:25:04.874-0400 I INDEX [repl-writer-worker-6] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.876-0400 c20021| 2019-07-25T18:25:04.876-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49493 #22 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.876-0400 c20021| 2019-07-25T18:25:04.876-0400 I NETWORK [conn22] received client metadata from 127.0.0.1:49493 conn22: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.879-0400 c20022| 2019-07-25T18:25:04.879-0400 I INITSYNC [replication-1] CollectionCloner ns:config.migrations finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.879-0400 c20021| 2019-07-25T18:25:04.879-0400 I NETWORK [conn22] end connection 127.0.0.1:49493 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.901-0400 c20023| 2019-07-25T18:25:04.901-0400 I INDEX [repl-writer-worker-8] index build: starting on config.migrations properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.migrations" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.901-0400 c20023| 2019-07-25T18:25:04.901-0400 I INDEX [repl-writer-worker-8] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.903-0400 c20022| 2019-07-25T18:25:04.902-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.927-0400 c20022| 2019-07-25T18:25:04.926-0400 I INDEX [replication-1] index build: done building index ns_1_min_1 on ns config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.949-0400 c20022| 2019-07-25T18:25:04.949-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:04.963-0400 c20022| 2019-07-25T18:25:04.963-0400 I INDEX [replication-1] index build: done building index _id_ on ns config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.009-0400 c20023| 2019-07-25T18:25:05.009-0400 I INDEX [repl-writer-worker-8] index build: starting on config.migrations properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.migrations" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.009-0400 c20023| 2019-07-25T18:25:05.009-0400 I INDEX [repl-writer-worker-8] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.011-0400 c20021| 2019-07-25T18:25:05.011-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49494 #23 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.011-0400 c20021| 2019-07-25T18:25:05.011-0400 I NETWORK [conn23] received client metadata from 127.0.0.1:49494 conn23: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.016-0400 c20023| 2019-07-25T18:25:05.016-0400 I INITSYNC [replication-0] CollectionCloner ns:config.migrations finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.016-0400 c20021| 2019-07-25T18:25:05.016-0400 I NETWORK [conn23] end connection 127.0.0.1:49494 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.019-0400 c20023| 2019-07-25T18:25:05.019-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.029-0400 c20023| 2019-07-25T18:25:05.028-0400 I INDEX [replication-0] index build: done building index ns_1_min_1 on ns config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.042-0400 c20023| 2019-07-25T18:25:05.042-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.053-0400 c20022| 2019-07-25T18:25:05.053-0400 I INITSYNC [replication-1] CollectionCloner::start called, on ns:config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.060-0400 c20022| 2019-07-25T18:25:05.060-0400 I STORAGE [repl-writer-worker-7] createCollection: config.shards with provided UUID: 9dc58f2f-04de-441a-b6d7-36d58adac3fa and options: { uuid: UUID("9dc58f2f-04de-441a-b6d7-36d58adac3fa") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.072-0400 c20023| 2019-07-25T18:25:05.072-0400 I INDEX [replication-0] index build: done building index _id_ on ns config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.167-0400 c20023| 2019-07-25T18:25:05.167-0400 I INITSYNC [replication-0] CollectionCloner::start called, on ns:config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.170-0400 c20023| 2019-07-25T18:25:05.170-0400 I STORAGE [repl-writer-worker-9] createCollection: config.shards with provided UUID: 9dc58f2f-04de-441a-b6d7-36d58adac3fa and options: { uuid: UUID("9dc58f2f-04de-441a-b6d7-36d58adac3fa") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.185-0400 c20022| 2019-07-25T18:25:05.185-0400 I INDEX [repl-writer-worker-7] index build: starting on config.shards properties: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.185-0400 c20022| 2019-07-25T18:25:05.185-0400 I INDEX [repl-writer-worker-7] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.270-0400 c20022| 2019-07-25T18:25:05.270-0400 I INDEX [repl-writer-worker-7] index build: starting on config.shards properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.270-0400 c20022| 2019-07-25T18:25:05.270-0400 I INDEX [repl-writer-worker-7] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.272-0400 c20021| 2019-07-25T18:25:05.271-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49495 #24 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.272-0400 c20021| 2019-07-25T18:25:05.272-0400 I NETWORK [conn24] received client metadata from 127.0.0.1:49495 conn24: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.275-0400 c20022| 2019-07-25T18:25:05.274-0400 I INITSYNC [replication-0] CollectionCloner ns:config.shards finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.275-0400 c20021| 2019-07-25T18:25:05.275-0400 I NETWORK [conn24] end connection 127.0.0.1:49495 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.280-0400 c20022| 2019-07-25T18:25:05.280-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.301-0400 c20022| 2019-07-25T18:25:05.301-0400 I INDEX [replication-0] index build: done building index host_1 on ns config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.315-0400 c20022| 2019-07-25T18:25:05.315-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.323-0400 c20023| 2019-07-25T18:25:05.323-0400 I INDEX [repl-writer-worker-9] index build: starting on config.shards properties: { v: 2, unique: true, key: { host: 1 }, name: "host_1", ns: "config.shards" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.324-0400 c20023| 2019-07-25T18:25:05.323-0400 I INDEX [repl-writer-worker-9] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.335-0400 c20022| 2019-07-25T18:25:05.335-0400 I INDEX [replication-0] index build: done building index _id_ on ns config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.423-0400 c20023| 2019-07-25T18:25:05.423-0400 I INDEX [repl-writer-worker-9] index build: starting on config.shards properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.shards" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.423-0400 c20023| 2019-07-25T18:25:05.423-0400 I INDEX [repl-writer-worker-9] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.424-0400 c20021| 2019-07-25T18:25:05.424-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49496 #25 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.425-0400 c20021| 2019-07-25T18:25:05.424-0400 I NETWORK [conn25] received client metadata from 127.0.0.1:49496 conn25: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.427-0400 c20023| 2019-07-25T18:25:05.426-0400 I INITSYNC [replication-1] CollectionCloner ns:config.shards finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.427-0400 c20021| 2019-07-25T18:25:05.427-0400 I NETWORK [conn25] end connection 127.0.0.1:49496 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.433-0400 c20023| 2019-07-25T18:25:05.433-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.443-0400 c20023| 2019-07-25T18:25:05.443-0400 I INDEX [replication-1] index build: done building index host_1 on ns config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.456-0400 c20023| 2019-07-25T18:25:05.456-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.456-0400 c20022| 2019-07-25T18:25:05.456-0400 I INITSYNC [replication-0] CollectionCloner::start called, on ns:config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.457-0400 c20021| 2019-07-25T18:25:05.457-0400 I SHARDING [conn11] Marking collection config.lockpings as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.459-0400 c20022| 2019-07-25T18:25:05.459-0400 I STORAGE [repl-writer-worker-8] createCollection: config.lockpings with provided UUID: dd0672e8-19c6-432b-9b6a-d21b02c0bf6e and options: { uuid: UUID("dd0672e8-19c6-432b-9b6a-d21b02c0bf6e") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.466-0400 c20023| 2019-07-25T18:25:05.466-0400 I INDEX [replication-1] index build: done building index _id_ on ns config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.565-0400 c20022| 2019-07-25T18:25:05.565-0400 I INDEX [repl-writer-worker-8] index build: starting on config.lockpings properties: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.565-0400 c20022| 2019-07-25T18:25:05.565-0400 I INDEX [repl-writer-worker-8] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.588-0400 c20023| 2019-07-25T18:25:05.587-0400 I INITSYNC [replication-1] CollectionCloner::start called, on ns:config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.591-0400 c20023| 2019-07-25T18:25:05.591-0400 I STORAGE [repl-writer-worker-10] createCollection: config.lockpings with provided UUID: dd0672e8-19c6-432b-9b6a-d21b02c0bf6e and options: { uuid: UUID("dd0672e8-19c6-432b-9b6a-d21b02c0bf6e") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.688-0400 c20022| 2019-07-25T18:25:05.687-0400 I INDEX [repl-writer-worker-8] index build: starting on config.lockpings properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.688-0400 c20022| 2019-07-25T18:25:05.687-0400 I INDEX [repl-writer-worker-8] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.688-0400 c20023| 2019-07-25T18:25:05.688-0400 I INDEX [repl-writer-worker-10] index build: starting on config.lockpings properties: { v: 2, key: { ping: 1 }, name: "ping_1", ns: "config.lockpings" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.688-0400 c20023| 2019-07-25T18:25:05.688-0400 I INDEX [repl-writer-worker-10] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.689-0400 c20021| 2019-07-25T18:25:05.689-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49497 #26 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.689-0400 c20021| 2019-07-25T18:25:05.689-0400 I NETWORK [conn26] received client metadata from 127.0.0.1:49497 conn26: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.692-0400 c20022| 2019-07-25T18:25:05.691-0400 I INITSYNC [replication-1] CollectionCloner ns:config.lockpings finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.692-0400 c20021| 2019-07-25T18:25:05.692-0400 I NETWORK [conn26] end connection 127.0.0.1:49497 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.699-0400 c20022| 2019-07-25T18:25:05.699-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.722-0400 c20022| 2019-07-25T18:25:05.722-0400 I INDEX [replication-1] index build: done building index ping_1 on ns config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.724-0400 c20022| 2019-07-25T18:25:05.724-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.744-0400 c20022| 2019-07-25T18:25:05.744-0400 I INDEX [replication-1] index build: done building index _id_ on ns config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.782-0400 c20023| 2019-07-25T18:25:05.782-0400 I INDEX [repl-writer-worker-10] index build: starting on config.lockpings properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.lockpings" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.782-0400 c20023| 2019-07-25T18:25:05.782-0400 I INDEX [repl-writer-worker-10] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.784-0400 c20021| 2019-07-25T18:25:05.783-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49498 #27 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.786-0400 c20021| 2019-07-25T18:25:05.785-0400 I NETWORK [conn27] received client metadata from 127.0.0.1:49498 conn27: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.790-0400 c20023| 2019-07-25T18:25:05.790-0400 I INITSYNC [replication-0] CollectionCloner ns:config.lockpings finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.790-0400 c20021| 2019-07-25T18:25:05.790-0400 I NETWORK [conn27] end connection 127.0.0.1:49498 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.793-0400 c20023| 2019-07-25T18:25:05.793-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.804-0400 c20023| 2019-07-25T18:25:05.804-0400 I INDEX [replication-0] index build: done building index ping_1 on ns config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.816-0400 c20023| 2019-07-25T18:25:05.816-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.817-0400 c20022| 2019-07-25T18:25:05.817-0400 I INITSYNC [replication-1] CollectionCloner::start called, on ns:config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.824-0400 c20022| 2019-07-25T18:25:05.824-0400 I STORAGE [repl-writer-worker-9] createCollection: config.locks with provided UUID: dd929b42-c13c-4682-8066-ef80c2666228 and options: { uuid: UUID("dd929b42-c13c-4682-8066-ef80c2666228") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.863-0400 c20023| 2019-07-25T18:25:05.863-0400 I INDEX [replication-0] index build: done building index _id_ on ns config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.921-0400 c20022| 2019-07-25T18:25:05.921-0400 I INDEX [repl-writer-worker-9] index build: starting on config.locks properties: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.921-0400 c20022| 2019-07-25T18:25:05.921-0400 I INDEX [repl-writer-worker-9] build may temporarily use up to 250 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.935-0400 c20023| 2019-07-25T18:25:05.934-0400 I INITSYNC [replication-0] CollectionCloner::start called, on ns:config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.939-0400 c20023| 2019-07-25T18:25:05.939-0400 I STORAGE [repl-writer-worker-11] createCollection: config.locks with provided UUID: dd929b42-c13c-4682-8066-ef80c2666228 and options: { uuid: UUID("dd929b42-c13c-4682-8066-ef80c2666228") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.976-0400 c20022| 2019-07-25T18:25:05.975-0400 I INDEX [repl-writer-worker-9] index build: starting on config.locks properties: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:05.976-0400 c20022| 2019-07-25T18:25:05.975-0400 I INDEX [repl-writer-worker-9] build may temporarily use up to 250 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.030-0400 c20023| 2019-07-25T18:25:06.030-0400 I INDEX [repl-writer-worker-11] index build: starting on config.locks properties: { v: 2, key: { ts: 1 }, name: "ts_1", ns: "config.locks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.030-0400 c20023| 2019-07-25T18:25:06.030-0400 I INDEX [repl-writer-worker-11] build may temporarily use up to 250 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.097-0400 c20022| 2019-07-25T18:25:06.097-0400 I INDEX [repl-writer-worker-9] index build: starting on config.locks properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.097-0400 c20022| 2019-07-25T18:25:06.097-0400 I INDEX [repl-writer-worker-9] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.099-0400 c20021| 2019-07-25T18:25:06.098-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49499 #28 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.099-0400 c20021| 2019-07-25T18:25:06.099-0400 I NETWORK [conn28] received client metadata from 127.0.0.1:49499 conn28: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.102-0400 c20022| 2019-07-25T18:25:06.102-0400 I INITSYNC [replication-0] CollectionCloner ns:config.locks finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.102-0400 c20021| 2019-07-25T18:25:06.102-0400 I NETWORK [conn28] end connection 127.0.0.1:49499 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.109-0400 c20023| 2019-07-25T18:25:06.109-0400 I INDEX [repl-writer-worker-11] index build: starting on config.locks properties: { v: 2, key: { state: 1, process: 1 }, name: "state_1_process_1", ns: "config.locks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.109-0400 c20023| 2019-07-25T18:25:06.109-0400 I INDEX [repl-writer-worker-11] build may temporarily use up to 250 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.123-0400 c20022| 2019-07-25T18:25:06.122-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.175-0400 c20022| 2019-07-25T18:25:06.175-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.184-0400 c20023| 2019-07-25T18:25:06.184-0400 I INDEX [repl-writer-worker-11] index build: starting on config.locks properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.locks" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.184-0400 c20023| 2019-07-25T18:25:06.184-0400 I INDEX [repl-writer-worker-11] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.185-0400 c20021| 2019-07-25T18:25:06.185-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49500 #29 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.186-0400 c20021| 2019-07-25T18:25:06.186-0400 I NETWORK [conn29] received client metadata from 127.0.0.1:49500 conn29: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.189-0400 c20023| 2019-07-25T18:25:06.189-0400 I INITSYNC [replication-1] CollectionCloner ns:config.locks finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.189-0400 c20021| 2019-07-25T18:25:06.189-0400 I NETWORK [conn29] end connection 127.0.0.1:49500 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.194-0400 c20023| 2019-07-25T18:25:06.194-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.194-0400 c20022| 2019-07-25T18:25:06.194-0400 I INDEX [replication-0] index build: done building index ts_1 on ns config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.194-0400 c20022| 2019-07-25T18:25:06.194-0400 I INDEX [replication-0] index build: done building index state_1_process_1 on ns config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.205-0400 c20023| 2019-07-25T18:25:06.204-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.207-0400 c20022| 2019-07-25T18:25:06.207-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.218-0400 c20023| 2019-07-25T18:25:06.218-0400 I INDEX [replication-1] index build: done building index ts_1 on ns config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.218-0400 c20023| 2019-07-25T18:25:06.218-0400 I INDEX [replication-1] index build: done building index state_1_process_1 on ns config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.226-0400 c20023| 2019-07-25T18:25:06.226-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.226-0400 c20022| 2019-07-25T18:25:06.226-0400 I INDEX [replication-0] index build: done building index _id_ on ns config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.245-0400 c20023| 2019-07-25T18:25:06.245-0400 I INDEX [replication-1] index build: done building index _id_ on ns config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.283-0400 c20022| 2019-07-25T18:25:06.283-0400 I INITSYNC [replication-0] CollectionCloner::start called, on ns:config.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.287-0400 c20022| 2019-07-25T18:25:06.287-0400 I STORAGE [repl-writer-worker-10] createCollection: config.version with provided UUID: e2da88e1-afec-4a2a-9c9c-0b4b51073f63 and options: { uuid: UUID("e2da88e1-afec-4a2a-9c9c-0b4b51073f63") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.295-0400 c20023| 2019-07-25T18:25:06.295-0400 I INITSYNC [replication-1] CollectionCloner::start called, on ns:config.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.298-0400 c20023| 2019-07-25T18:25:06.298-0400 I STORAGE [repl-writer-worker-12] createCollection: config.version with provided UUID: e2da88e1-afec-4a2a-9c9c-0b4b51073f63 and options: { uuid: UUID("e2da88e1-afec-4a2a-9c9c-0b4b51073f63") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.416-0400 c20023| 2019-07-25T18:25:06.415-0400 I INDEX [repl-writer-worker-12] index build: starting on config.version properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.416-0400 c20022| 2019-07-25T18:25:06.415-0400 I INDEX [repl-writer-worker-10] index build: starting on config.version properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.version" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.416-0400 c20022| 2019-07-25T18:25:06.416-0400 I INDEX [repl-writer-worker-10] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.416-0400 c20023| 2019-07-25T18:25:06.416-0400 I INDEX [repl-writer-worker-12] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.418-0400 c20021| 2019-07-25T18:25:06.417-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49501 #30 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.418-0400 c20021| 2019-07-25T18:25:06.418-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49502 #31 (9 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.418-0400 c20021| 2019-07-25T18:25:06.418-0400 I NETWORK [conn30] received client metadata from 127.0.0.1:49501 conn30: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.418-0400 c20021| 2019-07-25T18:25:06.418-0400 I NETWORK [conn31] received client metadata from 127.0.0.1:49502 conn31: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.421-0400 c20022| 2019-07-25T18:25:06.421-0400 I SHARDING [repl-writer-worker-11] Marking collection config.version as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.421-0400 c20023| 2019-07-25T18:25:06.421-0400 I SHARDING [repl-writer-worker-13] Marking collection config.version as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.422-0400 c20022| 2019-07-25T18:25:06.421-0400 I INITSYNC [replication-1] CollectionCloner ns:config.version finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.422-0400 c20021| 2019-07-25T18:25:06.421-0400 I NETWORK [conn30] end connection 127.0.0.1:49501 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.422-0400 c20023| 2019-07-25T18:25:06.421-0400 I INITSYNC [replication-0] CollectionCloner ns:config.version finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.422-0400 c20021| 2019-07-25T18:25:06.421-0400 I NETWORK [conn31] end connection 127.0.0.1:49502 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.423-0400 c20022| 2019-07-25T18:25:06.422-0400 I INDEX [replication-1] index build: inserted 1 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.423-0400 c20023| 2019-07-25T18:25:06.423-0400 I INDEX [replication-0] index build: inserted 1 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.432-0400 c20022| 2019-07-25T18:25:06.432-0400 I INDEX [replication-1] index build: done building index _id_ on ns config.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.443-0400 c20023| 2019-07-25T18:25:06.442-0400 I INDEX [replication-0] index build: done building index _id_ on ns config.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.483-0400 c20022| 2019-07-25T18:25:06.482-0400 I INITSYNC [replication-1] CollectionCloner::start called, on ns:config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.489-0400 c20022| 2019-07-25T18:25:06.489-0400 I STORAGE [repl-writer-worker-12] createCollection: config.tags with provided UUID: f1867b25-f9fb-445f-8bca-c3b4a21b38ee and options: { uuid: UUID("f1867b25-f9fb-445f-8bca-c3b4a21b38ee") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.494-0400 c20023| 2019-07-25T18:25:06.494-0400 I INITSYNC [replication-0] CollectionCloner::start called, on ns:config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.498-0400 c20023| 2019-07-25T18:25:06.497-0400 I STORAGE [repl-writer-worker-14] createCollection: config.tags with provided UUID: f1867b25-f9fb-445f-8bca-c3b4a21b38ee and options: { uuid: UUID("f1867b25-f9fb-445f-8bca-c3b4a21b38ee") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.620-0400 c20023| 2019-07-25T18:25:06.620-0400 I INDEX [repl-writer-worker-14] index build: starting on config.tags properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.621-0400 c20023| 2019-07-25T18:25:06.620-0400 I INDEX [repl-writer-worker-14] build may temporarily use up to 250 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.630-0400 c20022| 2019-07-25T18:25:06.630-0400 I INDEX [repl-writer-worker-12] index build: starting on config.tags properties: { v: 2, unique: true, key: { ns: 1, min: 1 }, name: "ns_1_min_1", ns: "config.tags" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.631-0400 c20022| 2019-07-25T18:25:06.630-0400 I INDEX [repl-writer-worker-12] build may temporarily use up to 250 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.674-0400 c20023| 2019-07-25T18:25:06.674-0400 I INDEX [repl-writer-worker-14] index build: starting on config.tags properties: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.674-0400 c20023| 2019-07-25T18:25:06.674-0400 I INDEX [repl-writer-worker-14] build may temporarily use up to 250 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.720-0400 c20022| 2019-07-25T18:25:06.720-0400 I INDEX [repl-writer-worker-12] index build: starting on config.tags properties: { v: 2, key: { ns: 1, tag: 1 }, name: "ns_1_tag_1", ns: "config.tags" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.720-0400 c20022| 2019-07-25T18:25:06.720-0400 I INDEX [repl-writer-worker-12] build may temporarily use up to 250 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.776-0400 c20023| 2019-07-25T18:25:06.776-0400 I INDEX [repl-writer-worker-14] index build: starting on config.tags properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.776-0400 c20023| 2019-07-25T18:25:06.776-0400 I INDEX [repl-writer-worker-14] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.777-0400 c20021| 2019-07-25T18:25:06.777-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49503 #32 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.778-0400 c20021| 2019-07-25T18:25:06.777-0400 I NETWORK [conn32] received client metadata from 127.0.0.1:49503 conn32: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.784-0400 c20023| 2019-07-25T18:25:06.783-0400 I INITSYNC [replication-1] CollectionCloner ns:config.tags finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.784-0400 c20021| 2019-07-25T18:25:06.784-0400 I NETWORK [conn32] end connection 127.0.0.1:49503 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.786-0400 c20023| 2019-07-25T18:25:06.786-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.810-0400 c20023| 2019-07-25T18:25:06.810-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.837-0400 c20023| 2019-07-25T18:25:06.837-0400 I INDEX [replication-1] index build: done building index ns_1_min_1 on ns config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.838-0400 c20023| 2019-07-25T18:25:06.838-0400 I INDEX [replication-1] index build: done building index ns_1_tag_1 on ns config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.845-0400 c20023| 2019-07-25T18:25:06.845-0400 I INDEX [replication-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.846-0400 c20022| 2019-07-25T18:25:06.846-0400 I INDEX [repl-writer-worker-12] index build: starting on config.tags properties: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.tags" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.846-0400 c20022| 2019-07-25T18:25:06.846-0400 I INDEX [repl-writer-worker-12] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.848-0400 c20021| 2019-07-25T18:25:06.847-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49504 #33 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.850-0400 c20021| 2019-07-25T18:25:06.850-0400 I NETWORK [conn33] received client metadata from 127.0.0.1:49504 conn33: { driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.854-0400 c20022| 2019-07-25T18:25:06.853-0400 I INITSYNC [replication-0] CollectionCloner ns:config.tags finished cloning with status: OK
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.854-0400 c20021| 2019-07-25T18:25:06.854-0400 I NETWORK [conn33] end connection 127.0.0.1:49504 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.857-0400 c20022| 2019-07-25T18:25:06.857-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.857-0400 c20023| 2019-07-25T18:25:06.857-0400 I INDEX [replication-1] index build: done building index _id_ on ns config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.868-0400 c20022| 2019-07-25T18:25:06.868-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.889-0400 c20022| 2019-07-25T18:25:06.889-0400 I INDEX [replication-0] index build: done building index ns_1_min_1 on ns config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.890-0400 c20022| 2019-07-25T18:25:06.890-0400 I INDEX [replication-0] index build: done building index ns_1_tag_1 on ns config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.892-0400 c20022| 2019-07-25T18:25:06.892-0400 I INDEX [replication-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.906-0400 c20022| 2019-07-25T18:25:06.906-0400 I INDEX [replication-0] index build: done building index _id_ on ns config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.967-0400 c20023| 2019-07-25T18:25:06.967-0400 I INITSYNC [replication-1] Finished cloning data: OK. Beginning oplog replay.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.969-0400 c20023| 2019-07-25T18:25:06.969-0400 I INITSYNC [replication-0] No need to apply operations. (currently at { : Timestamp(1564093501, 13) })
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.970-0400 c20023| 2019-07-25T18:25:06.970-0400 I INITSYNC [replication-0] Finished fetching oplog during initial sync: CallbackCanceled: error in fetcher batch callback: oplog fetcher is shutting down. Last fetched optime: { ts: Timestamp(0, 0), t: -1 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.970-0400 c20023| 2019-07-25T18:25:06.970-0400 I INITSYNC [replication-0] Initial sync attempt finishing up.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.970-0400 c20023| 2019-07-25T18:25:06.970-0400 I INITSYNC [replication-0] Initial Sync Attempt Statistics: { failedInitialSyncAttempts: 0, maxFailedInitialSyncAttempts: 10, initialSyncStart: new Date(1564093501804), initialSyncAttempts: [], fetchedMissingDocs: 0, appliedOps: 0, initialSyncOplogStart: Timestamp(1564093501, 13), initialSyncOplogEnd: Timestamp(1564093501, 13), databases: { databasesCloned: 2, admin: { collections: 2, clonedCollections: 2, start: new Date(1564093503418), end: new Date(1564093503825), elapsedMillis: 407, admin.system.keys: { documentsToCopy: 2, documentsCopied: 2, indexes: 1, fetchedBatches: 1, start: new Date(1564093503422), end: new Date(1564093503638), elapsedMillis: 216, receivedBatches: 1 }, admin.system.version: { documentsToCopy: 1, documentsCopied: 1, indexes: 1, fetchedBatches: 1, start: new Date(1564093503638), end: new Date(1564093503825), elapsedMillis: 187, receivedBatches: 1 } }, config: { collections: 8, clonedCollections: 8, start: new Date(1564093503825), end: new Date(1564093506967), elapsedMillis: 3142, config.transactions: { documentsToCopy: 0, documentsCopied: 0, indexes: 1, fetchedBatches: 0, start: new Date(1564093503829), end: new Date(1564093504080), elapsedMillis: 251, receivedBatches: 0 }, config.chunks: { documentsToCopy: 0, documentsCopied: 0, indexes: 4, fetchedBatches: 0, start: new Date(1564093504080), end: new Date(1564093504776), elapsedMillis: 696, receivedBatches: 0 }, config.migrations: { documentsToCopy: 0, documentsCopied: 0, indexes: 2, fetchedBatches: 0, start: new Date(1564093504775), end: new Date(1564093505167), elapsedMillis: 392, receivedBatches: 0 }, config.shards: { documentsToCopy: 0, documentsCopied: 0, indexes: 2, fetchedBatches: 0, start: new Date(1564093505167), end: new Date(1564093505588), elapsedMillis: 421, receivedBatches: 0 }, config.lockpings: { documentsToCopy: 0, documentsCopied: 0, indexes: 2, fetchedBatches: 0, start: new Date(1564093505588), end: new Date(1564093505935), elapsedMillis: 347, receivedBatches: 0 }, config.locks: { documentsToCopy: 0, documentsCopied: 0, indexes: 3, fetchedBatches: 0, start: new Date(1564093505935), end: new Date(1564093506295), elapsedMillis: 360, receivedBatches: 0 }, config.version: { documentsToCopy: 1, documentsCopied: 1, indexes: 1, fetchedBatches: 1, start: new Date(1564093506295), end: new Date(1564093506494), elapsedMillis: 199, receivedBatches: 1 }, config.tags: { documentsToCopy: 0, documentsCopied: 0, indexes: 3, fetchedBatches: 0, start: new Date(1564093506494), end: new Date(1564093506967), elapsedMillis: 473, receivedBatches: 0 } } } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:06.971-0400 c20023| 2019-07-25T18:25:06.971-0400 I STORAGE [replication-1] Finishing collection drop for local.temp_oplog_buffer (77f3bee9-3688-4446-8487-148de8eeae40).
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.000-0400 c20022| 2019-07-25T18:25:06.999-0400 I INITSYNC [replication-0] Finished cloning data: OK. Beginning oplog replay.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.001-0400 c20022| 2019-07-25T18:25:07.001-0400 I INITSYNC [replication-1] No need to apply operations. (currently at { : Timestamp(1564093501, 13) })
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.002-0400 c20022| 2019-07-25T18:25:07.002-0400 I INITSYNC [replication-1] Finished fetching oplog during initial sync: CallbackCanceled: error in fetcher batch callback: oplog fetcher is shutting down. Last fetched optime: { ts: Timestamp(0, 0), t: -1 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.003-0400 c20022| 2019-07-25T18:25:07.002-0400 I INITSYNC [replication-1] Initial sync attempt finishing up.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.003-0400 c20022| 2019-07-25T18:25:07.003-0400 I INITSYNC [replication-1] Initial Sync Attempt Statistics: { failedInitialSyncAttempts: 0, maxFailedInitialSyncAttempts: 10, initialSyncStart: new Date(1564093501804), initialSyncAttempts: [], fetchedMissingDocs: 0, appliedOps: 0, initialSyncOplogStart: Timestamp(1564093501, 13), initialSyncOplogEnd: Timestamp(1564093501, 13), databases: { databasesCloned: 2, admin: { collections: 2, clonedCollections: 2, start: new Date(1564093503261), end: new Date(1564093503701), elapsedMillis: 440, admin.system.keys: { documentsToCopy: 2, documentsCopied: 2, indexes: 1, fetchedBatches: 1, start: new Date(1564093503264), end: new Date(1564093503467), elapsedMillis: 203, receivedBatches: 1 }, admin.system.version: { documentsToCopy: 1, documentsCopied: 1, indexes: 1, fetchedBatches: 1, start: new Date(1564093503467), end: new Date(1564093503701), elapsedMillis: 234, receivedBatches: 1 } }, config: { collections: 8, clonedCollections: 8, start: new Date(1564093503701), end: new Date(1564093507000), elapsedMillis: 3299, config.transactions: { documentsToCopy: 0, documentsCopied: 0, indexes: 1, fetchedBatches: 0, start: new Date(1564093503705), end: new Date(1564093503884), elapsedMillis: 179, receivedBatches: 0 }, config.chunks: { documentsToCopy: 0, documentsCopied: 0, indexes: 4, fetchedBatches: 0, start: new Date(1564093503884), end: new Date(1564093504659), elapsedMillis: 775, receivedBatches: 0 }, config.migrations: { documentsToCopy: 0, documentsCopied: 0, indexes: 2, fetchedBatches: 0, start: new Date(1564093504659), end: new Date(1564093505053), elapsedMillis: 394, receivedBatches: 0 }, config.shards: { documentsToCopy: 0, documentsCopied: 0, indexes: 2, fetchedBatches: 0, start: new Date(1564093505053), end: new Date(1564093505456), elapsedMillis: 403, receivedBatches: 0 }, config.lockpings: { documentsToCopy: 0, documentsCopied: 0, indexes: 2, fetchedBatches: 0, start: new Date(1564093505456), end: new Date(1564093505817), elapsedMillis: 361, receivedBatches: 0 }, config.locks: { documentsToCopy: 0, documentsCopied: 0, indexes: 3, fetchedBatches: 0, start: new Date(1564093505817), end: new Date(1564093506283), elapsedMillis: 466, receivedBatches: 0 }, config.version: { documentsToCopy: 1, documentsCopied: 1, indexes: 1, fetchedBatches: 1, start: new Date(1564093506283), end: new Date(1564093506483), elapsedMillis: 200, receivedBatches: 1 }, config.tags: { documentsToCopy: 0, documentsCopied: 0, indexes: 3, fetchedBatches: 0, start: new Date(1564093506483), end: new Date(1564093507000), elapsedMillis: 517, receivedBatches: 0 } } } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.003-0400 c20022| 2019-07-25T18:25:07.003-0400 I STORAGE [replication-0] Finishing collection drop for local.temp_oplog_buffer (765374e3-de77-43a8-91c8-d17b6c3d54bc).
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.011-0400 c20023| 2019-07-25T18:25:07.011-0400 I SHARDING [replication-1] Marking collection config.transactions as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.012-0400 c20023| 2019-07-25T18:25:07.012-0400 I SHARDING [replication-1] Marking collection local.replset.oplogTruncateAfterPoint as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.028-0400 c20023| 2019-07-25T18:25:07.028-0400 I INITSYNC [replication-1] initial sync done; took 5s.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.029-0400 c20023| 2019-07-25T18:25:07.029-0400 I REPL [replication-1] transition to RECOVERING from STARTUP2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.029-0400 c20023| 2019-07-25T18:25:07.029-0400 I REPL [replication-1] Starting replication fetcher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.030-0400 c20023| 2019-07-25T18:25:07.029-0400 I REPL [replication-1] Starting replication applier thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.030-0400 c20023| 2019-07-25T18:25:07.029-0400 I REPL [replication-1] Starting replication reporter thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.030-0400 c20023| 2019-07-25T18:25:07.029-0400 I REPL [rsSync-0] Starting oplog application
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.034-0400 c20023| 2019-07-25T18:25:07.034-0400 I REPL [rsSync-0] transition to SECONDARY from RECOVERING
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.034-0400 c20023| 2019-07-25T18:25:07.034-0400 I REPL [rsSync-0] Resetting sync source to empty, which was :27017
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.035-0400 c20023| 2019-07-25T18:25:07.034-0400 I REPL [rsBackgroundSync] could not find member to sync from
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.043-0400 c20022| 2019-07-25T18:25:07.043-0400 I SHARDING [replication-0] Marking collection config.transactions as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.045-0400 c20022| 2019-07-25T18:25:07.045-0400 I SHARDING [replication-0] Marking collection local.replset.oplogTruncateAfterPoint as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.055-0400 c20022| 2019-07-25T18:25:07.055-0400 I INITSYNC [replication-0] initial sync done; took 5s.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.056-0400 c20022| 2019-07-25T18:25:07.055-0400 I REPL [replication-0] transition to RECOVERING from STARTUP2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.056-0400 c20022| 2019-07-25T18:25:07.056-0400 I REPL [replication-0] Starting replication fetcher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.057-0400 c20022| 2019-07-25T18:25:07.056-0400 I REPL [replication-0] Starting replication applier thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.057-0400 c20022| 2019-07-25T18:25:07.056-0400 I REPL [replication-0] Starting replication reporter thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.057-0400 c20022| 2019-07-25T18:25:07.056-0400 I REPL [rsSync-0] Starting oplog application
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.063-0400 c20022| 2019-07-25T18:25:07.062-0400 I REPL [rsSync-0] transition to SECONDARY from RECOVERING
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.063-0400 c20022| 2019-07-25T18:25:07.063-0400 I REPL [rsSync-0] Resetting sync source to empty, which was :27017
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.063-0400 c20022| 2019-07-25T18:25:07.063-0400 I REPL [rsBackgroundSync] could not find member to sync from
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.065-0400 c20022| 2019-07-25T18:25:07.065-0400 I REPL [replexec-2] Member Jasons-MacBook-Pro.local:20023 is now in state SECONDARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.290-0400 AwaitNodesAgreeOnPrimary: Waiting for nodes to agree on any primary.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.319-0400 AwaitNodesAgreeOnPrimary: Nodes agreed on primary Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.349-0400 Set shouldWaitForKeys from RS options: false
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.379-0400 AwaitLastStableRecoveryTimestamp: Beginning for [
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.379-0400 "Jasons-MacBook-Pro.local:20021",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.379-0400 "Jasons-MacBook-Pro.local:20022",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.379-0400 "Jasons-MacBook-Pro.local:20023"
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.379-0400 ]
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.430-0400 "Config servers: configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023"
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.431-0400 2019-07-25T18:25:07.431-0400 I NETWORK [js] Starting new replica set monitor for configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.432-0400 2019-07-25T18:25:07.432-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.432-0400 2019-07-25T18:25:07.432-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.432-0400 2019-07-25T18:25:07.432-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.433-0400 c20022| 2019-07-25T18:25:07.433-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49505 #22 (4 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.434-0400 c20021| 2019-07-25T18:25:07.434-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49506 #34 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.434-0400 c20022| 2019-07-25T18:25:07.434-0400 I NETWORK [conn22] received client metadata from 127.0.0.1:49505 conn22: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.434-0400 c20023| 2019-07-25T18:25:07.434-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49507 #22 (4 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.435-0400 c20021| 2019-07-25T18:25:07.434-0400 I NETWORK [conn34] received client metadata from 127.0.0.1:49506 conn34: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.435-0400 c20023| 2019-07-25T18:25:07.435-0400 I NETWORK [conn22] received client metadata from 127.0.0.1:49507 conn22: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.436-0400 2019-07-25T18:25:07.436-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for configsvr_failover_repro-configRS is configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.437-0400 ShardingTest configsvr_failover_repro :
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.437-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.437-0400 "config" : "configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.437-0400 "shards" : [
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.437-0400 connection to configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.437-0400 ]
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.438-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.455-0400 2019-07-25T18:25:07.455-0400 I - [js] shell: started program (sh2749): /Users/jason.zhang/mongodb/mongo/mongos -v --port 20024 --configdb configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023 --bind_ip 0.0.0.0 --setParameter enableTestCommands=1 --setParameter disableLogicalSessionCacheRefresh=true --setParameter logComponentVerbosity={"transaction":3}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.539-0400 c20023| 2019-07-25T18:25:07.539-0400 I REPL [replexec-2] Member Jasons-MacBook-Pro.local:20022 is now in state SECONDARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.545-0400 s20024| 2019-07-25T18:25:07.545-0400 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.571-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [main]
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.571-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [main] ** NOTE: This is a development version (4.3.0-703-g917d338) of MongoDB.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.571-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [main] ** Not recommended for production.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.571-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [main]
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.571-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [main] ** WARNING: Access control is not enabled for the database.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.571-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [main] ** Read and write access to data and configuration is unrestricted.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.571-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [main]
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.571-0400 s20024| 2019-07-25T18:25:07.571-0400 I SHARDING [mongosMain] mongos version v4.3.0-703-g917d338
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.572-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [mongosMain] db version v4.3.0-703-g917d338
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.572-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [mongosMain] git version: 917d338c4bf52dc8dce2c0e585a676385e81ed1c
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.572-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [mongosMain] allocator: system
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.572-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [mongosMain] modules: enterprise ninja
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.572-0400 s20024| 2019-07-25T18:25:07.571-0400 I CONTROL [mongosMain] build environment:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.572-0400 s20024| 2019-07-25T18:25:07.572-0400 I CONTROL [mongosMain] distarch: x86_64
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.572-0400 s20024| 2019-07-25T18:25:07.572-0400 I CONTROL [mongosMain] target_arch: x86_64
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.572-0400 s20024| 2019-07-25T18:25:07.572-0400 I CONTROL [mongosMain] options: { net: { bindIp: "0.0.0.0", port: 20024 }, setParameter: { disableLogicalSessionCacheRefresh: "true", enableTestCommands: "1", logComponentVerbosity: "{"transaction":3}" }, sharding: { configDB: "configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023" }, systemLog: { verbosity: 1 } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.572-0400 s20024| 2019-07-25T18:25:07.572-0400 D1 NETWORK [mongosMain] fd limit hard:9223372036854775807 soft:10240 max conn: 8192
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.574-0400 s20024| 2019-07-25T18:25:07.574-0400 D1 EXECUTOR [Sharding-Fixed-0] starting thread in pool Sharding-Fixed
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.576-0400 s20024| 2019-07-25T18:25:07.576-0400 D1 NETWORK [mongosMain] Starting up task executor for monitoring replica sets in response to request to monitor set: configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.576-0400 s20024| 2019-07-25T18:25:07.576-0400 I NETWORK [mongosMain] Starting new replica set monitor for configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.576-0400 s20024| 2019-07-25T18:25:07.576-0400 D1 NETWORK [mongosMain] Next replica set scan scheduled for 2019-07-25T18:25:37.576-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.577-0400 s20024| 2019-07-25T18:25:07.576-0400 D1 NETWORK [mongosMain] Started targeter for configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.577-0400 s20024| 2019-07-25T18:25:07.577-0400 D1 SHARDING [mongosMain] Starting up task executor for periodic reloading of ShardRegistry
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.577-0400 s20024| 2019-07-25T18:25:07.577-0400 D1 SHARDING [ShardRegistryUpdater] Reloading shardRegistry
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.577-0400 s20024| 2019-07-25T18:25:07.577-0400 I SHARDING [thread1] creating distributed lock ping thread for process Jasons-MacBook-Pro.local:20024:1564093507:3564609982540738235 (sleeping for 30000ms)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.577-0400 s20024| 2019-07-25T18:25:07.577-0400 D1 NETWORK [mongosMain] Next replica set scan scheduled for 2019-07-25T18:25:08.077-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.578-0400 s20024| 2019-07-25T18:25:07.578-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.578-0400 s20024| 2019-07-25T18:25:07.578-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.578-0400 s20024| 2019-07-25T18:25:07.578-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.579-0400 c20022| 2019-07-25T18:25:07.579-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49509 #23 (5 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.580-0400 c20023| 2019-07-25T18:25:07.580-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49510 #23 (5 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.580-0400 c20022| 2019-07-25T18:25:07.580-0400 I NETWORK [conn23] received client metadata from 127.0.0.1:49509 conn23: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.580-0400 c20021| 2019-07-25T18:25:07.580-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49511 #35 (9 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.581-0400 c20023| 2019-07-25T18:25:07.580-0400 I NETWORK [conn23] received client metadata from 127.0.0.1:49510 conn23: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.581-0400 c20021| 2019-07-25T18:25:07.581-0400 I NETWORK [conn35] received client metadata from 127.0.0.1:49511 conn35: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.587-0400 s20024| 2019-07-25T18:25:07.587-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for configsvr_failover_repro-configRS is configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.587-0400 s20024| 2019-07-25T18:25:07.587-0400 I SHARDING [Sharding-Fixed-0] Updating sharding state with confirmed set configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.587-0400 s20024| 2019-07-25T18:25:07.587-0400 D1 NETWORK [Sharding-Fixed-0] Started targeter for configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.588-0400 s20024| 2019-07-25T18:25:07.587-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:37.587-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.588-0400 s20024| 2019-07-25T18:25:07.587-0400 D1 TRACKING [mongosMain] Cmd: NotSet, TrackingId: 5d3a2c433e6b567cf0908dbc
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.588-0400 s20024| 2019-07-25T18:25:07.587-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:37.587-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.588-0400 s20024| 2019-07-25T18:25:07.587-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 10ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.588-0400 s20024| 2019-07-25T18:25:07.587-0400 D1 TRACKING [replSetDistLockPinger] Cmd: NotSet, TrackingId: 5d3a2c433e6b567cf0908dbe
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.588-0400 s20024| 2019-07-25T18:25:07.587-0400 D1 TRACKING [monitoring-keys-for-HMAC] Cmd: NotSet, TrackingId: 5d3a2c433e6b567cf0908dc0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.588-0400 s20024| 2019-07-25T18:25:07.587-0400 D1 TRACKING [shard-registry-reload] Cmd: NotSet, TrackingId: 5d3a2c433e6b567cf0908dc2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.590-0400 c20023| 2019-07-25T18:25:07.589-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49512 #24 (6 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.591-0400 c20023| 2019-07-25T18:25:07.590-0400 I NETWORK [conn24] received client metadata from 127.0.0.1:49512 conn24: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.591-0400 c20022| 2019-07-25T18:25:07.590-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49513 #24 (6 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.592-0400 c20022| 2019-07-25T18:25:07.591-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49514 #25 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.592-0400 c20022| 2019-07-25T18:25:07.591-0400 I NETWORK [conn24] received client metadata from 127.0.0.1:49513 conn24: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.592-0400 c20021| 2019-07-25T18:25:07.591-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49515 #36 (10 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.592-0400 c20022| 2019-07-25T18:25:07.592-0400 I NETWORK [conn25] received client metadata from 127.0.0.1:49514 conn25: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.593-0400 c20021| 2019-07-25T18:25:07.592-0400 I NETWORK [conn36] received client metadata from 127.0.0.1:49515 conn36: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.622-0400 c20021| 2019-07-25T18:25:07.622-0400 I REPL [replexec-1] Member Jasons-MacBook-Pro.local:20022 is now in state SECONDARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:07.622-0400 c20021| 2019-07-25T18:25:07.622-0400 I REPL [replexec-0] Member Jasons-MacBook-Pro.local:20023 is now in state SECONDARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:08.046-0400 c20023| 2019-07-25T18:25:08.046-0400 I STORAGE [replexec-1] Triggering the first stable checkpoint. Initial Data: Timestamp(1564093501, 13) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1564093501, 13)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:08.048-0400 c20023| 2019-07-25T18:25:08.048-0400 I COMMAND [conn24] command config.version command: find { find: "version", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(0, 0), t: -1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d3a2c433e6b567cf0908dbd'), operName: "", parentOperId: "5d3a2c433e6b567cf0908dbc" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(0, 0), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $configServerState: { opTime: { ts: Timestamp(0, 0), t: -1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:636 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 454ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:08.049-0400 s20024| 2019-07-25T18:25:08.048-0400 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1564093501, 13), t: 1 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:08.070-0400 c20022| 2019-07-25T18:25:08.069-0400 I STORAGE [replexec-0] Triggering the first stable checkpoint. Initial Data: Timestamp(1564093501, 13) PrevStable: Timestamp(0, 0) CurrStable: Timestamp(1564093501, 13)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:08.070-0400 c20022| 2019-07-25T18:25:08.070-0400 I SHARDING [conn25] Marking collection config.shards as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:08.071-0400 c20022| 2019-07-25T18:25:08.071-0400 I COMMAND [conn25] command config.shards command: find { find: "shards", readConcern: { level: "majority", afterOpTime: { ts: Timestamp(0, 0), t: -1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d3a2c433e6b567cf0908dc3'), operName: "", parentOperId: "5d3a2c433e6b567cf0908dc2" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(0, 0), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $configServerState: { opTime: { ts: Timestamp(0, 0), t: -1 } }, $db: "config" } planSummary: COLLSCAN keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:549 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 477ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:08.071-0400 c20022| 2019-07-25T18:25:08.071-0400 I COMMAND [conn24] command admin.system.keys command: find { find: "system.keys", filter: { purpose: "HMAC", expiresAt: { $gt: Timestamp(0, 0) } }, sort: { expiresAt: 1 }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(0, 0), t: -1 } }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d3a2c433e6b567cf0908dc1'), operName: "", parentOperId: "5d3a2c433e6b567cf0908dc0" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(0, 0), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $configServerState: { opTime: { ts: Timestamp(0, 0), t: -1 } }, $db: "admin" } planSummary: COLLSCAN keysExamined:0 docsExamined:2 hasSortStage:1 cursorExhausted:1 numYields:0 nreturned:2 queryHash:6DC32749 planCacheKey:6DC32749 reslen:729 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 477ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:08.072-0400 s20024| 2019-07-25T18:25:08.072-0400 D1 SHARDING [shard-registry-reload] found 0 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1564093501, 13), t: 1 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.044-0400 c20023| 2019-07-25T18:25:09.043-0400 I REPL [rsBackgroundSync] sync source candidate: Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.048-0400 c20023| 2019-07-25T18:25:09.047-0400 I REPL [rsBackgroundSync] Changed sync source from empty to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.050-0400 c20023| 2019-07-25T18:25:09.049-0400 I REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on Jasons-MacBook-Pro.local:20021 starting at filter: { ts: { $gte: Timestamp(1564093501, 13) } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.058-0400 c20023| 2019-07-25T18:25:09.058-0400 I SHARDING [repl-writer-worker-2] Marking collection config.lockpings as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.061-0400 c20023| 2019-07-25T18:25:09.060-0400 I CONNPOOL [RS] Connecting to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.062-0400 c20021| 2019-07-25T18:25:09.062-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49517 #37 (11 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.062-0400 c20021| 2019-07-25T18:25:09.062-0400 I NETWORK [conn37] received client metadata from 127.0.0.1:49517 conn37: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.068-0400 c20022| 2019-07-25T18:25:09.068-0400 I REPL [rsBackgroundSync] sync source candidate: Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.071-0400 c20021| 2019-07-25T18:25:09.070-0400 I COMMAND [conn36] command config.lockpings command: findAndModify { findAndModify: "lockpings", query: { _id: "Jasons-MacBook-Pro.local:20024:1564093507:3564609982540738235" }, update: { $set: { ping: new Date(1564093507577) } }, upsert: true, writeConcern: { w: "majority", wtimeout: 15000 }, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d3a2c433e6b567cf0908dbf'), operName: "", parentOperId: "5d3a2c433e6b567cf0908dbe" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(0, 0), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $configServerState: { opTime: { ts: Timestamp(0, 0), t: -1 } }, $db: "config" } planSummary: IDHACK keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 keysInserted:2 numYields:0 reslen:635 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 1476ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.071-0400 c20022| 2019-07-25T18:25:09.071-0400 I REPL [rsBackgroundSync] Changed sync source from empty to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.071-0400 s20024| 2019-07-25T18:25:09.071-0400 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.073-0400 c20022| 2019-07-25T18:25:09.073-0400 I REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on Jasons-MacBook-Pro.local:20021 starting at filter: { ts: { $gte: Timestamp(1564093501, 13) } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.078-0400 c20022| 2019-07-25T18:25:09.078-0400 I SHARDING [repl-writer-worker-5] Marking collection config.lockpings as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.080-0400 c20022| 2019-07-25T18:25:09.080-0400 I CONNPOOL [RS] Connecting to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.081-0400 c20021| 2019-07-25T18:25:09.081-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49518 #38 (12 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:09.081-0400 c20021| 2019-07-25T18:25:09.081-0400 I NETWORK [conn38] received client metadata from 127.0.0.1:49518 conn38: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:10.062-0400 s20024| 2019-07-25T18:25:10.062-0400 W FTDC [mongosMain] FTDC is disabled because neither '--logpath' nor set parameter 'diagnosticDataCollectionDirectoryPath' are specified.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:10.063-0400 s20024| 2019-07-25T18:25:10.062-0400 I FTDC [mongosMain] Initializing full-time diagnostic data capture with directory ''
[js_test:configsvr_failover_repro] 2019-07-25T18:25:10.063-0400 s20024| 2019-07-25T18:25:10.062-0400 D1 ACCESS [mongosMain] There were no users to pin, not starting tracker thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:10.063-0400 s20024| 2019-07-25T18:25:10.063-0400 D1 COMMAND [ClusterCursorCleanupJob] BackgroundJob starting: ClusterCursorCleanupJob
[js_test:configsvr_failover_repro] 2019-07-25T18:25:10.064-0400 s20024| 2019-07-25T18:25:10.064-0400 D1 COMMAND [UserCacheInvalidatorThread] BackgroundJob starting: UserCacheInvalidatorThread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:10.064-0400 s20024| 2019-07-25T18:25:10.064-0400 D1 COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
[js_test:configsvr_failover_repro] 2019-07-25T18:25:10.065-0400 s20024| 2019-07-25T18:25:10.064-0400 I NETWORK [mongosMain] Listening on /tmp/mongodb-20024.sock
[js_test:configsvr_failover_repro] 2019-07-25T18:25:10.065-0400 s20024| 2019-07-25T18:25:10.064-0400 I NETWORK [mongosMain] Listening on 0.0.0.0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:10.065-0400 s20024| 2019-07-25T18:25:10.064-0400 I NETWORK [mongosMain] waiting for connections on port 20024
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.183-0400 s20024| 2019-07-25T18:25:11.182-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49516 #8 (1 connection now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.184-0400 s20024| 2019-07-25T18:25:11.183-0400 I NETWORK [conn8] received client metadata from 127.0.0.1:49516 conn8: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.184-0400 s20024| 2019-07-25T18:25:11.184-0400 I COMMAND [conn8] command admin.$cmd appName: "MongoDB Shell" command: isMaster { isMaster: 1, hostInfo: "Jasons-MacBook-Pro.local:27017", client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }, $db: "admin" } numYields:0 reslen:389 protocol:op_query 0ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.206-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.206-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.206-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.206-0400 [jsTest] New session started with sessionID: { "id" : UUID("806e05d3-7000-445f-820e-060646a14c47") } and options: { "causalConsistency" : false }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.207-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.207-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.207-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.213-0400 s20024| 2019-07-25T18:25:11.213-0400 D1 TRACKING [conn8] Cmd: balancerStop, TrackingId: 5d3a2c473e6b567cf0908dca
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.215-0400 c20021| 2019-07-25T18:25:11.214-0400 I STORAGE [conn36] createCollection: config.settings with generated UUID: 5b34234e-9f2a-4dc0-a6ec-4c3c2c8d8c4a and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.284-0400 c20021| 2019-07-25T18:25:11.284-0400 I INDEX [conn36] index build: done building index _id_ on ns config.settings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.303-0400 c20022| 2019-07-25T18:25:11.303-0400 I STORAGE [repl-writer-worker-0] createCollection: config.settings with provided UUID: 5b34234e-9f2a-4dc0-a6ec-4c3c2c8d8c4a and options: { uuid: UUID("5b34234e-9f2a-4dc0-a6ec-4c3c2c8d8c4a") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.303-0400 c20023| 2019-07-25T18:25:11.303-0400 I STORAGE [repl-writer-worker-7] createCollection: config.settings with provided UUID: 5b34234e-9f2a-4dc0-a6ec-4c3c2c8d8c4a and options: { uuid: UUID("5b34234e-9f2a-4dc0-a6ec-4c3c2c8d8c4a") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.361-0400 c20023| 2019-07-25T18:25:11.361-0400 I INDEX [repl-writer-worker-7] index build: done building index _id_ on ns config.settings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.366-0400 c20023| 2019-07-25T18:25:11.366-0400 I SHARDING [repl-writer-worker-15] Marking collection config.settings as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.394-0400 c20022| 2019-07-25T18:25:11.393-0400 I INDEX [repl-writer-worker-0] index build: done building index _id_ on ns config.settings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.395-0400 c20021| 2019-07-25T18:25:11.394-0400 I COMMAND [conn36] command config.$cmd appName: "MongoDB Shell" command: update { update: "settings", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "balancer" }, u: { $set: { stopped: true, mode: "off" } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, $db: "config" } numYields:0 reslen:406 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { w: 3 } }, Database: { acquireCount: { w: 3 } }, Collection: { acquireCount: { r: 2, w: 2, W: 1 } }, Mutex: { acquireCount: { r: 5 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 179ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.402-0400 c20022| 2019-07-25T18:25:11.401-0400 I SHARDING [repl-writer-worker-2] Marking collection config.settings as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.449-0400 c20021| 2019-07-25T18:25:11.449-0400 I SHARDING [conn36] ShouldAutoSplit changing from 1 to 0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.449-0400 c20021| 2019-07-25T18:25:11.449-0400 I STORAGE [conn36] createCollection: config.actionlog with generated UUID: bb55f986-13a3-489d-bc35-a22a32b44c10 and options: { capped: true, size: 20971520 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.517-0400 c20021| 2019-07-25T18:25:11.517-0400 I INDEX [conn36] index build: done building index _id_ on ns config.actionlog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.534-0400 c20022| 2019-07-25T18:25:11.534-0400 I STORAGE [repl-writer-worker-6] createCollection: config.actionlog with provided UUID: bb55f986-13a3-489d-bc35-a22a32b44c10 and options: { uuid: UUID("bb55f986-13a3-489d-bc35-a22a32b44c10"), capped: true, size: 20971520 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.534-0400 c20023| 2019-07-25T18:25:11.534-0400 I STORAGE [repl-writer-worker-8] createCollection: config.actionlog with provided UUID: bb55f986-13a3-489d-bc35-a22a32b44c10 and options: { uuid: UUID("bb55f986-13a3-489d-bc35-a22a32b44c10"), capped: true, size: 20971520 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.618-0400 c20022| 2019-07-25T18:25:11.618-0400 I INDEX [repl-writer-worker-6] index build: done building index _id_ on ns config.actionlog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.630-0400 c20021| 2019-07-25T18:25:11.629-0400 I COMMAND [conn36] command config.actionlog appName: "MongoDB Shell" command: create { create: "actionlog", capped: true, size: 20971520, writeConcern: { w: "majority", wtimeout: 60000 }, $db: "config" } numYields:0 reslen:272 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, W: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 5 } storage:{} protocol:op_msg 180ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.630-0400 c20021| 2019-07-25T18:25:11.629-0400 I SHARDING [conn36] about to log metadata event into actionlog: { _id: "Jasons-MacBook-Pro.local:20021-2019-07-25T18:25:11.629-0400-5d3a2c479cfa09cae7a7987d", server: "Jasons-MacBook-Pro.local:20021", shard: "config", clientAddr: "127.0.0.1:49515", time: new Date(1564093511629), what: "balancer.stop", ns: "", details: {} }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.631-0400 c20021| 2019-07-25T18:25:11.630-0400 I SHARDING [conn36] Marking collection config.actionlog as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.639-0400 c20023| 2019-07-25T18:25:11.639-0400 I INDEX [repl-writer-worker-8] index build: done building index _id_ on ns config.actionlog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.639-0400 c20023| 2019-07-25T18:25:11.639-0400 I REPL [repl-writer-worker-8] applied op: command { op: "c", ns: "config.$cmd", ui: UUID("bb55f986-13a3-489d-bc35-a22a32b44c10"), o: { create: "actionlog", capped: true, size: 20971520, idIndex: { v: 2, key: { _id: 1 }, name: "_id_", ns: "config.actionlog" } }, ts: Timestamp(1564093511, 4), t: 1, v: 2, wall: new Date(1564093511518) }, took 107ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.653-0400 c20022| 2019-07-25T18:25:11.653-0400 I SHARDING [repl-writer-worker-8] Marking collection config.actionlog as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.653-0400 c20023| 2019-07-25T18:25:11.653-0400 I SHARDING [repl-writer-worker-10] Marking collection config.actionlog as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.671-0400 c20021| 2019-07-25T18:25:11.671-0400 I COMMAND [conn36] command admin.$cmd appName: "MongoDB Shell" command: _configsvrBalancerStop { _configsvrBalancerStop: 1, maxTimeMS: 60010, lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, tracking_info: { operId: ObjectId('5d3a2c473e6b567cf0908dcb'), operName: "", parentOperId: "5d3a2c473e6b567cf0908dca" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1564093507, 1), signature: { hash: BinData(0, F2B88FE213609A1BDE7A0BD327B046397AE8C910), keyId: 6717730434681143305 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" }, mongos: { host: "Jasons-MacBook-Pro.local:20024", client: "127.0.0.1:49516", version: "4.3.0-703-g917d338" } }, $configServerState: { opTime: { ts: Timestamp(1564093507, 1), t: 1 } }, $db: "admin" } numYields:0 reslen:505 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 8 } }, ReplicationStateTransition: { acquireCount: { w: 14 } }, Global: { acquireCount: { r: 8, w: 6 } }, Database: { acquireCount: { r: 6, w: 6 } }, Collection: { acquireCount: { r: 9, w: 4, W: 2 } }, Metadata: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 17 } } } flowControl:{ acquireCount: 6 } storage:{} protocol:op_msg 456ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.672-0400 s20024| 2019-07-25T18:25:11.672-0400 I COMMAND [conn8] command admin.$cmd appName: "MongoDB Shell" command: balancerStop { balancerStop: 1.0, maxTimeMS: 60000.0, lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47") }, $db: "admin" } numYields:0 reslen:163 protocol:op_msg 459ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.682-0400 s20024| 2019-07-25T18:25:11.682-0400 I COMMAND [conn8] command test.$cmd appName: "MongoDB Shell" command: isMaster { ismaster: 1.0, lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47") }, $clusterTime: { clusterTime: Timestamp(1564093511, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test" } numYields:0 reslen:374 protocol:op_msg 0ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.699-0400 s20024| 2019-07-25T18:25:11.699-0400 D1 SH_REFR [conn8] Refreshing cached database entry for config; current cached database info is {}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.699-0400 s20024| 2019-07-25T18:25:11.699-0400 D1 EXECUTOR [ConfigServerCatalogCacheLoader-0] starting thread in pool ConfigServerCatalogCacheLoader
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.701-0400 s20024| 2019-07-25T18:25:11.701-0400 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database config from version {} to version { uuid: UUID("94ee1b8b-e44c-425c-87fe-df9cf3a929b8"), lastMod: 0 } took 1 ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.701-0400 s20024| 2019-07-25T18:25:11.701-0400 D1 TRACKING [conn8] Cmd: update, TrackingId: 5d3a2c473e6b567cf0908dcd
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.705-0400 s20024| 2019-07-25T18:25:11.705-0400 I COMMAND [conn8] command config.settings appName: "MongoDB Shell" command: update { update: "settings", ordered: true, writeConcern: { w: "majority", wtimeout: 30000.0 }, lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47") }, $clusterTime: { clusterTime: Timestamp(1564093511, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "config" } nShards:0 nMatched:1 nModified:0 numYields:0 reslen:245 protocol:op_msg 6ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.710-0400 ShardingTest configsvr_failover_repro going to add shard : configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.719-0400 s20024| 2019-07-25T18:25:11.718-0400 D1 TRACKING [conn8] Cmd: addShard, TrackingId: 5d3a2c473e6b567cf0908dd0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.722-0400 c20021| 2019-07-25T18:25:11.722-0400 I NETWORK [conn36] Starting new replica set monitor for configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.723-0400 c20021| 2019-07-25T18:25:11.722-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.724-0400 d20020| 2019-07-25T18:25:11.724-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49519 #3 (3 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.724-0400 d20020| 2019-07-25T18:25:11.724-0400 I NETWORK [conn3] received client metadata from 127.0.0.1:49519 conn3: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.725-0400 c20021| 2019-07-25T18:25:11.725-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for configsvr_failover_repro-rs0 is configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.727-0400 d20020| 2019-07-25T18:25:11.726-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49520 #4 (4 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.727-0400 d20020| 2019-07-25T18:25:11.727-0400 I NETWORK [conn4] received client metadata from 127.0.0.1:49520 conn4: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.730-0400 d20020| 2019-07-25T18:25:11.730-0400 I COMMAND [conn4] CMD: drop config.system.sessions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.732-0400 d20020| 2019-07-25T18:25:11.732-0400 I SHARDING [conn4] initializing sharding state with: { shardName: "configsvr_failover_repro-rs0", clusterId: ObjectId('5d3a2c3d9cfa09cae7a7976e'), configsvrConnectionString: "configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023" }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.736-0400 d20020| 2019-07-25T18:25:11.736-0400 I NETWORK [conn4] Starting new replica set monitor for configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.737-0400 d20020| 2019-07-25T18:25:11.736-0400 I SHARDING [thread5] creating distributed lock ping thread for process Jasons-MacBook-Pro.local:20020:1564093511:1668631399416606862 (sleeping for 30000ms)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.737-0400 d20020| 2019-07-25T18:25:11.737-0400 I TXN [conn4] Incoming coordinateCommit requests are now enabled
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.737-0400 d20020| 2019-07-25T18:25:11.737-0400 I SHARDING [conn4] Finished initializing sharding components for primary node.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.738-0400 d20020| 2019-07-25T18:25:11.737-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.739-0400 d20020| 2019-07-25T18:25:11.738-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.739-0400 d20020| 2019-07-25T18:25:11.739-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.743-0400 c20023| 2019-07-25T18:25:11.742-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49521 #26 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.744-0400 c20023| 2019-07-25T18:25:11.744-0400 I NETWORK [conn26] received client metadata from 127.0.0.1:49521 conn26: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.744-0400 c20022| 2019-07-25T18:25:11.744-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49522 #27 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.744-0400 c20021| 2019-07-25T18:25:11.744-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49523 #41 (13 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.745-0400 c20022| 2019-07-25T18:25:11.744-0400 I NETWORK [conn27] received client metadata from 127.0.0.1:49522 conn27: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.745-0400 c20021| 2019-07-25T18:25:11.745-0400 I NETWORK [conn41] received client metadata from 127.0.0.1:49523 conn41: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.747-0400 d20020| 2019-07-25T18:25:11.747-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for configsvr_failover_repro-configRS is configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.747-0400 d20020| 2019-07-25T18:25:11.747-0400 I SHARDING [Sharding-Fixed-0] Updating config server with confirmed set configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.749-0400 c20023| 2019-07-25T18:25:11.749-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49524 #27 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.750-0400 c20023| 2019-07-25T18:25:11.749-0400 I NETWORK [conn27] received client metadata from 127.0.0.1:49524 conn27: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.750-0400 c20023| 2019-07-25T18:25:11.749-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49525 #28 (9 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.750-0400 c20023| 2019-07-25T18:25:11.750-0400 I NETWORK [conn28] received client metadata from 127.0.0.1:49525 conn28: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.751-0400 c20021| 2019-07-25T18:25:11.750-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49526 #42 (14 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.752-0400 c20021| 2019-07-25T18:25:11.751-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49527 #43 (15 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.752-0400 c20021| 2019-07-25T18:25:11.751-0400 I NETWORK [conn42] received client metadata from 127.0.0.1:49526 conn42: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.752-0400 c20022| 2019-07-25T18:25:11.752-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49528 #28 (9 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.752-0400 c20021| 2019-07-25T18:25:11.752-0400 I NETWORK [conn43] received client metadata from 127.0.0.1:49527 conn43: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.753-0400 c20022| 2019-07-25T18:25:11.753-0400 I NETWORK [conn28] received client metadata from 127.0.0.1:49528 conn28: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.753-0400 c20023| 2019-07-25T18:25:11.753-0400 I SHARDING [conn27] Marking collection config.shards as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.755-0400 d20020| 2019-07-25T18:25:11.754-0400 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(0, 0), t: -1 }, now { ts: Timestamp(1564093511, 5), t: 1 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.759-0400 d20020| 2019-07-25T18:25:11.759-0400 I SHARDING [PeriodicBalancerConfigRefresher] ShouldAutoSplit changing from 1 to 0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.761-0400 d20020| 2019-07-25T18:25:11.761-0400 I COMMAND [conn4] setting featureCompatibilityVersion to upgrading to 4.2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.761-0400 d20020| 2019-07-25T18:25:11.761-0400 I NETWORK [conn4] Skip closing connection for connection # 4
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.761-0400 d20020| 2019-07-25T18:25:11.761-0400 I NETWORK [conn4] Skip closing connection for connection # 3
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.761-0400 d20020| 2019-07-25T18:25:11.761-0400 I NETWORK [conn4] Skip closing connection for connection # 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.761-0400 d20020| 2019-07-25T18:25:11.761-0400 I NETWORK [conn4] Skip closing connection for connection # 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.777-0400 d20020| 2019-07-25T18:25:11.777-0400 I COMMAND [conn4] setting featureCompatibilityVersion to 4.2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.777-0400 d20020| 2019-07-25T18:25:11.777-0400 I NETWORK [conn4] Skip closing connection for connection # 4
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.777-0400 d20020| 2019-07-25T18:25:11.777-0400 I NETWORK [conn4] Skip closing connection for connection # 3
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.777-0400 d20020| 2019-07-25T18:25:11.777-0400 I NETWORK [conn4] Skip closing connection for connection # 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.778-0400 d20020| 2019-07-25T18:25:11.777-0400 I NETWORK [conn4] Skip closing connection for connection # 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.787-0400 d20020| 2019-07-25T18:25:11.787-0400 W SHARDING [replSetDistLockPinger] pinging failed for distributed lock pinger :: caused by :: LockStateChangeFailed: findAndModify query predicate didn't match any lock document
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.796-0400 c20021| 2019-07-25T18:25:11.796-0400 I SHARDING [conn36] going to insert new entry for shard into config.shards: { _id: "configsvr_failover_repro-rs0", host: "configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020", state: 1 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.800-0400 c20021| 2019-07-25T18:25:11.800-0400 I STORAGE [conn36] createCollection: config.changelog with generated UUID: b00cc6c9-f585-4cd8-9cf1-362a83e2e9df and options: { capped: true, size: 209715200 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.866-0400 c20021| 2019-07-25T18:25:11.865-0400 I INDEX [conn36] index build: done building index _id_ on ns config.changelog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.882-0400 c20022| 2019-07-25T18:25:11.882-0400 I STORAGE [repl-writer-worker-5] createCollection: config.changelog with provided UUID: b00cc6c9-f585-4cd8-9cf1-362a83e2e9df and options: { uuid: UUID("b00cc6c9-f585-4cd8-9cf1-362a83e2e9df"), capped: true, size: 209715200 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.882-0400 c20023| 2019-07-25T18:25:11.882-0400 I STORAGE [repl-writer-worker-2] createCollection: config.changelog with provided UUID: b00cc6c9-f585-4cd8-9cf1-362a83e2e9df and options: { uuid: UUID("b00cc6c9-f585-4cd8-9cf1-362a83e2e9df"), capped: true, size: 209715200 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.952-0400 c20023| 2019-07-25T18:25:11.952-0400 I INDEX [repl-writer-worker-2] index build: done building index _id_ on ns config.changelog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.962-0400 c20022| 2019-07-25T18:25:11.962-0400 I INDEX [repl-writer-worker-5] index build: done building index _id_ on ns config.changelog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.974-0400 c20021| 2019-07-25T18:25:11.973-0400 I COMMAND [conn36] command config.changelog appName: "MongoDB Shell" command: create { create: "changelog", capped: true, size: 209715200, writeConcern: { w: "majority", wtimeout: 60000 }, $db: "config" } numYields:0 reslen:272 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { r: 1, W: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 172ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.974-0400 c20021| 2019-07-25T18:25:11.973-0400 I SHARDING [conn36] about to log metadata event into changelog: { _id: "Jasons-MacBook-Pro.local:20021-2019-07-25T18:25:11.973-0400-5d3a2c479cfa09cae7a798ab", server: "Jasons-MacBook-Pro.local:20021", shard: "config", clientAddr: "127.0.0.1:49515", time: new Date(1564093511973), what: "addShard", ns: "", details: { name: "configsvr_failover_repro-rs0", host: "configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.974-0400 c20021| 2019-07-25T18:25:11.974-0400 I SHARDING [conn36] Marking collection config.changelog as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.996-0400 c20023| 2019-07-25T18:25:11.996-0400 I SHARDING [repl-writer-worker-7] Marking collection config.changelog as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:11.996-0400 c20022| 2019-07-25T18:25:11.996-0400 I SHARDING [repl-writer-worker-0] Marking collection config.changelog as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.016-0400 c20021| 2019-07-25T18:25:12.016-0400 I COMMAND [conn36] command admin.$cmd appName: "MongoDB Shell" command: _configsvrAddShard { _configsvrAddShard: "configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020", writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, tracking_info: { operId: ObjectId('5d3a2c473e6b567cf0908dd1'), operName: "", parentOperId: "5d3a2c473e6b567cf0908dd0" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1564093511, 5), signature: { hash: BinData(0, E637D02C02A4B24DFEF7FB7BCABAF1FF5587424C), keyId: 6717730434681143305 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" }, mongos: { host: "Jasons-MacBook-Pro.local:20024", client: "127.0.0.1:49516", version: "4.3.0-703-g917d338" } }, $configServerState: { opTime: { ts: Timestamp(1564093511, 5), t: 1 } }, $db: "admin" } numYields:0 reslen:550 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 7 } }, Global: { acquireCount: { r: 4, w: 3 } }, Database: { acquireCount: { r: 3, w: 3 } }, Collection: { acquireCount: { r: 4, w: 2, W: 1 } }, Metadata: { acquireCount: { W: 1 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 295ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.025-0400 s20024| 2019-07-25T18:25:12.025-0400 D1 SHARDING [conn8] found 1 shards listed on config server(s) with lastVisibleOpTime: { ts: Timestamp(1564093511, 9), t: 1 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.025-0400 s20024| 2019-07-25T18:25:12.025-0400 I NETWORK [conn8] Starting new replica set monitor for configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.026-0400 s20024| 2019-07-25T18:25:12.025-0400 D1 NETWORK [conn8] Next replica set scan scheduled for 2019-07-25T18:25:42.025-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.026-0400 s20024| 2019-07-25T18:25:12.026-0400 D1 NETWORK [conn8] Started targeter for configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.026-0400 s20024| 2019-07-25T18:25:12.026-0400 I COMMAND [conn8] command admin.configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020 appName: "MongoDB Shell" command: addShard { addshard: "configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020", lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47") }, $clusterTime: { clusterTime: Timestamp(1564093511, 5), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:208 protocol:op_msg 308ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.074-0400 Waiting for op with OpTime { "ts" : Timestamp(1564093511, 9), "t" : NumberLong(1) } to be committed on all secondaries
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.194-0400 c20021| 2019-07-25T18:25:12.194-0400 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database config from version {} to version { uuid: UUID("176e2f90-e72e-4d25-b87b-b9b7c511f946"), lastMod: 0 } took 0 ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.221-0400 c20023| 2019-07-25T18:25:12.221-0400 I SHARDING [repl-writer-worker-15] Marking collection config.locks as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.222-0400 c20022| 2019-07-25T18:25:12.221-0400 I SHARDING [repl-writer-worker-2] Marking collection config.locks as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.236-0400 c20021| 2019-07-25T18:25:12.236-0400 I SHARDING [conn1] distributed lock 'config' acquired for 'shardCollection', ts : 5d3a2c489cfa09cae7a798c1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.276-0400 c20021| 2019-07-25T18:25:12.275-0400 I SHARDING [conn1] distributed lock 'config.system.sessions' acquired for 'shardCollection', ts : 5d3a2c489cfa09cae7a798c8
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.276-0400 c20021| 2019-07-25T18:25:12.275-0400 I SHARDING [conn1] Marking collection config.system.sessions as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.278-0400 d20020| 2019-07-25T18:25:12.278-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49529 #13 (5 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.278-0400 d20020| 2019-07-25T18:25:12.278-0400 I NETWORK [conn13] received client metadata from 127.0.0.1:49529 conn13: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.283-0400 c20023| 2019-07-25T18:25:12.282-0400 I SHARDING [conn27] Marking collection config.collections as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.284-0400 c20023| 2019-07-25T18:25:12.284-0400 I SHARDING [conn27] Marking collection config.chunks as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.290-0400 d20020| 2019-07-25T18:25:12.290-0400 I STORAGE [conn13] createCollection: config.system.sessions with provided UUID: 169b4ca7-9147-452d-b8b8-2496698e9e94 and options: { uuid: UUID("169b4ca7-9147-452d-b8b8-2496698e9e94") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.344-0400 d20020| 2019-07-25T18:25:12.343-0400 I INDEX [conn13] index build: done building index _id_ on ns config.system.sessions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.345-0400 d20020| 2019-07-25T18:25:12.345-0400 I INDEX [conn13] Registering index build: c4203de1-dbef-4206-8a84-d88f232c0c8c
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.345-0400 d20020| 2019-07-25T18:25:12.345-0400 I INDEX [conn13] Waiting for index build to complete: c4203de1-dbef-4206-8a84-d88f232c0c8c
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.345-0400 d20020| 2019-07-25T18:25:12.345-0400 I INDEX [conn13] Index build completed: c4203de1-dbef-4206-8a84-d88f232c0c8c
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.355-0400 c20022| 2019-07-25T18:25:12.355-0400 I SHARDING [conn28] Marking collection config.tags as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.363-0400 d20020| 2019-07-25T18:25:12.362-0400 I NETWORK [conn13] Starting new replica set monitor for configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.363-0400 d20020| 2019-07-25T18:25:12.363-0400 I SHARDING [conn13] CMD: shardcollection: { _shardsvrShardCollection: "config.system.sessions", key: { _id: 1 }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("9a454f36-648a-450a-9eb9-b85dea2fbf25"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1564093512, 2), signature: { hash: BinData(0, C07E7B3E9E212D93ECC84C3FA4D924E7CF9F21C0), keyId: 6717730434681143305 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }, $configServerState: { opTime: { ts: Timestamp(1564093512, 2), t: 1 } }, $db: "admin" }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.365-0400 d20020| 2019-07-25T18:25:12.365-0400 I SHARDING [conn13] about to log metadata event into changelog: { _id: "Jasons-MacBook-Pro.local:20020-2019-07-25T18:25:12.365-0400-5d3a2c482d71daf4c4e5f042", server: "Jasons-MacBook-Pro.local:20020", shard: "configsvr_failover_repro-rs0", clientAddr: "127.0.0.1:49529", time: new Date(1564093512365), what: "shardCollection.start", ns: "config.system.sessions", details: { shardKey: { _id: 1 }, collection: "config.system.sessions", uuid: UUID("169b4ca7-9147-452d-b8b8-2496698e9e94"), empty: true, fromMapReduce: false, primary: "configsvr_failover_repro-rs0:configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020", numChunks: 1 } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.402-0400 c20021| 2019-07-25T18:25:12.402-0400 D4 TXN [conn42] New transaction started with txnNumber: 0 on session with lsid d336e4d9-9bca-49b0-9f9b-9940876200da
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.422-0400 c20022| 2019-07-25T18:25:12.421-0400 I SHARDING [repl-writer-worker-8] Marking collection config.chunks as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.463-0400 c20021| 2019-07-25T18:25:12.463-0400 I STORAGE [conn42] createCollection: config.collections with generated UUID: c91bd94c-858a-4b52-a9a4-ed241d46bb6b and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.515-0400 c20021| 2019-07-25T18:25:12.515-0400 I INDEX [conn42] index build: done building index _id_ on ns config.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.533-0400 c20023| 2019-07-25T18:25:12.533-0400 I STORAGE [repl-writer-worker-13] createCollection: config.collections with provided UUID: c91bd94c-858a-4b52-a9a4-ed241d46bb6b and options: { uuid: UUID("c91bd94c-858a-4b52-a9a4-ed241d46bb6b") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.534-0400 c20022| 2019-07-25T18:25:12.534-0400 I STORAGE [repl-writer-worker-11] createCollection: config.collections with provided UUID: c91bd94c-858a-4b52-a9a4-ed241d46bb6b and options: { uuid: UUID("c91bd94c-858a-4b52-a9a4-ed241d46bb6b") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.610-0400 c20023| 2019-07-25T18:25:12.610-0400 I INDEX [repl-writer-worker-13] index build: done building index _id_ on ns config.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.619-0400 c20022| 2019-07-25T18:25:12.619-0400 I INDEX [repl-writer-worker-11] index build: done building index _id_ on ns config.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.623-0400 c20022| 2019-07-25T18:25:12.623-0400 I SHARDING [repl-writer-worker-1] Marking collection config.collections as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.641-0400 c20021| 2019-07-25T18:25:12.640-0400 I COMMAND [conn42] command config.$cmd appName: "MongoDB Shell" command: update { update: "collections", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "config.system.sessions" }, u: { _id: "config.system.sessions", lastmodEpoch: ObjectId('5d3a2c482d71daf4c4e5f043'), lastmod: new Date(4294967296), dropped: false, key: { _id: 1 }, unique: false, uuid: UUID("169b4ca7-9147-452d-b8b8-2496698e9e94") }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000, lsid: { id: UUID("9a454f36-648a-450a-9eb9-b85dea2fbf25"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1564093512, 5), signature: { hash: BinData(0, C07E7B3E9E212D93ECC84C3FA4D924E7CF9F21C0), keyId: 6717730434681143305 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }, $configServerState: { opTime: { ts: Timestamp(1564093512, 5), t: 1 } }, $db: "config" } numYields:0 reslen:653 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { w: 3 } }, Database: { acquireCount: { w: 3 } }, Collection: { acquireCount: { r: 2, w: 2, W: 1 } }, Mutex: { acquireCount: { r: 5 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 177ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.643-0400 d20020| 2019-07-25T18:25:12.642-0400 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database config from version {} to version { uuid: UUID("c6cdf3bd-1769-4fc3-b5b3-18b6dcc53e36"), lastMod: 0 } took 0 ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.643-0400 d20020| 2019-07-25T18:25:12.643-0400 I STORAGE [ShardServerCatalogCacheLoader-0] createCollection: config.cache.databases with generated UUID: 245bb089-d70e-4fce-aac4-3f2cb83a2712 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.651-0400 d20020| 2019-07-25T18:25:12.651-0400 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.collections with generated UUID: aaeeb449-06a2-456c-88ad-c9f05fde12f6 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.651-0400 d20020| 2019-07-25T18:25:12.651-0400 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection config.system.sessions to version 1|0||5d3a2c482d71daf4c4e5f043 took 6 ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.652-0400 d20020| 2019-07-25T18:25:12.652-0400 I SHARDING [conn13] Marking collection config.system.sessions as collection version: 1|0||5d3a2c482d71daf4c4e5f043, shard version: 1|0||5d3a2c482d71daf4c4e5f043
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.652-0400 d20020| 2019-07-25T18:25:12.652-0400 I SHARDING [conn13] Created 1 chunk(s) for: config.system.sessions, producing collection version 1|0||5d3a2c482d71daf4c4e5f043
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.652-0400 d20020| 2019-07-25T18:25:12.652-0400 I SHARDING [conn13] about to log metadata event into changelog: { _id: "Jasons-MacBook-Pro.local:20020-2019-07-25T18:25:12.652-0400-5d3a2c482d71daf4c4e5f049", server: "Jasons-MacBook-Pro.local:20020", shard: "configsvr_failover_repro-rs0", clientAddr: "127.0.0.1:49529", time: new Date(1564093512652), what: "shardCollection.end", ns: "config.system.sessions", details: { version: "1|0||5d3a2c482d71daf4c4e5f043" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.700-0400 d20020| 2019-07-25T18:25:12.700-0400 I COMMAND [conn13] command admin.$cmd appName: "MongoDB Shell" command: _shardsvrShardCollection { _shardsvrShardCollection: "config.system.sessions", key: { _id: 1 }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("9a454f36-648a-450a-9eb9-b85dea2fbf25"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1564093512, 2), signature: { hash: BinData(0, C07E7B3E9E212D93ECC84C3FA4D924E7CF9F21C0), keyId: 6717730434681143305 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }, $configServerState: { opTime: { ts: Timestamp(1564093512, 2), t: 1 } }, $db: "admin" } numYields:0 reslen:416 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 9 } }, ReplicationStateTransition: { acquireCount: { w: 13 } }, Global: { acquireCount: { r: 6, w: 7 } }, Database: { acquireCount: { r: 6, w: 7 } }, Collection: { acquireCount: { r: 11, w: 3, W: 4 } }, Mutex: { acquireCount: { r: 13, W: 4 } } } flowControl:{ acquireCount: 5 } storage:{} protocol:op_msg 419ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.704-0400 c20021| 2019-07-25T18:25:12.704-0400 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection config.system.sessions to version 1|0||5d3a2c482d71daf4c4e5f043 took 3 ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.744-0400 c20021| 2019-07-25T18:25:12.743-0400 I SHARDING [conn1] distributed lock with ts: 5d3a2c489cfa09cae7a798c8' unlocked.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.777-0400 d20020| 2019-07-25T18:25:12.777-0400 I INDEX [ShardServerCatalogCacheLoader-0] index build: done building index _id_ on ns config.cache.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.778-0400 d20020| 2019-07-25T18:25:12.778-0400 I WRITE [ShardServerCatalogCacheLoader-0] update config.cache.databases command: { q: { _id: "config" }, u: { $set: { _id: "config", primary: "config", partitioned: true, version: { uuid: UUID("c6cdf3bd-1769-4fc3-b5b3-18b6dcc53e36"), lastMod: 0 } } }, multi: false, upsert: true } planSummary: IDHACK keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 keysInserted:1 numYields:0 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { w: 3 } }, Database: { acquireCount: { w: 3 } }, Collection: { acquireCount: { r: 2, w: 2, W: 1 } }, Mutex: { acquireCount: { r: 5 } } } flowControl:{ acquireCount: 3 } storage:{} 135ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.779-0400 d20020| 2019-07-25T18:25:12.779-0400 I COMMAND [ShardServerCatalogCacheLoader-0] command config.$cmd command: update { update: "cache.databases", bypassDocumentValidation: false, ordered: true, $db: "config" } numYields:0 reslen:465 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { w: 3 } }, Collection: { acquireCount: { r: 2, w: 2, W: 1 } }, Mutex: { acquireCount: { r: 5 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 135ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.796-0400 c20021| 2019-07-25T18:25:12.795-0400 I SHARDING [conn1] distributed lock with ts: 5d3a2c489cfa09cae7a798c1' unlocked.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.797-0400 c20021| 2019-07-25T18:25:12.796-0400 I COMMAND [conn1] command admin.$cmd appName: "MongoDB Shell" command: _configsvrShardCollection { _configsvrShardCollection: "config.system.sessions", key: { _id: 1 }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, $db: "admin" } numYields:0 reslen:355 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 5 } }, Global: { acquireCount: { r: 1, w: 4 } }, Database: { acquireCount: { r: 1, w: 4 } }, Collection: { acquireCount: { r: 2, w: 4 } }, Mutex: { acquireCount: { r: 9, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 600ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.797-0400 c20021| 2019-07-25T18:25:12.797-0400 I CONNPOOL [TaskExecutorPool-0] Connecting to Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.799-0400 d20020| 2019-07-25T18:25:12.799-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49530 #14 (6 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.799-0400 d20020| 2019-07-25T18:25:12.799-0400 I NETWORK [conn14] received client metadata from 127.0.0.1:49530 conn14: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.804-0400 d20020| 2019-07-25T18:25:12.804-0400 I INDEX [conn14] Registering index build: 055b3041-ffd6-4243-9d9f-74d10237bceb
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.822-0400 d20020| 2019-07-25T18:25:12.822-0400 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.825-0400 d20020| 2019-07-25T18:25:12.825-0400 I WRITE [ShardServerCatalogCacheLoader-1] update config.cache.collections command: { q: { _id: "config.system.sessions" }, u: { $set: { _id: "config.system.sessions", uuid: UUID("169b4ca7-9147-452d-b8b8-2496698e9e94"), epoch: ObjectId('5d3a2c482d71daf4c4e5f043'), key: { _id: 1 }, unique: false, refreshing: true } }, multi: false, upsert: true } planSummary: IDHACK keysExamined:0 docsExamined:0 nMatched:0 nModified:0 upsert:1 keysInserted:1 numYields:0 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { w: 3 } }, Database: { acquireCount: { w: 3 } }, Collection: { acquireCount: { r: 2, w: 2, W: 1 } }, Mutex: { acquireCount: { r: 5 } } } flowControl:{ acquireCount: 3 } storage:{} 174ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.825-0400 d20020| 2019-07-25T18:25:12.825-0400 I COMMAND [ShardServerCatalogCacheLoader-1] command config.$cmd command: update { update: "cache.collections", bypassDocumentValidation: false, ordered: true, $db: "config" } numYields:0 reslen:481 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 4 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { w: 3 } }, Collection: { acquireCount: { r: 2, w: 2, W: 1 } }, Mutex: { acquireCount: { r: 5 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 174ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.826-0400 d20020| 2019-07-25T18:25:12.826-0400 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.config.system.sessions with provided UUID: af7277a7-a3e5-406d-ac28-4ce6862bbaa0 and options: { uuid: UUID("af7277a7-a3e5-406d-ac28-4ce6862bbaa0") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.903-0400 d20020| 2019-07-25T18:25:12.903-0400 I INDEX [conn14] index build: starting on config.system.sessions properties: { v: 2, key: { lastUse: 1 }, name: "lsidTTLIndex", ns: "config.system.sessions", expireAfterSeconds: 1800 } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.903-0400 d20020| 2019-07-25T18:25:12.903-0400 I INDEX [conn14] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.904-0400 d20020| 2019-07-25T18:25:12.903-0400 I STORAGE [conn14] Index build initialized: 055b3041-ffd6-4243-9d9f-74d10237bceb: config.system.sessions (169b4ca7-9147-452d-b8b8-2496698e9e94 ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.904-0400 d20020| 2019-07-25T18:25:12.904-0400 I INDEX [conn14] Waiting for index build to complete: 055b3041-ffd6-4243-9d9f-74d10237bceb
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.904-0400 d20020| 2019-07-25T18:25:12.904-0400 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.936-0400 d20020| 2019-07-25T18:25:12.936-0400 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.config.system.sessions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.937-0400 d20020| 2019-07-25T18:25:12.937-0400 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: cf7f86b3-14d8-4dc4-be3a-c8351eeac4de
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.938-0400 d20020| 2019-07-25T18:25:12.938-0400 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:12.977-0400 d20020| 2019-07-25T18:25:12.977-0400 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lsidTTLIndex on ns config.system.sessions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.005-0400 d20020| 2019-07-25T18:25:13.005-0400 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: 055b3041-ffd6-4243-9d9f-74d10237bceb: config.system.sessions ( 169b4ca7-9147-452d-b8b8-2496698e9e94 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.007-0400 d20020| 2019-07-25T18:25:13.007-0400 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.config.system.sessions properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1", ns: "config.cache.chunks.config.system.sessions" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.007-0400 d20020| 2019-07-25T18:25:13.007-0400 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.008-0400 d20020| 2019-07-25T18:25:13.008-0400 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: cf7f86b3-14d8-4dc4-be3a-c8351eeac4de: config.cache.chunks.config.system.sessions (af7277a7-a3e5-406d-ac28-4ce6862bbaa0 ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.008-0400 d20020| 2019-07-25T18:25:13.008-0400 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: cf7f86b3-14d8-4dc4-be3a-c8351eeac4de
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.009-0400 d20020| 2019-07-25T18:25:13.008-0400 I INDEX [conn14] Index build completed: 055b3041-ffd6-4243-9d9f-74d10237bceb
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.009-0400 d20020| 2019-07-25T18:25:13.009-0400 I COMMAND [conn14] command config.system.sessions appName: "MongoDB Shell" command: createIndexes { createIndexes: "system.sessions", indexes: [ { key: { lastUse: 1 }, name: "lsidTTLIndex", expireAfterSeconds: 1800 } ], allowImplicitCollectionCreation: false, lsid: { id: UUID("9a454f36-648a-450a-9eb9-b85dea2fbf25"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $readPreference: { mode: "nearest" }, $clusterTime: { clusterTime: Timestamp(1564093512, 12), signature: { hash: BinData(0, C07E7B3E9E212D93ECC84C3FA4D924E7CF9F21C0), keyId: 6717730434681143305 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }, $configServerState: { opTime: { ts: Timestamp(1564093512, 12), t: 1 } }, $db: "config" } numYields:0 reslen:427 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 2 } storage:{} protocol:op_msg 206ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.010-0400 d20020| 2019-07-25T18:25:13.010-0400 I INDEX [IndexBuildsCoordinatorMongod-0] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.014-0400 d20020| 2019-07-25T18:25:13.014-0400 I INDEX [IndexBuildsCoordinatorMongod-0] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.023-0400 d20020| 2019-07-25T18:25:13.023-0400 I INDEX [IndexBuildsCoordinatorMongod-0] index build: done building index lastmod_1 on ns config.cache.chunks.config.system.sessions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.024-0400 d20020| 2019-07-25T18:25:13.023-0400 I STORAGE [IndexBuildsCoordinatorMongod-0] Index build completed successfully: cf7f86b3-14d8-4dc4-be3a-c8351eeac4de: config.cache.chunks.config.system.sessions ( af7277a7-a3e5-406d-ac28-4ce6862bbaa0 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.024-0400 d20020| 2019-07-25T18:25:13.024-0400 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: cf7f86b3-14d8-4dc4-be3a-c8351eeac4de
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.024-0400 d20020| 2019-07-25T18:25:13.024-0400 I COMMAND [ShardServerCatalogCacheLoader-1] command config.cache.chunks.config.system.sessions command: createIndexes { createIndexes: "cache.chunks.config.system.sessions", indexes: [ { name: "lastmod_1", key: { lastmod: 1 } } ], $db: "config" } numYields:0 reslen:427 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { r: 2, w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 5 } storage:{} protocol:op_msg 198ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.045-0400 c20021| 2019-07-25T18:25:13.045-0400 I COMMAND [conn1] command admin.$cmd appName: "MongoDB Shell" command: refreshLogicalSessionCacheNow { refreshLogicalSessionCacheNow: 1.0, lsid: { id: UUID("9a454f36-648a-450a-9eb9-b85dea2fbf25") }, $clusterTime: { clusterTime: Timestamp(1564093511, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:272 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 5 } }, ReplicationStateTransition: { acquireCount: { w: 7 } }, Global: { acquireCount: { r: 3, w: 4 } }, Database: { acquireCount: { r: 2, w: 4 } }, Collection: { acquireCount: { r: 4, w: 4 } }, Mutex: { acquireCount: { r: 10, W: 1 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 850ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.089-0400 s20024| 2019-07-25T18:25:13.089-0400 D1 TRACKING [conn8] Cmd: enableSharding, TrackingId: 5d3a2c493e6b567cf0908dd3
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.131-0400 c20021| 2019-07-25T18:25:13.131-0400 I SHARDING [conn36] distributed lock 'test' acquired for 'enableSharding', ts : 5d3a2c499cfa09cae7a7991e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.132-0400 c20021| 2019-07-25T18:25:13.132-0400 I SHARDING [conn36] Marking collection config.databases as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.137-0400 c20021| 2019-07-25T18:25:13.137-0400 I SHARDING [conn36] Registering new database { _id: "test", primary: "configsvr_failover_repro-rs0", partitioned: false, version: { uuid: UUID("0cce76c1-c047-48b7-a8d7-e77d0ed13425"), lastMod: 1 } } in sharding catalog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.138-0400 c20021| 2019-07-25T18:25:13.138-0400 I STORAGE [conn36] createCollection: config.databases with generated UUID: 01649270-e43f-438a-ad71-36bd6eeffe6b and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.196-0400 c20021| 2019-07-25T18:25:13.196-0400 I INDEX [conn36] index build: done building index _id_ on ns config.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.215-0400 c20023| 2019-07-25T18:25:13.215-0400 I STORAGE [repl-writer-worker-9] createCollection: config.databases with provided UUID: 01649270-e43f-438a-ad71-36bd6eeffe6b and options: { uuid: UUID("01649270-e43f-438a-ad71-36bd6eeffe6b") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.215-0400 c20022| 2019-07-25T18:25:13.215-0400 I STORAGE [repl-writer-worker-7] createCollection: config.databases with provided UUID: 01649270-e43f-438a-ad71-36bd6eeffe6b and options: { uuid: UUID("01649270-e43f-438a-ad71-36bd6eeffe6b") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.264-0400 c20022| 2019-07-25T18:25:13.264-0400 I CONNPOOL [RS] Ending connection to host Jasons-MacBook-Pro.local:20021 due to bad connection status: CallbackCanceled: Callback was canceled; 2 connections to that host remain open
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.265-0400 c20021| 2019-07-25T18:25:13.264-0400 I NETWORK [conn10] end connection 127.0.0.1:49481 (14 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.266-0400 c20023| 2019-07-25T18:25:13.266-0400 I INDEX [repl-writer-worker-9] index build: done building index _id_ on ns config.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.273-0400 c20023| 2019-07-25T18:25:13.273-0400 I SHARDING [repl-writer-worker-11] Marking collection config.databases as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.287-0400 c20022| 2019-07-25T18:25:13.287-0400 I INDEX [repl-writer-worker-7] index build: done building index _id_ on ns config.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.292-0400 c20022| 2019-07-25T18:25:13.292-0400 I SHARDING [repl-writer-worker-8] Marking collection config.databases as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.298-0400 c20021| 2019-07-25T18:25:13.298-0400 I COMMAND [conn36] command config.databases appName: "MongoDB Shell" command: insert { insert: "databases", bypassDocumentValidation: false, ordered: true, documents: [ { _id: "test", primary: "configsvr_failover_repro-rs0", partitioned: false, version: { uuid: UUID("0cce76c1-c047-48b7-a8d7-e77d0ed13425"), lastMod: 1 } } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, $db: "config" } ninserted:1 keysInserted:1 numYields:0 reslen:339 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { w: 3 } }, Database: { acquireCount: { w: 3 } }, Collection: { acquireCount: { r: 2, w: 2, W: 1 } }, Mutex: { acquireCount: { r: 5 } } } flowControl:{ acquireCount: 4 } storage:{} protocol:op_msg 160ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.304-0400 d20020| 2019-07-25T18:25:13.303-0400 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test from version {} to version { uuid: UUID("0cce76c1-c047-48b7-a8d7-e77d0ed13425"), lastMod: 1 } took 3 ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.307-0400 d20020| 2019-07-25T18:25:13.307-0400 I SHARDING [conn13] setting this node's cached database version for test to { uuid: UUID("0cce76c1-c047-48b7-a8d7-e77d0ed13425"), lastMod: 1 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.308-0400 c20021| 2019-07-25T18:25:13.308-0400 I SHARDING [conn36] Enabling sharding for database [test] in config db
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.371-0400 c20021| 2019-07-25T18:25:13.370-0400 I SHARDING [conn36] distributed lock with ts: 5d3a2c499cfa09cae7a7991e' unlocked.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.372-0400 c20021| 2019-07-25T18:25:13.371-0400 I COMMAND [conn36] command admin.$cmd appName: "MongoDB Shell" command: _configsvrEnableSharding { _configsvrEnableSharding: "test", writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, tracking_info: { operId: ObjectId('5d3a2c493e6b567cf0908dd4'), operName: "", parentOperId: "5d3a2c493e6b567cf0908dd3" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1564093511, 9), signature: { hash: BinData(0, E637D02C02A4B24DFEF7FB7BCABAF1FF5587424C), keyId: 6717730434681143305 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" }, mongos: { host: "Jasons-MacBook-Pro.local:20024", client: "127.0.0.1:49516", version: "4.3.0-703-g917d338" } }, $configServerState: { opTime: { ts: Timestamp(1564093511, 9), t: 1 } }, $db: "admin" } numYields:0 reslen:505 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 7 } }, ReplicationStateTransition: { acquireCount: { w: 8 } }, Global: { acquireCount: { r: 2, w: 6 } }, Database: { acquireCount: { r: 1, w: 6 } }, Collection: { acquireCount: { r: 4, w: 5, W: 1 } }, Mutex: { acquireCount: { r: 12 } } } flowControl:{ acquireCount: 6 } storage:{} protocol:op_msg 280ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.373-0400 s20024| 2019-07-25T18:25:13.372-0400 I COMMAND [conn8] command test appName: "MongoDB Shell" command: enableSharding { enableSharding: "test", lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47") }, $clusterTime: { clusterTime: Timestamp(1564093511, 9), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:163 protocol:op_msg 283ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.385-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.385-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.385-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.385-0400 [jsTest] New session started with sessionID: { "id" : UUID("6a42a806-c274-46d1-b124-fa04d86e12f7") } and options: { "causalConsistency" : false }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.385-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.385-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.386-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.394-0400 d20020| 2019-07-25T18:25:13.394-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49531 #15 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.394-0400 d20020| 2019-07-25T18:25:13.394-0400 I NETWORK [conn15] received client metadata from 127.0.0.1:49531 conn15: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.395-0400 d20020| 2019-07-25T18:25:13.395-0400 W COMMAND [conn15] failpoint: hangBeforeShardingCollection set to: { mode: 1, data: {} }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.410-0400 2019-07-25T18:25:13.409-0400 I - [js] shell: started program (sh2750): /Users/jason.zhang/mongodb/mongo/mongo --host localhost --port 20024 --eval TestData = {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.410-0400 "minPort" : 20020,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.410-0400 "maxPort" : 20249,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.411-0400 "failIfUnterminatedProcesses" : true,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.411-0400 "isMainTest" : true,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.411-0400 "numTestClients" : 1,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.411-0400 "enableMajorityReadConcern" : true,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.411-0400 "noJournal" : false,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.412-0400 "serviceExecutor" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.412-0400 "storageEngine" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.412-0400 "storageEngineCacheSizeGB" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.412-0400 "testName" : "configsvr_failover_repro",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.412-0400 "transportLayer" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.412-0400 "wiredTigerCollectionConfigString" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.412-0400 "wiredTigerEngineConfigString" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 "wiredTigerIndexConfigString" : "",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 "setParameters" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 "logComponentVerbosity" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 "replication" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 "rollback" : 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 "transaction" : 4
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 "setParametersMongos" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 "logComponentVerbosity" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 "transaction" : 3
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.413-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.414-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.414-0400 "transactionLifetimeLimitSeconds" : 86400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.414-0400 };db = db.getSiblingDB('test');{
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.414-0400 var shardCollectionCmd = db.getMongo().adminCommand({shardCollection: 'test.foo', key: {_id : 1}});
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.414-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.422-0400 c20023| 2019-07-25T18:25:13.422-0400 I CONNPOOL [RS] Ending connection to host Jasons-MacBook-Pro.local:20021 due to bad connection status: CallbackCanceled: Callback was canceled; 2 connections to that host remain open
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.422-0400 c20021| 2019-07-25T18:25:13.422-0400 I NETWORK [conn13] end connection 127.0.0.1:49484 (13 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:13.603-0400 sh2750| MongoDB shell version v4.3.0-703-g917d338
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.002-0400 sh2750| connecting to: mongodb://localhost:20024/?compressors=disabled&gssapiServiceName=mongodb
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.006-0400 s20024| 2019-07-25T18:25:14.005-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49532 #9 (2 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.006-0400 s20024| 2019-07-25T18:25:14.006-0400 I NETWORK [conn9] received client metadata from 127.0.0.1:49532 conn9: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.007-0400 s20024| 2019-07-25T18:25:14.006-0400 I COMMAND [conn9] command admin.$cmd appName: "MongoDB Shell" command: isMaster { isMaster: 1, hostInfo: "Jasons-MacBook-Pro.local:27017", client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }, $db: "admin" } numYields:0 reslen:389 protocol:op_query 0ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.007-0400 s20024| 2019-07-25T18:25:14.007-0400 I COMMAND [conn9] command admin.$cmd appName: "MongoDB Shell" command: whatsmyuri { whatsmyuri: 1, $db: "admin" } numYields:0 reslen:188 protocol:op_msg 0ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.032-0400 sh2750| Implicit session: session { "id" : UUID("3c6e8552-60a8-42a4-bc64-789d45b7044d") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.045-0400 s20024| 2019-07-25T18:25:14.045-0400 I COMMAND [conn9] command admin.$cmd appName: "MongoDB Shell" command: buildInfo { buildinfo: 1.0, $db: "admin" } numYields:0 reslen:2098 protocol:op_msg 0ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.046-0400 sh2750| MongoDB server version: 4.3.0-703-g917d338
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.058-0400 s20024| 2019-07-25T18:25:14.058-0400 D1 TRACKING [conn9] Cmd: shardCollection, TrackingId: 5d3a2c4a3e6b567cf0908dd8
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.090-0400 c20021| 2019-07-25T18:25:14.089-0400 I SHARDING [conn36] distributed lock 'test' acquired for 'shardCollection', ts : 5d3a2c4a9cfa09cae7a7994e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.121-0400 c20021| 2019-07-25T18:25:14.120-0400 I SHARDING [conn36] distributed lock 'test.foo' acquired for 'shardCollection', ts : 5d3a2c4a9cfa09cae7a79956
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.130-0400 d20020| 2019-07-25T18:25:14.130-0400 I STORAGE [conn13] createCollection: test.foo with provided UUID: 3df0dab2-7ed2-4ca2-81ad-7d5ddc465fb1 and options: { uuid: UUID("3df0dab2-7ed2-4ca2-81ad-7d5ddc465fb1") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.176-0400 d20020| 2019-07-25T18:25:14.176-0400 I INDEX [conn13] index build: done building index _id_ on ns test.foo
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.177-0400 d20020| 2019-07-25T18:25:14.177-0400 I INDEX [conn13] Registering index build: caeb0489-4f67-472b-8d15-d0ed01371012
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.177-0400 d20020| 2019-07-25T18:25:14.177-0400 I INDEX [conn13] Waiting for index build to complete: caeb0489-4f67-472b-8d15-d0ed01371012
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.177-0400 d20020| 2019-07-25T18:25:14.177-0400 I INDEX [conn13] Index build completed: caeb0489-4f67-472b-8d15-d0ed01371012
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.192-0400 c20023| 2019-07-25T18:25:14.192-0400 I SHARDING [conn27] Marking collection config.tags as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.199-0400 d20020| 2019-07-25T18:25:14.199-0400 I SHARDING [conn13] CMD: shardcollection: { _shardsvrShardCollection: "test.foo", key: { _id: 1.0 }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("3c6e8552-60a8-42a4-bc64-789d45b7044d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1564093514, 2), signature: { hash: BinData(0, 87D82898CCFA8515001F6FFAA14ECE7CCA627262), keyId: 6717730434681143305 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" }, mongos: { host: "Jasons-MacBook-Pro.local:20024", client: "127.0.0.1:49532", version: "4.3.0-703-g917d338" } }, $configServerState: { opTime: { ts: Timestamp(1564093514, 2), t: 1 } }, $db: "admin" }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.199-0400 d20020| 2019-07-25T18:25:14.199-0400 I SHARDING [conn13] about to log metadata event into changelog: { _id: "Jasons-MacBook-Pro.local:20020-2019-07-25T18:25:14.199-0400-5d3a2c4a2d71daf4c4e5f05d", server: "Jasons-MacBook-Pro.local:20020", shard: "configsvr_failover_repro-rs0", clientAddr: "127.0.0.1:49529", time: new Date(1564093514199), what: "shardCollection.start", ns: "test.foo", details: { shardKey: { _id: 1.0 }, collection: "test.foo", uuid: UUID("3df0dab2-7ed2-4ca2-81ad-7d5ddc465fb1"), empty: true, fromMapReduce: false, primary: "configsvr_failover_repro-rs0:configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020", numChunks: 1 } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.232-0400 c20021| 2019-07-25T18:25:14.232-0400 D4 TXN [conn42] New transaction started with txnNumber: 0 on session with lsid 2dac31cd-4db1-4bab-8931-f16948cf573a
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.306-0400 d20020| 2019-07-25T18:25:14.306-0400 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test.foo to version 1|0||5d3a2c4a2d71daf4c4e5f05e took 6 ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.306-0400 d20020| 2019-07-25T18:25:14.306-0400 I SHARDING [conn13] Marking collection test.foo as collection version: 1|0||5d3a2c4a2d71daf4c4e5f05e, shard version: 1|0||5d3a2c4a2d71daf4c4e5f05e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.307-0400 d20020| 2019-07-25T18:25:14.307-0400 I SHARDING [conn13] Hit hangBeforeShardingCollection failpoint
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.307-0400 d20020| 2019-07-25T18:25:14.307-0400 I STORAGE [ShardServerCatalogCacheLoader-1] createCollection: config.cache.chunks.test.foo with provided UUID: 1f577be9-e257-4d51-9a39-fb861aada357 and options: { uuid: UUID("1f577be9-e257-4d51-9a39-fb861aada357") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.355-0400 d20020| 2019-07-25T18:25:14.355-0400 I INDEX [ShardServerCatalogCacheLoader-1] index build: done building index _id_ on ns config.cache.chunks.test.foo
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.356-0400 d20020| 2019-07-25T18:25:14.356-0400 I INDEX [ShardServerCatalogCacheLoader-1] Registering index build: bec886f4-bd9f-46bb-8d50-40ca50b4e562
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.404-0400 d20020| 2019-07-25T18:25:14.404-0400 I INDEX [ShardServerCatalogCacheLoader-1] index build: starting on config.cache.chunks.test.foo properties: { v: 2, key: { lastmod: 1 }, name: "lastmod_1", ns: "config.cache.chunks.test.foo" } using method: Hybrid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.404-0400 d20020| 2019-07-25T18:25:14.404-0400 I INDEX [ShardServerCatalogCacheLoader-1] build may temporarily use up to 500 megabytes of RAM
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.404-0400 d20020| 2019-07-25T18:25:14.404-0400 I STORAGE [ShardServerCatalogCacheLoader-1] Index build initialized: bec886f4-bd9f-46bb-8d50-40ca50b4e562: config.cache.chunks.test.foo (1f577be9-e257-4d51-9a39-fb861aada357 ): indexes: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.404-0400 d20020| 2019-07-25T18:25:14.404-0400 I INDEX [ShardServerCatalogCacheLoader-1] Waiting for index build to complete: bec886f4-bd9f-46bb-8d50-40ca50b4e562
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.405-0400 d20020| 2019-07-25T18:25:14.405-0400 I INDEX [IndexBuildsCoordinatorMongod-1] index build: collection scan done. scanned 0 total records in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.405-0400 d20020| 2019-07-25T18:25:14.405-0400 I INDEX [IndexBuildsCoordinatorMongod-1] index build: inserted 0 keys from external sorter into index in 0 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.429-0400 d20020| 2019-07-25T18:25:14.428-0400 I INDEX [IndexBuildsCoordinatorMongod-1] index build: done building index lastmod_1 on ns config.cache.chunks.test.foo
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.429-0400 d20020| 2019-07-25T18:25:14.429-0400 I STORAGE [IndexBuildsCoordinatorMongod-1] Index build completed successfully: bec886f4-bd9f-46bb-8d50-40ca50b4e562: config.cache.chunks.test.foo ( 1f577be9-e257-4d51-9a39-fb861aada357 ). Index specs built: 1. Indexes in catalog before build: 1. Indexes in catalog after build: 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.429-0400 d20020| 2019-07-25T18:25:14.429-0400 I INDEX [ShardServerCatalogCacheLoader-1] Index build completed: bec886f4-bd9f-46bb-8d50-40ca50b4e562
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.430-0400 d20020| 2019-07-25T18:25:14.429-0400 I COMMAND [ShardServerCatalogCacheLoader-1] command config.cache.chunks.test.foo command: createIndexes { createIndexes: "cache.chunks.test.foo", indexes: [ { name: "lastmod_1", key: { lastmod: 1 } } ], $db: "config" } numYields:0 reslen:427 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 4 } }, Global: { acquireCount: { r: 1, w: 3 } }, Database: { acquireCount: { r: 1, w: 3 } }, Collection: { acquireCount: { r: 2, w: 1, W: 2 } }, Mutex: { acquireCount: { r: 4 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 122ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.537-0400 c20021| 2019-07-25T18:25:14.536-0400 I COMMAND [conn1] Attempting to step down in response to replSetStepDown command
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.537-0400 c20021| 2019-07-25T18:25:14.537-0400 I REPL [RstlKillOpThread] Starting to kill user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.537-0400 c20021| 2019-07-25T18:25:14.537-0400 I REPL [RstlKillOpThread] Stopped killing user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.537-0400 c20021| 2019-07-25T18:25:14.537-0400 I CONNPOOL [ShardRegistry] Ending connection to host Jasons-MacBook-Pro.local:20020 due to bad connection status: CallbackCanceled: Callback was canceled; 0 connections to that host remain open
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.537-0400 c20021| 2019-07-25T18:25:14.537-0400 I REPL [conn1] Stepping down from primary, stats: { userOpsKilled: 1, userOpsRunning: 2 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.537-0400 c20021| 2019-07-25T18:25:14.537-0400 I REPL [conn1] transition to SECONDARY from PRIMARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.537-0400 c20021| 2019-07-25T18:25:14.537-0400 I CONNPOOL [ShardRegistry] Connecting to Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.538-0400 c20021| 2019-07-25T18:25:14.537-0400 I SHARDING [Balancer] CSRS balancer is now stopped
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.538-0400 c20021| 2019-07-25T18:25:14.538-0400 I COMMAND [conn1] replSetStepDown command completed
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.538-0400 c20021| 2019-07-25T18:25:14.538-0400 W COMMAND [conn36] Unable to gather storage statistics for a slow operation due to lock aquire timeout
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.538-0400 c20021| 2019-07-25T18:25:14.538-0400 I COMMAND [conn36] command admin.$cmd appName: "MongoDB Shell" command: _configsvrShardCollection { _configsvrShardCollection: "test.foo", key: { _id: 1.0 }, unique: false, numInitialChunks: 0, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("3c6e8552-60a8-42a4-bc64-789d45b7044d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, tracking_info: { operId: ObjectId('5d3a2c4a3e6b567cf0908dd9'), operName: "", parentOperId: "5d3a2c4a3e6b567cf0908dd8" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1564093513, 13), signature: { hash: BinData(0, 6BA18161DAF14C2AB20C8B3ADBC9D7C12F681330), keyId: 6717730434681143305 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" }, mongos: { host: "Jasons-MacBook-Pro.local:20024", client: "127.0.0.1:49532", version: "4.3.0-703-g917d338" } }, $configServerState: { opTime: { ts: Timestamp(1564093513, 13), t: 1 } }, $db: "admin" } numYields:0 ok:0 errMsg:"operation was interrupted" errName:InterruptedDueToReplStateChange errCode:11602 reslen:717 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 2 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { r: 1, w: 2 } }, Database: { acquireCount: { r: 1, w: 2 } }, Collection: { acquireCount: { r: 1, w: 2 } }, Mutex: { acquireCount: { r: 5, W: 1 } } } flowControl:{ acquireCount: 2 } protocol:op_msg 479ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.539-0400 s20024| 2019-07-25T18:25:14.538-0400 I NETWORK [UpdateReplicaSetOnConfigServer] Marking host Jasons-MacBook-Pro.local:20021 as failed :: caused by :: InterruptedDueToReplStateChange: operation was interrupted
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.539-0400 s20024| 2019-07-25T18:25:14.539-0400 I COMMAND [conn9] command test.foo appName: "MongoDB Shell" command: shardCollection { shardCollection: "test.foo", key: { _id: 1.0 }, lsid: { id: UUID("3c6e8552-60a8-42a4-bc64-789d45b7044d") }, $clusterTime: { clusterTime: Timestamp(1564093513, 13), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "admin" } numYields:0 reslen:375 protocol:op_msg 481ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.540-0400 d20020| 2019-07-25T18:25:14.540-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49533 #16 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.541-0400 d20020| 2019-07-25T18:25:14.540-0400 I NETWORK [conn16] received client metadata from 127.0.0.1:49533 conn16: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.542-0400 sh2750| test
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.546-0400 s20024| 2019-07-25T18:25:14.546-0400 I COMMAND [conn9] command admin.$cmd appName: "MongoDB Shell" command: endSessions { endSessions: [ { id: UUID("3c6e8552-60a8-42a4-bc64-789d45b7044d") } ], $db: "admin" } numYields:0 reslen:163 protocol:op_msg 0ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.546-0400 s20024| 2019-07-25T18:25:14.546-0400 I NETWORK [conn9] end connection 127.0.0.1:49532 (1 connection now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.548-0400 s20024| 2019-07-25T18:25:14.548-0400 D1 NETWORK [conn8] Next replica set scan scheduled for 2019-07-25T18:25:15.048-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.551-0400 s20024| 2019-07-25T18:25:14.551-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:14.551-0400 s20024| 2019-07-25T18:25:14.551-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 2ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:15.052-0400 s20024| 2019-07-25T18:25:15.052-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:15.552-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:15.056-0400 s20024| 2019-07-25T18:25:15.055-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:15.056-0400 s20024| 2019-07-25T18:25:15.056-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:15.068-0400 c20023| 2019-07-25T18:25:15.068-0400 I REPL [replexec-0] Member Jasons-MacBook-Pro.local:20021 is now in state SECONDARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:15.086-0400 c20022| 2019-07-25T18:25:15.086-0400 I REPL [replexec-2] Member Jasons-MacBook-Pro.local:20021 is now in state SECONDARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:15.556-0400 s20024| 2019-07-25T18:25:15.556-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:16.055-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:15.559-0400 s20024| 2019-07-25T18:25:15.559-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:15.559-0400 s20024| 2019-07-25T18:25:15.559-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 491ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:16.058-0400 s20024| 2019-07-25T18:25:16.057-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:16.557-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:16.060-0400 s20024| 2019-07-25T18:25:16.060-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:16.060-0400 s20024| 2019-07-25T18:25:16.060-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:16.560-0400 s20024| 2019-07-25T18:25:16.560-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:17.059-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:16.563-0400 s20024| 2019-07-25T18:25:16.562-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:16.563-0400 s20024| 2019-07-25T18:25:16.562-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:17.062-0400 s20024| 2019-07-25T18:25:17.062-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:17.562-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:17.065-0400 s20024| 2019-07-25T18:25:17.065-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:17.065-0400 s20024| 2019-07-25T18:25:17.065-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:17.564-0400 s20024| 2019-07-25T18:25:17.564-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:18.064-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:17.567-0400 s20024| 2019-07-25T18:25:17.567-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:17.567-0400 s20024| 2019-07-25T18:25:17.567-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:18.066-0400 s20024| 2019-07-25T18:25:18.066-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:18.566-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:18.069-0400 s20024| 2019-07-25T18:25:18.069-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:18.069-0400 s20024| 2019-07-25T18:25:18.069-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 4ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:18.570-0400 s20024| 2019-07-25T18:25:18.570-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:19.070-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:18.574-0400 s20024| 2019-07-25T18:25:18.573-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:18.574-0400 s20024| 2019-07-25T18:25:18.573-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.073-0400 s20024| 2019-07-25T18:25:19.073-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:19.573-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.077-0400 s20024| 2019-07-25T18:25:19.077-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.077-0400 s20024| 2019-07-25T18:25:19.077-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.306-0400 c20023| 2019-07-25T18:25:19.305-0400 I REPL [replication-1] Choosing new sync source. Our current sync source is not primary and does not have a sync source, so we require that it is ahead of us. Current sync source: Jasons-MacBook-Pro.local:20021, my last fetched oplog optime: { ts: Timestamp(1564093514, 6), t: 1 }, latest oplog optime of sync source: { ts: Timestamp(1564093514, 6), t: 1 } (sync source does not know the primary)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.306-0400 c20023| 2019-07-25T18:25:19.305-0400 I REPL [replication-1] Canceling oplog query due to OplogQueryMetadata. We have to choose a new sync source. Current source: Jasons-MacBook-Pro.local:20021, OpTime { ts: Timestamp(1564093514, 6), t: 1 }, its sync source index:-1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.306-0400 c20022| 2019-07-25T18:25:19.305-0400 I REPL [replication-1] Choosing new sync source. Our current sync source is not primary and does not have a sync source, so we require that it is ahead of us. Current sync source: Jasons-MacBook-Pro.local:20021, my last fetched oplog optime: { ts: Timestamp(1564093514, 6), t: 1 }, latest oplog optime of sync source: { ts: Timestamp(1564093514, 6), t: 1 } (sync source does not know the primary)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.306-0400 c20022| 2019-07-25T18:25:19.305-0400 I REPL [replication-1] Canceling oplog query due to OplogQueryMetadata. We have to choose a new sync source. Current source: Jasons-MacBook-Pro.local:20021, OpTime { ts: Timestamp(1564093514, 6), t: 1 }, its sync source index:-1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.306-0400 c20023| 2019-07-25T18:25:19.305-0400 W REPL [rsBackgroundSync] Fetcher stopped querying remote oplog with error: InvalidSyncSource: sync source Jasons-MacBook-Pro.local:20021 (config version: 2; last applied optime: { ts: Timestamp(1564093514, 6), t: 1 }; sync source index: -1; primary index: -1) is no longer valid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.306-0400 c20022| 2019-07-25T18:25:19.306-0400 W REPL [rsBackgroundSync] Fetcher stopped querying remote oplog with error: InvalidSyncSource: sync source Jasons-MacBook-Pro.local:20021 (config version: 2; last applied optime: { ts: Timestamp(1564093514, 6), t: 1 }; sync source index: -1; primary index: -1) is no longer valid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.308-0400 c20023| 2019-07-25T18:25:19.308-0400 I REPL [rsBackgroundSync] Clearing sync source Jasons-MacBook-Pro.local:20021 to choose a new one.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.308-0400 c20023| 2019-07-25T18:25:19.308-0400 I REPL [rsBackgroundSync] could not find member to sync from
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.308-0400 c20022| 2019-07-25T18:25:19.308-0400 I REPL [rsBackgroundSync] Clearing sync source Jasons-MacBook-Pro.local:20021 to choose a new one.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.308-0400 c20022| 2019-07-25T18:25:19.308-0400 I REPL [rsBackgroundSync] could not find member to sync from
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.574-0400 s20024| 2019-07-25T18:25:19.573-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:20.073-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.580-0400 s20024| 2019-07-25T18:25:19.579-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:19.580-0400 s20024| 2019-07-25T18:25:19.580-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 6ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:20.076-0400 s20024| 2019-07-25T18:25:20.076-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:20.576-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:20.083-0400 s20024| 2019-07-25T18:25:20.083-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:20.083-0400 s20024| 2019-07-25T18:25:20.083-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 7ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:20.580-0400 s20024| 2019-07-25T18:25:20.579-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:21.079-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:20.586-0400 s20024| 2019-07-25T18:25:20.585-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:20.586-0400 s20024| 2019-07-25T18:25:20.586-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 6ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:21.084-0400 s20024| 2019-07-25T18:25:21.084-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:21.584-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:21.088-0400 s20024| 2019-07-25T18:25:21.088-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:21.088-0400 s20024| 2019-07-25T18:25:21.088-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 4ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:21.584-0400 s20024| 2019-07-25T18:25:21.584-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:22.084-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:21.590-0400 s20024| 2019-07-25T18:25:21.590-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:21.591-0400 s20024| 2019-07-25T18:25:21.590-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 6ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:22.088-0400 s20024| 2019-07-25T18:25:22.088-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:22.588-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:22.092-0400 s20024| 2019-07-25T18:25:22.092-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:22.092-0400 s20024| 2019-07-25T18:25:22.092-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:22.592-0400 s20024| 2019-07-25T18:25:22.592-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:23.092-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:22.596-0400 s20024| 2019-07-25T18:25:22.595-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:22.596-0400 s20024| 2019-07-25T18:25:22.596-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:23.094-0400 s20024| 2019-07-25T18:25:23.093-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:23.593-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:23.097-0400 s20024| 2019-07-25T18:25:23.097-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:23.097-0400 s20024| 2019-07-25T18:25:23.097-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 4ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:23.597-0400 s20024| 2019-07-25T18:25:23.597-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:24.097-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:23.600-0400 s20024| 2019-07-25T18:25:23.600-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:23.600-0400 s20024| 2019-07-25T18:25:23.600-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.101-0400 s20024| 2019-07-25T18:25:24.100-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:24.600-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.104-0400 s20024| 2019-07-25T18:25:24.104-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.104-0400 s20024| 2019-07-25T18:25:24.104-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.189-0400 c20021| 2019-07-25T18:25:24.189-0400 W SHARDING [replSetDistLockPinger] Failed to unlock lock with ts: 5d3a2c4a9cfa09cae7a79956 :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.190-0400 c20021| 2019-07-25T18:25:24.190-0400 W SHARDING [replSetDistLockPinger] Failed to unlock lock with ts: 5d3a2c4a9cfa09cae7a7994e :: caused by :: NotMaster: Not primary while running findAndModify command on collection config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.308-0400 c20022| 2019-07-25T18:25:24.307-0400 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to Jasons-MacBook-Pro.local:20021: InvalidSyncSource: Sync source was cleared. Was Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.308-0400 c20023| 2019-07-25T18:25:24.307-0400 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to Jasons-MacBook-Pro.local:20021: InvalidSyncSource: Sync source was cleared. Was Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.565-0400 c20022| 2019-07-25T18:25:24.565-0400 I ELECTION [replexec-3] Starting an election, since we've seen no PRIMARY in the past 10000ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.565-0400 c20022| 2019-07-25T18:25:24.565-0400 I ELECTION [replexec-3] conducting a dry run election to see if we could be elected. current term: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.566-0400 c20022| 2019-07-25T18:25:24.565-0400 I REPL [replexec-3] Scheduling remote command request for vote request: RemoteCommand 206 -- target:Jasons-MacBook-Pro.local:20021 db:admin cmd:{ replSetRequestVotes: 1, setName: "configsvr_failover_repro-configRS", dryRun: true, term: 1, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1564093514, 6), t: 1 } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.566-0400 c20022| 2019-07-25T18:25:24.566-0400 I REPL [replexec-3] Scheduling remote command request for vote request: RemoteCommand 207 -- target:Jasons-MacBook-Pro.local:20023 db:admin cmd:{ replSetRequestVotes: 1, setName: "configsvr_failover_repro-configRS", dryRun: true, term: 1, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1564093514, 6), t: 1 } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.567-0400 c20021| 2019-07-25T18:25:24.567-0400 I ELECTION [conn7] Received vote request: { replSetRequestVotes: 1, setName: "configsvr_failover_repro-configRS", dryRun: true, term: 1, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1564093514, 6), t: 1 } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.567-0400 c20021| 2019-07-25T18:25:24.567-0400 I ELECTION [conn7] Sending vote response: { term: 1, voteGranted: true, reason: "" }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.567-0400 c20023| 2019-07-25T18:25:24.567-0400 I ELECTION [conn9] Received vote request: { replSetRequestVotes: 1, setName: "configsvr_failover_repro-configRS", dryRun: true, term: 1, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1564093514, 6), t: 1 } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.567-0400 c20023| 2019-07-25T18:25:24.567-0400 I ELECTION [conn9] Sending vote response: { term: 1, voteGranted: true, reason: "" }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.568-0400 c20022| 2019-07-25T18:25:24.568-0400 I ELECTION [replexec-4] VoteRequester(term 1 dry run) received a yes vote from Jasons-MacBook-Pro.local:20021; response message: { term: 1, voteGranted: true, reason: "", ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1564093514, 6), $clusterTime: { clusterTime: Timestamp(1564093514, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1564093514, 6) }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.568-0400 c20022| 2019-07-25T18:25:24.568-0400 I ELECTION [replexec-2] dry election run succeeded, running for election in term 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.586-0400 c20022| 2019-07-25T18:25:24.586-0400 I REPL [replexec-2] Scheduling remote command request for vote request: RemoteCommand 208 -- target:Jasons-MacBook-Pro.local:20021 db:admin cmd:{ replSetRequestVotes: 1, setName: "configsvr_failover_repro-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1564093514, 6), t: 1 } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.587-0400 c20022| 2019-07-25T18:25:24.586-0400 I REPL [replexec-2] Scheduling remote command request for vote request: RemoteCommand 209 -- target:Jasons-MacBook-Pro.local:20023 db:admin cmd:{ replSetRequestVotes: 1, setName: "configsvr_failover_repro-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1564093514, 6), t: 1 } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.587-0400 c20022| 2019-07-25T18:25:24.587-0400 I CONNPOOL [Replication] Connecting to Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.587-0400 c20021| 2019-07-25T18:25:24.587-0400 I ELECTION [conn7] Received vote request: { replSetRequestVotes: 1, setName: "configsvr_failover_repro-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1564093514, 6), t: 1 } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.588-0400 c20021| 2019-07-25T18:25:24.587-0400 I ELECTION [conn7] Sending vote response: { term: 2, voteGranted: true, reason: "" }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.589-0400 c20023| 2019-07-25T18:25:24.588-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49534 #29 (10 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.589-0400 c20023| 2019-07-25T18:25:24.589-0400 I NETWORK [conn29] received client metadata from 127.0.0.1:49534 conn29: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.591-0400 c20023| 2019-07-25T18:25:24.591-0400 I ELECTION [conn29] Received vote request: { replSetRequestVotes: 1, setName: "configsvr_failover_repro-configRS", dryRun: false, term: 2, candidateIndex: 1, configVersion: 2, lastCommittedOp: { ts: Timestamp(1564093514, 6), t: 1 } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.591-0400 c20023| 2019-07-25T18:25:24.591-0400 I ELECTION [conn29] Sending vote response: { term: 2, voteGranted: true, reason: "" }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.598-0400 c20022| 2019-07-25T18:25:24.597-0400 I ELECTION [replexec-4] VoteRequester(term 2) received a yes vote from Jasons-MacBook-Pro.local:20021; response message: { term: 2, voteGranted: true, reason: "", ok: 1.0, $gleStats: { lastOpTime: Timestamp(0, 0), electionId: ObjectId('7fffffff0000000000000001') }, lastCommittedOpTime: Timestamp(1564093514, 6), $clusterTime: { clusterTime: Timestamp(1564093514, 6), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, operationTime: Timestamp(1564093514, 6) }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.598-0400 c20022| 2019-07-25T18:25:24.597-0400 I ELECTION [replexec-4] election succeeded, assuming primary role in term 2
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.598-0400 c20022| 2019-07-25T18:25:24.598-0400 I REPL [replexec-4] transition to PRIMARY from SECONDARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.598-0400 c20022| 2019-07-25T18:25:24.598-0400 I REPL [replexec-4] Resetting sync source to empty, which was :27017
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.598-0400 c20022| 2019-07-25T18:25:24.598-0400 I REPL [replexec-4] Entering primary catch-up mode.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.599-0400 c20023| 2019-07-25T18:25:24.599-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49535 #30 (11 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.600-0400 c20023| 2019-07-25T18:25:24.600-0400 I NETWORK [conn30] received client metadata from 127.0.0.1:49535 conn30: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.602-0400 c20022| 2019-07-25T18:25:24.601-0400 I REPL [replexec-3] Caught up to the latest optime known via heartbeats after becoming primary. Target optime: { ts: Timestamp(1564093514, 6), t: 1 }. My Last Applied: { ts: Timestamp(1564093514, 6), t: 1 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.602-0400 c20022| 2019-07-25T18:25:24.601-0400 I REPL [replexec-3] Exited primary catch-up mode.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.602-0400 c20022| 2019-07-25T18:25:24.601-0400 I REPL [replexec-3] Stopping replication producer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.602-0400 s20024| 2019-07-25T18:25:24.602-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:25.102-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.605-0400 s20024| 2019-07-25T18:25:24.605-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.605-0400 s20024| 2019-07-25T18:25:24.605-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 2ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.710-0400 c20021| 2019-07-25T18:25:24.710-0400 I REPL [replexec-0] Member Jasons-MacBook-Pro.local:20022 is now in state PRIMARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:24.865-0400 c20023| 2019-07-25T18:25:24.865-0400 I REPL [replexec-1] Member Jasons-MacBook-Pro.local:20022 is now in state PRIMARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:25.105-0400 s20024| 2019-07-25T18:25:25.105-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:25.605-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:25.108-0400 s20024| 2019-07-25T18:25:25.108-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:25.109-0400 s20024| 2019-07-25T18:25:25.108-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:25.323-0400 c20022| 2019-07-25T18:25:25.322-0400 I REPL [RstlKillOpThread] Starting to kill user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:25.323-0400 c20022| 2019-07-25T18:25:25.323-0400 I REPL [RstlKillOpThread] Stopped killing user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:25.608-0400 s20024| 2019-07-25T18:25:25.608-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:26.108-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:25.611-0400 s20024| 2019-07-25T18:25:25.611-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:25.611-0400 s20024| 2019-07-25T18:25:25.611-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:25.616-0400 c20022| 2019-07-25T18:25:25.616-0400 I NETWORK [shard-registry-reload] Starting new replica set monitor for configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.111-0400 s20024| 2019-07-25T18:25:26.111-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:26.611-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.114-0400 s20024| 2019-07-25T18:25:26.114-0400 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.114-0400 s20024| 2019-07-25T18:25:26.114-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 3ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.327-0400 c20022| 2019-07-25T18:25:26.327-0400 I REPL [RstlKillOpThread] Starting to kill user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.327-0400 c20022| 2019-07-25T18:25:26.327-0400 I REPL [RstlKillOpThread] Stopped killing user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.342-0400 c20022| 2019-07-25T18:25:26.342-0400 I SHARDING [rsSync-0] Marking collection config.migrations as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.343-0400 c20022| 2019-07-25T18:25:26.343-0400 I SHARDING [Balancer] CSRS balancer is starting
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.343-0400 c20022| 2019-07-25T18:25:26.343-0400 D3 TXN [TransactionCoordinator] Waiting for OpTime { ts: Timestamp(1564093526, 3), t: 2 } to become majority committed
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.344-0400 c20022| 2019-07-25T18:25:26.344-0400 I REPL [rsSync-0] transition to primary complete; database writes are now permitted
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.612-0400 s20024| 2019-07-25T18:25:26.612-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:27.112-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.616-0400 s20024| 2019-07-25T18:25:26.615-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for configsvr_failover_repro-configRS is configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.616-0400 s20024| 2019-07-25T18:25:26.616-0400 I SHARDING [UpdateReplicaSetOnConfigServer] Updating sharding state with confirmed set configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.616-0400 s20024| 2019-07-25T18:25:26.616-0400 D1 TRACKING [conn8] Cmd: dropDatabase, TrackingId: 5d3a2c4a3e6b567cf0908ddb
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.616-0400 s20024| 2019-07-25T18:25:26.616-0400 D1 TRACKING [Uptime-reporter] Cmd: NotSet, TrackingId: 5d3a2c563e6b567cf0908ddc
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.616-0400 s20024| 2019-07-25T18:25:26.616-0400 D1 NETWORK [UpdateReplicaSetOnConfigServer] Started targeter for configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.616-0400 s20024| 2019-07-25T18:25:26.616-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:56.616-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.616-0400 s20024| 2019-07-25T18:25:26.616-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Next replica set scan scheduled for 2019-07-25T18:25:56.616-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.616-0400 s20024| 2019-07-25T18:25:26.616-0400 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set configsvr_failover_repro-configRS took 4ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.617-0400 c20022| 2019-07-25T18:25:26.617-0400 I SHARDING [conn25] Marking collection config.mongos as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.618-0400 c20022| 2019-07-25T18:25:26.617-0400 I STORAGE [conn25] createCollection: config.mongos with generated UUID: 81e54234-9908-454a-817f-30651e5cf0b6 and options: {}
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.697-0400 c20022| 2019-07-25T18:25:26.697-0400 I INDEX [conn25] index build: done building index _id_ on ns config.mongos
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.909-0400 c20021| 2019-07-25T18:25:26.908-0400 I REPL [rsBackgroundSync] sync source candidate: Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.909-0400 c20021| 2019-07-25T18:25:26.909-0400 I CONNPOOL [RS] Connecting to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.911-0400 c20022| 2019-07-25T18:25:26.910-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49536 #31 (10 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.911-0400 c20022| 2019-07-25T18:25:26.911-0400 I NETWORK [conn31] received client metadata from 127.0.0.1:49536 conn31: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.915-0400 c20021| 2019-07-25T18:25:26.915-0400 I REPL [rsBackgroundSync] Changed sync source from empty to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.916-0400 c20021| 2019-07-25T18:25:26.916-0400 I REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on Jasons-MacBook-Pro.local:20022 starting at filter: { ts: { $gte: Timestamp(1564093514, 6) } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.920-0400 c20021| 2019-07-25T18:25:26.920-0400 I SHARDING [rsSync-0] Marking collection local.replset.oplogTruncateAfterPoint as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.925-0400 c20022| 2019-07-25T18:25:26.925-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49537 #32 (11 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.926-0400 c20022| 2019-07-25T18:25:26.926-0400 I NETWORK [conn32] received client metadata from 127.0.0.1:49537 conn32: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.927-0400 c20021| 2019-07-25T18:25:26.927-0400 I STORAGE [repl-writer-worker-5] createCollection: config.mongos with provided UUID: 81e54234-9908-454a-817f-30651e5cf0b6 and options: { uuid: UUID("81e54234-9908-454a-817f-30651e5cf0b6") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.934-0400 c20022| 2019-07-25T18:25:26.934-0400 I SHARDING [TransactionCoordinator] Marking collection config.transaction_coordinators as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.935-0400 c20022| 2019-07-25T18:25:26.935-0400 I TXN [TransactionCoordinator] Need to resume coordinating commit for 0 transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.935-0400 c20022| 2019-07-25T18:25:26.935-0400 I TXN [TransactionCoordinator] Incoming coordinateCommit requests are now enabled
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.938-0400 c20022| 2019-07-25T18:25:26.938-0400 I SHARDING [Balancer] ShouldAutoSplit changing from 1 to 0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.938-0400 c20022| 2019-07-25T18:25:26.938-0400 I SHARDING [Balancer] CSRS balancer thread is recovering
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.938-0400 c20022| 2019-07-25T18:25:26.938-0400 I SHARDING [Balancer] CSRS balancer thread is recovered
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.939-0400 c20022| 2019-07-25T18:25:26.939-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.940-0400 d20020| 2019-07-25T18:25:26.940-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49538 #17 (9 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.941-0400 d20020| 2019-07-25T18:25:26.940-0400 I NETWORK [conn17] received client metadata from 127.0.0.1:49538 conn17: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.942-0400 c20022| 2019-07-25T18:25:26.942-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for configsvr_failover_repro-rs0 is configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.946-0400 d20020| 2019-07-25T18:25:26.945-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49539 #18 (10 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.946-0400 d20020| 2019-07-25T18:25:26.946-0400 I NETWORK [conn18] received client metadata from 127.0.0.1:49539 conn18: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.947-0400 d20020| 2019-07-25T18:25:26.947-0400 I SHARDING [conn18] Received request from 127.0.0.1:49539 indicating config server optime term has increased, previous optime { ts: Timestamp(1564093514, 6), t: 1 }, now { ts: Timestamp(1564093526, 3), t: 2 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.980-0400 c20021| 2019-07-25T18:25:26.980-0400 I INDEX [repl-writer-worker-5] index build: done building index _id_ on ns config.mongos
[js_test:configsvr_failover_repro] 2019-07-25T18:25:26.988-0400 c20021| 2019-07-25T18:25:26.987-0400 I SHARDING [repl-writer-worker-7] Marking collection config.mongos as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.001-0400 c20023| 2019-07-25T18:25:27.001-0400 I NETWORK [shard-registry-reload] Starting new replica set monitor for configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.009-0400 c20022| 2019-07-25T18:25:27.007-0400 I COMMAND [conn25] command config.$cmd command: update { update: "mongos", bypassDocumentValidation: false, ordered: true, updates: [ { q: { _id: "Jasons-MacBook-Pro.local:20024" }, u: { $set: { _id: "Jasons-MacBook-Pro.local:20024", ping: new Date(1564093510063), up: 0, waiting: true, mongoVersion: "4.3.0-703-g917d338", advisoryHostFQDNs: [] } }, multi: false, upsert: true } ], writeConcern: { w: "majority", wtimeout: 60000 }, allowImplicitCollectionCreation: true, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d3a2c563e6b567cf0908dde'), operName: "", parentOperId: "5d3a2c563e6b567cf0908ddc" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1564093514, 6), signature: { hash: BinData(0, 87D82898CCFA8515001F6FFAA14ECE7CCA627262), keyId: 6717730434681143305 } }, $configServerState: { opTime: { ts: Timestamp(1564093514, 6), t: 1 } }, $db: "config" } numYields:0 reslen:661 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 3 } }, ReplicationStateTransition: { acquireCount: { w: 3 } }, Global: { acquireCount: { w: 3 } }, Database: { acquireCount: { w: 3 } }, Collection: { acquireCount: { r: 2, w: 2, W: 1 } }, Mutex: { acquireCount: { r: 5 } } } flowControl:{ acquireCount: 3 } storage:{} protocol:op_msg 390ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.010-0400 c20022| 2019-07-25T18:25:27.008-0400 I COMMAND [conn24] command config.locks appName: "MongoDB Shell" command: findAndModify { findAndModify: "locks", query: { _id: "test", state: 0 }, update: { $set: { ts: ObjectId('5d3a2c5643f454cabbd96781'), state: 2, who: "ConfigServer:conn24", process: "ConfigServer", when: new Date(1564093526617), why: "dropDatabase" } }, upsert: true, new: true, writeConcern: { w: "majority", wtimeout: 15000 }, $db: "config" } planSummary: IXSCAN { _id: 1 } keysExamined:1 docsExamined:1 nMatched:1 nModified:1 keysInserted:2 keysDeleted:2 numYields:0 queryHash:A6518C99 planCacheKey:27784964 reslen:463 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 390ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.010-0400 c20022| 2019-07-25T18:25:27.008-0400 I SHARDING [conn24] distributed lock 'test' acquired for 'dropDatabase', ts : 5d3a2c5643f454cabbd96781
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.011-0400 s20024| 2019-07-25T18:25:27.010-0400 I SHARDING [ShardRegistry] Received reply from config server node (unknown) indicating config server optime term has increased, previous optime { ts: Timestamp(1564093514, 6), t: 1 }, now { ts: Timestamp(1564093526, 6), t: 2 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.018-0400 c20022| 2019-07-25T18:25:27.017-0400 I SHARDING [conn24] about to log metadata event into changelog: { _id: "Jasons-MacBook-Pro.local:20022-2019-07-25T18:25:27.017-0400-5d3a2c5743f454cabbd967a0", server: "Jasons-MacBook-Pro.local:20022", shard: "config", clientAddr: "127.0.0.1:49513", time: new Date(1564093527017), what: "dropDatabase.start", ns: "test", details: {} }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.082-0400 c20022| 2019-07-25T18:25:27.082-0400 I SHARDING [conn24] distributed lock 'test.foo' acquired for 'dropCollection', ts : 5d3a2c5743f454cabbd967a8
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.082-0400 c20022| 2019-07-25T18:25:27.082-0400 I SHARDING [conn24] about to log metadata event into changelog: { _id: "Jasons-MacBook-Pro.local:20022-2019-07-25T18:25:27.082-0400-5d3a2c5743f454cabbd967ad", server: "Jasons-MacBook-Pro.local:20022", shard: "config", clientAddr: "127.0.0.1:49513", time: new Date(1564093527082), what: "dropCollection.start", ns: "test.foo", details: {} }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.110-0400 d20020| 2019-07-25T18:25:27.110-0400 I COMMAND [conn18] CMD: drop test.foo
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.110-0400 d20020| 2019-07-25T18:25:27.110-0400 I STORAGE [conn18] dropCollection: test.foo (3df0dab2-7ed2-4ca2-81ad-7d5ddc465fb1) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.111-0400 d20020| 2019-07-25T18:25:27.111-0400 I STORAGE [conn18] Finishing collection drop for test.foo (3df0dab2-7ed2-4ca2-81ad-7d5ddc465fb1).
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.111-0400 d20020| 2019-07-25T18:25:27.111-0400 I STORAGE [conn18] Deferring table drop for index '_id_' on collection 'test.foo (3df0dab2-7ed2-4ca2-81ad-7d5ddc465fb1)'. Ident: 'index-32-4192590575378879396', commit timestamp: 'Timestamp(1564093527, 4)'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.111-0400 d20020| 2019-07-25T18:25:27.111-0400 I STORAGE [conn18] Deferring table drop for collection 'test.foo' (3df0dab2-7ed2-4ca2-81ad-7d5ddc465fb1). Ident: collection-31-4192590575378879396, commit timestamp: Timestamp(1564093527, 4)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.189-0400 d20020| 2019-07-25T18:25:27.189-0400 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for collection test.foo took 5 ms and found the collection is not sharded
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.189-0400 d20020| 2019-07-25T18:25:27.189-0400 I SHARDING [conn18] Updating metadata for collection test.foo from collection version: 1|0||5d3a2c4a2d71daf4c4e5f05e, shard version: 1|0||5d3a2c4a2d71daf4c4e5f05e to collection version: due to epoch change
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.191-0400 d20020| 2019-07-25T18:25:27.190-0400 I COMMAND [ShardServerCatalogCacheLoader-1] CMD: drop config.cache.chunks.test.foo
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.191-0400 c20022| 2019-07-25T18:25:27.191-0400 I SHARDING [conn24] about to log metadata event into changelog: { _id: "Jasons-MacBook-Pro.local:20022-2019-07-25T18:25:27.191-0400-5d3a2c5743f454cabbd967c1", server: "Jasons-MacBook-Pro.local:20022", shard: "config", clientAddr: "127.0.0.1:49513", time: new Date(1564093527191), what: "dropCollection", ns: "test.foo", details: {} }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.192-0400 d20020| 2019-07-25T18:25:27.192-0400 I STORAGE [ShardServerCatalogCacheLoader-1] dropCollection: config.cache.chunks.test.foo (1f577be9-e257-4d51-9a39-fb861aada357) - storage engine will take ownership of drop-pending collection with optime { ts: Timestamp(0, 0), t: -1 } and commit timestamp Timestamp(0, 0)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.192-0400 d20020| 2019-07-25T18:25:27.192-0400 I STORAGE [ShardServerCatalogCacheLoader-1] Finishing collection drop for config.cache.chunks.test.foo (1f577be9-e257-4d51-9a39-fb861aada357).
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.193-0400 d20020| 2019-07-25T18:25:27.192-0400 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index '_id_' on collection 'config.cache.chunks.test.foo (1f577be9-e257-4d51-9a39-fb861aada357)'. Ident: 'index-34-4192590575378879396', commit timestamp: 'Timestamp(1564093527, 8)'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.193-0400 d20020| 2019-07-25T18:25:27.193-0400 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for index 'lastmod_1' on collection 'config.cache.chunks.test.foo (1f577be9-e257-4d51-9a39-fb861aada357)'. Ident: 'index-35-4192590575378879396', commit timestamp: 'Timestamp(1564093527, 8)'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.193-0400 d20020| 2019-07-25T18:25:27.193-0400 I STORAGE [ShardServerCatalogCacheLoader-1] Deferring table drop for collection 'config.cache.chunks.test.foo' (1f577be9-e257-4d51-9a39-fb861aada357). Ident: collection-33-4192590575378879396, commit timestamp: Timestamp(1564093527, 8)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.259-0400 c20022| 2019-07-25T18:25:27.259-0400 I SHARDING [conn24] distributed lock with ts: 5d3a2c5743f454cabbd967a8' unlocked.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.260-0400 d20020| 2019-07-25T18:25:27.260-0400 I COMMAND [conn18] dropDatabase test - starting
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.260-0400 d20020| 2019-07-25T18:25:27.260-0400 I COMMAND [conn18] dropDatabase test - dropped 0 collection(s)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.260-0400 d20020| 2019-07-25T18:25:27.260-0400 I COMMAND [conn18] dropDatabase test - finished
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.305-0400 d20020| 2019-07-25T18:25:27.305-0400 I NETWORK [ConfigServerCatalogCacheLoader-0] Marking host Jasons-MacBook-Pro.local:20021 as failed :: caused by :: NotMasterNoSlaveOk: not master and slaveOk=false
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.305-0400 d20020| 2019-07-25T18:25:27.305-0400 I SH_REFR [ConfigServerCatalogCacheLoader-0] Refresh for database test took 2 ms and failed :: caused by :: NotMasterNoSlaveOk: Could not confirm non-existence of database test :: caused by :: not master and slaveOk=false
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.306-0400 c20022| 2019-07-25T18:25:27.306-0400 I NETWORK [TransactionCoordinator] Marking host Jasons-MacBook-Pro.local:20020 as failed :: caused by :: NotMasterNoSlaveOk: Could not confirm non-existence of database test :: caused by :: not master and slaveOk=false
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.306-0400 c20022| 2019-07-25T18:25:27.306-0400 I SHARDING [conn24] about to log metadata event into changelog: { _id: "Jasons-MacBook-Pro.local:20022-2019-07-25T18:25:27.306-0400-5d3a2c5743f454cabbd967d3", server: "Jasons-MacBook-Pro.local:20022", shard: "config", clientAddr: "127.0.0.1:49513", time: new Date(1564093527306), what: "dropDatabase", ns: "test", details: {} }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.333-0400 c20023| 2019-07-25T18:25:27.333-0400 I REPL [rsBackgroundSync] sync source candidate: Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.334-0400 c20023| 2019-07-25T18:25:27.333-0400 I CONNPOOL [RS] Connecting to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.335-0400 c20022| 2019-07-25T18:25:27.335-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49540 #35 (12 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.335-0400 c20022| 2019-07-25T18:25:27.335-0400 I NETWORK [conn35] received client metadata from 127.0.0.1:49540 conn35: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.345-0400 c20023| 2019-07-25T18:25:27.345-0400 I REPL [rsBackgroundSync] Changed sync source from empty to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.346-0400 c20023| 2019-07-25T18:25:27.346-0400 I REPL [rsBackgroundSync] scheduling fetcher to read remote oplog on Jasons-MacBook-Pro.local:20022 starting at filter: { ts: { $gte: Timestamp(1564093514, 6) } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.347-0400 c20022| 2019-07-25T18:25:27.347-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49541 #36 (13 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.348-0400 c20022| 2019-07-25T18:25:27.348-0400 I NETWORK [conn36] received client metadata from 127.0.0.1:49541 conn36: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.364-0400 c20022| 2019-07-25T18:25:27.364-0400 I SHARDING [conn24] distributed lock with ts: 5d3a2c5643f454cabbd96781' unlocked.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.365-0400 c20022| 2019-07-25T18:25:27.364-0400 I COMMAND [conn24] command admin.$cmd appName: "MongoDB Shell" command: _configsvrDropDatabase { _configsvrDropDatabase: "test", writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, tracking_info: { operId: ObjectId('5d3a2c563e6b567cf0908ddd'), operName: "", parentOperId: "5d3a2c4a3e6b567cf0908ddb" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1564093514, 6), signature: { hash: BinData(0, 87D82898CCFA8515001F6FFAA14ECE7CCA627262), keyId: 6717730434681143305 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" }, mongos: { host: "Jasons-MacBook-Pro.local:20024", client: "127.0.0.1:49516", version: "4.3.0-703-g917d338" } }, $configServerState: { opTime: { ts: Timestamp(1564093514, 6), t: 1 } }, $db: "admin" } numYields:0 reslen:523 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 14 } }, ReplicationStateTransition: { acquireCount: { w: 17 } }, Global: { acquireCount: { r: 4, w: 13 } }, Database: { acquireCount: { r: 3, w: 13 } }, Collection: { acquireCount: { r: 3, w: 12, W: 1 } }, Metadata: { acquireCount: { W: 4 } }, Mutex: { acquireCount: { r: 30 } } } flowControl:{ acquireCount: 13 } storage:{} protocol:op_msg 747ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.366-0400 s20024| 2019-07-25T18:25:27.365-0400 I COMMAND [conn8] command test.$cmd appName: "MongoDB Shell" command: dropDatabase { dropDatabase: 1.0, lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47") }, $clusterTime: { clusterTime: Timestamp(1564093513, 13), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "test" } numYields:0 reslen:181 protocol:op_msg 12817ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.367-0400 c20023| 2019-07-25T18:25:27.367-0400 I STORAGE [repl-writer-worker-2] createCollection: config.mongos with provided UUID: 81e54234-9908-454a-817f-30651e5cf0b6 and options: { uuid: UUID("81e54234-9908-454a-817f-30651e5cf0b6") }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.377-0400 d20020| 2019-07-25T18:25:27.377-0400 W COMMAND [conn15] failpoint: hangBeforeShardingCollection set to: { mode: 0, data: {} }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.398-0400 s20024| 2019-07-25T18:25:27.398-0400 I CONNPOOL [TaskExecutorPool-0] Connecting to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.400-0400 c20022| 2019-07-25T18:25:27.399-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49542 #37 (14 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.400-0400 c20022| 2019-07-25T18:25:27.400-0400 I NETWORK [conn37] received client metadata from 127.0.0.1:49542 conn37: { driver: { name: "NetworkInterfaceTL", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.402-0400 s20024| 2019-07-25T18:25:27.402-0400 I COMMAND [conn8] command config.databases appName: "MongoDB Shell" command: find { find: "databases", filter: { _id: "test" }, lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47") }, $clusterTime: { clusterTime: Timestamp(1564093527, 13), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:0 reslen:230 protocol:op_msg 4ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.410-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.410-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.410-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.410-0400 [jsTest] Config DB Entry: [ ]
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.410-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.411-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.411-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.422-0400 s20024| 2019-07-25T18:25:27.421-0400 I COMMAND [conn8] command config.collections appName: "MongoDB Shell" command: find { find: "collections", filter: { _id: "test.foo" }, lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47") }, $clusterTime: { clusterTime: Timestamp(1564093527, 13), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:311 protocol:op_msg 2ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.427-0400 c20023| 2019-07-25T18:25:27.426-0400 I INDEX [repl-writer-worker-2] index build: done building index _id_ on ns config.mongos
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.433-0400 c20023| 2019-07-25T18:25:27.433-0400 I SHARDING [repl-writer-worker-0] Marking collection config.mongos as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.439-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.439-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.439-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.439-0400 [jsTest] Config Collection Entry: [
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.439-0400 [jsTest] {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.439-0400 [jsTest] "_id" : "test.foo",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.439-0400 [jsTest] "lastmodEpoch" : ObjectId("000000000000000000000000"),
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.439-0400 [jsTest] "lastmod" : ISODate("2019-07-25T22:25:27.152Z"),
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.440-0400 [jsTest] "dropped" : true
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.440-0400 [jsTest] }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.440-0400 [jsTest] ]
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.440-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.440-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.440-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.441-0400 c20023| 2019-07-25T18:25:27.441-0400 I COMMAND [conn24] command config.settings command: find { find: "settings", filter: { _id: "chunksize" }, readConcern: { level: "majority", afterOpTime: { ts: Timestamp(1564093526, 6), t: 2 } }, limit: 1, maxTimeMS: 30000, tracking_info: { operId: ObjectId('5d3a2c573e6b567cf0908de0'), operName: "", parentOperId: "5d3a2c563e6b567cf0908ddc" }, $readPreference: { mode: "nearest" }, $replData: 1, $clusterTime: { clusterTime: Timestamp(1564093527, 1), signature: { hash: BinData(0, 59357FFE84E36A40CAAAF9AFA701E37601B5E0AF), keyId: 6717730434681143305 } }, $configServerState: { opTime: { ts: Timestamp(1564093526, 6), t: 2 } }, $db: "config" } planSummary: IDHACK keysExamined:0 docsExamined:0 cursorExhausted:1 numYields:0 nreturned:0 reslen:551 locks:{ ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } }, Mutex: { acquireCount: { r: 1 } } } storage:{} protocol:op_msg 420ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.444-0400 s20024| 2019-07-25T18:25:27.444-0400 I SHARDING [Uptime-reporter] ShouldAutoSplit changing from 1 to 0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.445-0400 Checking consistency of the sharding catalog with shards' storage catalogs and catalog caches
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.461-0400 d20020| 2019-07-25T18:25:27.461-0400 I SHARDING [conn13] Created 1 chunk(s) for: test.foo, producing collection version 1|0||5d3a2c4a2d71daf4c4e5f05e
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.461-0400 d20020| 2019-07-25T18:25:27.461-0400 I SHARDING [conn13] about to log metadata event into changelog: { _id: "Jasons-MacBook-Pro.local:20020-2019-07-25T18:25:27.461-0400-5d3a2c572d71daf4c4e5f078", server: "Jasons-MacBook-Pro.local:20020", shard: "configsvr_failover_repro-rs0", clientAddr: "127.0.0.1:49529", time: new Date(1564093527461), what: "shardCollection.end", ns: "test.foo", details: { version: "1|0||5d3a2c4a2d71daf4c4e5f05e" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.464-0400 d20020| 2019-07-25T18:25:27.464-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for configsvr_failover_repro-configRS is configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.464-0400 d20020| 2019-07-25T18:25:27.464-0400 I SHARDING [updateShardIdentityConfigString] Updating config server with confirmed set configsvr_failover_repro-configRS/Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20022,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.467-0400 s20024| 2019-07-25T18:25:27.467-0400 I COMMAND [conn8] command config.chunks appName: "MongoDB Shell" command: aggregate { aggregate: "chunks", pipeline: [ { $lookup: { from: "shards", localField: "shard", foreignField: "_id", as: "shardHost" } }, { $unwind: "$shardHost" }, { $group: { _id: "$ns", shardConnStrings: { $addToSet: "$shardHost.host" } } }, { $lookup: { from: "collections", localField: "_id", foreignField: "_id", as: "collInfo" } }, { $unwind: "$collInfo" } ], cursor: {}, lsid: { id: UUID("806e05d3-7000-445f-820e-060646a14c47") }, $clusterTime: { clusterTime: Timestamp(1564093527, 13), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $db: "config" } nShards:1 cursorExhausted:1 numYields:0 nreturned:1 reslen:512 protocol:op_msg 7ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.481-0400 Aggregated authoritative metadata on config server for all sharded collections: [
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.481-0400 {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.481-0400 "_id" : "config.system.sessions",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.481-0400 "shardConnStrings" : [
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.481-0400 "configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020"
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.481-0400 ],
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.481-0400 "collInfo" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 "_id" : "config.system.sessions",
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 "lastmodEpoch" : ObjectId("5d3a2c482d71daf4c4e5f043"),
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 "lastmod" : ISODate("1970-02-19T17:02:47.296Z"),
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 "dropped" : false,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 "key" : {
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 "_id" : 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 },
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 "unique" : false,
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 "uuid" : UUID("169b4ca7-9147-452d-b8b8-2496698e9e94")
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.482-0400 ]
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.500-0400 d20020| 2019-07-25T18:25:27.500-0400 I COMMAND [conn13] command admin.$cmd appName: "MongoDB Shell" command: _shardsvrShardCollection { _shardsvrShardCollection: "test.foo", key: { _id: 1.0 }, unique: false, numInitialChunks: 0, collation: {}, getUUIDfromPrimaryShard: true, writeConcern: { w: "majority", wtimeout: 60000 }, lsid: { id: UUID("3c6e8552-60a8-42a4-bc64-789d45b7044d"), uid: BinData(0, E3B0C44298FC1C149AFBF4C8996FB92427AE41E4649B934CA495991B7852B855) }, $clusterTime: { clusterTime: Timestamp(1564093514, 2), signature: { hash: BinData(0, 87D82898CCFA8515001F6FFAA14ECE7CCA627262), keyId: 6717730434681143305 } }, $client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" }, mongos: { host: "Jasons-MacBook-Pro.local:20024", client: "127.0.0.1:49532", version: "4.3.0-703-g917d338" } }, $configServerState: { opTime: { ts: Timestamp(1564093514, 2), t: 1 } }, $db: "admin" } numYields:0 reslen:402 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 9 } }, ReplicationStateTransition: { acquireCount: { w: 15 } }, Global: { acquireCount: { r: 8, w: 7 } }, Database: { acquireCount: { r: 8, w: 7, W: 1 } }, Collection: { acquireCount: { r: 8, w: 3, W: 4 } }, Mutex: { acquireCount: { r: 16, W: 4 } } } flowControl:{ acquireCount: 5 } storage:{} protocol:op_msg 13375ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.501-0400 d20020| 2019-07-25T18:25:27.501-0400 I NETWORK [conn13] end connection 127.0.0.1:49529 (9 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.528-0400 d20020| 2019-07-25T18:25:27.528-0400 I COMMAND [conn1] successfully set parameter writePeriodicNoops to true (was false)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.552-0400 Waiting for op with OpTime { "ts" : Timestamp(1564093527, 10), "t" : NumberLong(1) } to be committed on all secondaries
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.580-0400 Checking that the UUID for config.system.sessions returned by listCollections on connection to configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020 is consistent with the UUID in config.collections on the config server
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.605-0400 Checking that the UUID for config.system.sessions in config.cache.collections on connection to configsvr_failover_repro-rs0/Jasons-MacBook-Pro.local:20020 is consistent with the UUID in config.collections on the config server
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.640-0400 s20024| 2019-07-25T18:25:27.640-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49543 #11 (2 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.640-0400 s20024| 2019-07-25T18:25:27.640-0400 I NETWORK [conn11] received client metadata from 127.0.0.1:49543 conn11: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.640-0400 s20024| 2019-07-25T18:25:27.640-0400 I COMMAND [conn11] command admin.$cmd appName: "MongoDB Shell" command: isMaster { isMaster: 1, hostInfo: "Jasons-MacBook-Pro.local:27017", client: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }, $db: "admin" } numYields:0 reslen:389 protocol:op_query 0ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.648-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.648-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.648-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.648-0400 [jsTest] New session started with sessionID: { "id" : UUID("5812bf1c-c7d6-4740-bac2-1299dde24fd5") } and options: { "causalConsistency" : false }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.648-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.648-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.648-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.656-0400 s20024| 2019-07-25T18:25:27.656-0400 I COMMAND [conn11] command admin.$cmd appName: "MongoDB Shell" command: isMaster { isMaster: 1.0, lsid: { id: UUID("5812bf1c-c7d6-4740-bac2-1299dde24fd5") }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:374 protocol:op_msg 0ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.657-0400 Skipping collection validation: giving up after running the isMaster command: not running validate against mongos
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.658-0400 s20024| 2019-07-25T18:25:27.658-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated: 15), will terminate after current cmd ends
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.658-0400 s20024| 2019-07-25T18:25:27.658-0400 I NETWORK [signalProcessingThread] shutdown: going to close all sockets...
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.658-0400 s20024| 2019-07-25T18:25:27.658-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20024.sock
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.658-0400 s20024| 2019-07-25T18:25:27.658-0400 D1 NETWORK [signalProcessingThread] Shutting down task executor used for monitoring replica sets
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.658-0400 s20024| 2019-07-25T18:25:27.658-0400 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.658-0400 s20024| 2019-07-25T18:25:27.658-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to Jasons-MacBook-Pro.local:20023 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.658-0400 s20024| 2019-07-25T18:25:27.658-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to Jasons-MacBook-Pro.local:20021 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.659-0400 s20024| 2019-07-25T18:25:27.658-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to Jasons-MacBook-Pro.local:20022 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.659-0400 c20021| 2019-07-25T18:25:27.659-0400 I NETWORK [conn35] end connection 127.0.0.1:49511 (12 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.659-0400 c20023| 2019-07-25T18:25:27.659-0400 I NETWORK [conn23] end connection 127.0.0.1:49510 (10 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.659-0400 c20022| 2019-07-25T18:25:27.659-0400 I NETWORK [conn23] end connection 127.0.0.1:49509 (13 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.659-0400 s20024| 2019-07-25T18:25:27.659-0400 D1 EXECUTOR [UpdateReplicaSetOnConfigServer] shutting down thread in pool Sharding-Fixed
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.659-0400 s20024| 2019-07-25T18:25:27.659-0400 I ASIO [ShardRegistry] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.660-0400 c20022| 2019-07-25T18:25:27.659-0400 I NETWORK [conn25] end connection 127.0.0.1:49514 (12 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.660-0400 c20021| 2019-07-25T18:25:27.659-0400 I NETWORK [conn36] end connection 127.0.0.1:49515 (11 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.660-0400 c20023| 2019-07-25T18:25:27.660-0400 I NETWORK [conn24] end connection 127.0.0.1:49512 (9 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.660-0400 c20022| 2019-07-25T18:25:27.660-0400 I NETWORK [conn24] end connection 127.0.0.1:49513 (11 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.660-0400 s20024| 2019-07-25T18:25:27.660-0400 I ASIO [TaskExecutorPool-0] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.660-0400 c20022| 2019-07-25T18:25:27.660-0400 I NETWORK [conn37] end connection 127.0.0.1:49542 (10 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.660-0400 s20024| 2019-07-25T18:25:27.660-0400 D1 SHARDING [signalProcessingThread] ShardingCatalogClientImpl::shutDown() called.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.660-0400 s20024| 2019-07-25T18:25:27.660-0400 W SHARDING [signalProcessingThread] error encountered while cleaning up distributed ping entry for Jasons-MacBook-Pro.local:20024:1564093507:3564609982540738235 :: caused by :: ShutdownInProgress: Server is shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.660-0400 s20024| 2019-07-25T18:25:27.660-0400 D1 SHARDING [signalProcessingThread] Shutting down task executor for reloading shard registry
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.661-0400 s20024| 2019-07-25T18:25:27.660-0400 D1 SHARDING [shard-registry-reload] Reloading shardRegistry
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.661-0400 s20024| 2019-07-25T18:25:27.660-0400 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.661-0400 s20024| 2019-07-25T18:25:27.661-0400 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.661-0400 s20024| 2019-07-25T18:25:27.661-0400 D1 EXECUTOR [ConfigServerCatalogCacheLoader-0] shutting down thread in pool ConfigServerCatalogCacheLoader
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.661-0400 s20024| 2019-07-25T18:25:27.661-0400 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.661-0400 s20024| 2019-07-25T18:25:27.661-0400 I CONTROL [signalProcessingThread] shutting down with code:0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.664-0400 2019-07-25T18:25:27.664-0400 I - [js] shell: stopped mongo program on port 20024
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.695-0400 d20020| 2019-07-25T18:25:27.695-0400 I COMMAND [conn1] successfully set parameter waitForStepDownOnNonCommandShutdown to false (was true)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.698-0400 ReplSetTest stop *** Shutting down mongod in port 20020 ***
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.700-0400 d20020| 2019-07-25T18:25:27.700-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49544 #19 (10 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.700-0400 d20020| 2019-07-25T18:25:27.700-0400 I NETWORK [conn19] received client metadata from 127.0.0.1:49544 conn19: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.706-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.706-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.706-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.707-0400 [jsTest] New session started with sessionID: { "id" : UUID("cd3465f7-031b-41c3-9200-3661ee566747") } and options: { "causalConsistency" : false }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.707-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.708-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.708-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.725-0400 d20020| 2019-07-25T18:25:27.725-0400 I COMMAND [conn19] Attempting to step down in response to replSetStepDown command
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.725-0400 d20020| 2019-07-25T18:25:27.725-0400 I REPL [RstlKillOpThread] Starting to kill user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.725-0400 d20020| 2019-07-25T18:25:27.725-0400 I REPL [RstlKillOpThread] Stopped killing user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.725-0400 d20020| 2019-07-25T18:25:27.725-0400 I REPL [conn19] Stepping down from primary, stats: { userOpsKilled: 0, userOpsRunning: 0 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.725-0400 d20020| 2019-07-25T18:25:27.725-0400 I REPL [conn19] transition to SECONDARY from PRIMARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.726-0400 d20020| 2019-07-25T18:25:27.725-0400 I SHARDING [conn19] The ChunkSplitter has stopped and will no longer run new autosplit tasks. Any autosplit tasks that have already started will be allowed to finish.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.726-0400 d20020| 2019-07-25T18:25:27.726-0400 I COMMAND [conn19] replSetStepDown command completed
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.738-0400 d20020| 2019-07-25T18:25:27.738-0400 I REPL [conn19] 'freezing' for 86400 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.793-0400 d20020| 2019-07-25T18:25:27.793-0400 I COMMAND [conn19] CMD: validate admin.foo
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.794-0400 d20020| 2019-07-25T18:25:27.794-0400 I INDEX [conn19] validating collection admin.foo (UUID: ccb6f54c-06bc-4983-8255-a57c7806c0d8)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.794-0400 d20020| 2019-07-25T18:25:27.794-0400 W STORAGE [conn19] Could not complete validation of table:collection-17-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.794-0400 d20020| 2019-07-25T18:25:27.794-0400 I INDEX [conn19] validating index _id_ on collection admin.foo
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.794-0400 d20020| 2019-07-25T18:25:27.794-0400 W STORAGE [conn19] Could not complete validation of table:index-18-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.799-0400 d20020| 2019-07-25T18:25:27.799-0400 I INDEX [conn19] Validation complete for collection admin.foo (UUID: ccb6f54c-06bc-4983-8255-a57c7806c0d8). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.807-0400 d20020| 2019-07-25T18:25:27.807-0400 I COMMAND [conn19] CMD: validate admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.808-0400 d20020| 2019-07-25T18:25:27.808-0400 I INDEX [conn19] validating collection admin.system.version (UUID: bc985b18-dc55-4ec5-9923-34d0a77fadef)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.808-0400 d20020| 2019-07-25T18:25:27.808-0400 W STORAGE [conn19] Could not complete validation of table:collection-13-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.808-0400 d20020| 2019-07-25T18:25:27.808-0400 I INDEX [conn19] validating index _id_ on collection admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.808-0400 d20020| 2019-07-25T18:25:27.808-0400 W STORAGE [conn19] Could not complete validation of table:index-14-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.809-0400 d20020| 2019-07-25T18:25:27.809-0400 I INDEX [conn19] Validation complete for collection admin.system.version (UUID: bc985b18-dc55-4ec5-9923-34d0a77fadef). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.837-0400 d20020| 2019-07-25T18:25:27.837-0400 I COMMAND [conn19] CMD: validate config.cache.chunks.config.system.sessions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.838-0400 d20020| 2019-07-25T18:25:27.838-0400 I INDEX [conn19] validating collection config.cache.chunks.config.system.sessions (UUID: af7277a7-a3e5-406d-ac28-4ce6862bbaa0)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.838-0400 d20020| 2019-07-25T18:25:27.838-0400 W STORAGE [conn19] Could not complete validation of table:collection-26-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.839-0400 d20020| 2019-07-25T18:25:27.838-0400 I INDEX [conn19] validating index _id_ on collection config.cache.chunks.config.system.sessions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.839-0400 d20020| 2019-07-25T18:25:27.839-0400 W STORAGE [conn19] Could not complete validation of table:index-28-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.839-0400 d20020| 2019-07-25T18:25:27.839-0400 I INDEX [conn19] validating index lastmod_1 on collection config.cache.chunks.config.system.sessions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.839-0400 d20020| 2019-07-25T18:25:27.839-0400 W STORAGE [conn19] Could not complete validation of table:index-29-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.840-0400 d20020| 2019-07-25T18:25:27.840-0400 I INDEX [conn19] Validation complete for collection config.cache.chunks.config.system.sessions (UUID: af7277a7-a3e5-406d-ac28-4ce6862bbaa0). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.851-0400 d20020| 2019-07-25T18:25:27.851-0400 I COMMAND [conn19] CMD: validate config.cache.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.852-0400 d20020| 2019-07-25T18:25:27.852-0400 I INDEX [conn19] validating collection config.cache.collections (UUID: aaeeb449-06a2-456c-88ad-c9f05fde12f6)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.852-0400 d20020| 2019-07-25T18:25:27.852-0400 W STORAGE [conn19] Could not complete validation of table:collection-22-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.853-0400 d20020| 2019-07-25T18:25:27.852-0400 I INDEX [conn19] validating index _id_ on collection config.cache.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.853-0400 d20020| 2019-07-25T18:25:27.853-0400 W STORAGE [conn19] Could not complete validation of table:index-24-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.854-0400 d20020| 2019-07-25T18:25:27.853-0400 I INDEX [conn19] Validation complete for collection config.cache.collections (UUID: aaeeb449-06a2-456c-88ad-c9f05fde12f6). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.861-0400 d20020| 2019-07-25T18:25:27.861-0400 I COMMAND [conn19] CMD: validate config.cache.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.862-0400 d20020| 2019-07-25T18:25:27.862-0400 I INDEX [conn19] validating collection config.cache.databases (UUID: 245bb089-d70e-4fce-aac4-3f2cb83a2712)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.862-0400 d20020| 2019-07-25T18:25:27.862-0400 W STORAGE [conn19] Could not complete validation of table:collection-21-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.862-0400 d20020| 2019-07-25T18:25:27.862-0400 I INDEX [conn19] validating index _id_ on collection config.cache.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.862-0400 d20020| 2019-07-25T18:25:27.862-0400 W STORAGE [conn19] Could not complete validation of table:index-23-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.863-0400 d20020| 2019-07-25T18:25:27.863-0400 I INDEX [conn19] Validation complete for collection config.cache.databases (UUID: 245bb089-d70e-4fce-aac4-3f2cb83a2712). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.875-0400 d20020| 2019-07-25T18:25:27.875-0400 I COMMAND [conn19] CMD: validate config.system.sessions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.876-0400 d20020| 2019-07-25T18:25:27.876-0400 I INDEX [conn19] validating collection config.system.sessions (UUID: 169b4ca7-9147-452d-b8b8-2496698e9e94)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.876-0400 d20020| 2019-07-25T18:25:27.876-0400 W STORAGE [conn19] Could not complete validation of table:collection-19-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.876-0400 d20020| 2019-07-25T18:25:27.876-0400 I INDEX [conn19] validating index _id_ on collection config.system.sessions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.876-0400 d20020| 2019-07-25T18:25:27.876-0400 W STORAGE [conn19] Could not complete validation of table:index-20-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.876-0400 d20020| 2019-07-25T18:25:27.876-0400 I INDEX [conn19] validating index lsidTTLIndex on collection config.system.sessions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.876-0400 d20020| 2019-07-25T18:25:27.876-0400 W STORAGE [conn19] Could not complete validation of table:index-25-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.877-0400 d20020| 2019-07-25T18:25:27.877-0400 I INDEX [conn19] Validation complete for collection config.system.sessions (UUID: 169b4ca7-9147-452d-b8b8-2496698e9e94). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.885-0400 d20020| 2019-07-25T18:25:27.885-0400 I COMMAND [conn19] CMD: validate config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.886-0400 d20020| 2019-07-25T18:25:27.886-0400 I INDEX [conn19] validating collection config.transactions (UUID: f9f26136-f846-44b6-bee2-23bf19357d18)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.887-0400 d20020| 2019-07-25T18:25:27.887-0400 I INDEX [conn19] validating index _id_ on collection config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.888-0400 d20020| 2019-07-25T18:25:27.888-0400 I INDEX [conn19] Validation complete for collection config.transactions (UUID: f9f26136-f846-44b6-bee2-23bf19357d18). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.923-0400 d20020| 2019-07-25T18:25:27.923-0400 I COMMAND [conn19] CMD: validate local.oplog.rs
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.924-0400 d20020| 2019-07-25T18:25:27.924-0400 I INDEX [conn19] validating collection local.oplog.rs (UUID: 08673422-206a-444f-b25a-f5d97d203ce4)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.924-0400 d20020| 2019-07-25T18:25:27.924-0400 W STORAGE [conn19] Could not complete validation of table:collection-10-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.925-0400 d20020| 2019-07-25T18:25:27.925-0400 I INDEX [conn19] Validation complete for collection local.oplog.rs (UUID: 08673422-206a-444f-b25a-f5d97d203ce4). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.933-0400 d20020| 2019-07-25T18:25:27.933-0400 I COMMAND [conn19] CMD: validate local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.933-0400 d20020| 2019-07-25T18:25:27.933-0400 I INDEX [conn19] validating collection local.replset.election (UUID: 9ccc327f-c6ab-4ca2-b7fa-6a714cac8cf7)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.935-0400 d20020| 2019-07-25T18:25:27.934-0400 I INDEX [conn19] validating index _id_ on collection local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.936-0400 d20020| 2019-07-25T18:25:27.936-0400 I INDEX [conn19] Validation complete for collection local.replset.election (UUID: 9ccc327f-c6ab-4ca2-b7fa-6a714cac8cf7). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.944-0400 d20020| 2019-07-25T18:25:27.944-0400 I COMMAND [conn19] CMD: validate local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.945-0400 d20020| 2019-07-25T18:25:27.945-0400 I INDEX [conn19] validating collection local.replset.minvalid (UUID: e8263b28-ca9a-4c97-941d-fb2dc499603f)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.945-0400 d20020| 2019-07-25T18:25:27.945-0400 W STORAGE [conn19] Could not complete validation of table:collection-4-4192590575378879396. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.945-0400 d20020| 2019-07-25T18:25:27.945-0400 I INDEX [conn19] validating index _id_ on collection local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.947-0400 d20020| 2019-07-25T18:25:27.947-0400 I INDEX [conn19] Validation complete for collection local.replset.minvalid (UUID: e8263b28-ca9a-4c97-941d-fb2dc499603f). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.958-0400 d20020| 2019-07-25T18:25:27.958-0400 I COMMAND [conn19] CMD: validate local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.959-0400 d20020| 2019-07-25T18:25:27.959-0400 I INDEX [conn19] validating collection local.replset.oplogTruncateAfterPoint (UUID: 5ca27ba1-8f6a-47f5-9625-a3de4098e12f)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.960-0400 d20020| 2019-07-25T18:25:27.960-0400 I INDEX [conn19] validating index _id_ on collection local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.990-0400 d20020| 2019-07-25T18:25:27.990-0400 I INDEX [conn19] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 5ca27ba1-8f6a-47f5-9625-a3de4098e12f). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.998-0400 d20020| 2019-07-25T18:25:27.998-0400 I COMMAND [conn19] CMD: validate local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:25:27.999-0400 d20020| 2019-07-25T18:25:27.999-0400 I INDEX [conn19] validating collection local.startup_log (UUID: 853d0bbb-8ea9-4c98-8372-510603570068)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.000-0400 d20020| 2019-07-25T18:25:28.000-0400 I INDEX [conn19] validating index _id_ on collection local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.013-0400 d20020| 2019-07-25T18:25:28.013-0400 I INDEX [conn19] Validation complete for collection local.startup_log (UUID: 853d0bbb-8ea9-4c98-8372-510603570068). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.023-0400 d20020| 2019-07-25T18:25:28.022-0400 I COMMAND [conn19] CMD: validate local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.023-0400 d20020| 2019-07-25T18:25:28.023-0400 I INDEX [conn19] validating collection local.system.replset (UUID: e035716a-4d5c-43a4-9f25-4a1cadba3ea8)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.025-0400 d20020| 2019-07-25T18:25:28.025-0400 I INDEX [conn19] validating index _id_ on collection local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.027-0400 d20020| 2019-07-25T18:25:28.027-0400 I INDEX [conn19] Validation complete for collection local.system.replset (UUID: e035716a-4d5c-43a4-9f25-4a1cadba3ea8). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.037-0400 d20020| 2019-07-25T18:25:28.037-0400 I COMMAND [conn19] CMD: validate local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.038-0400 d20020| 2019-07-25T18:25:28.038-0400 I INDEX [conn19] validating collection local.system.rollback.id (UUID: b624fc86-aa3a-4115-8ae9-6fa3be0dbf2c)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.039-0400 d20020| 2019-07-25T18:25:28.039-0400 I INDEX [conn19] validating index _id_ on collection local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.041-0400 d20020| 2019-07-25T18:25:28.041-0400 I INDEX [conn19] Validation complete for collection local.system.rollback.id (UUID: b624fc86-aa3a-4115-8ae9-6fa3be0dbf2c). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.044-0400 d20020| 2019-07-25T18:25:28.043-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated: 15), will terminate after current cmd ends
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.044-0400 d20020| 2019-07-25T18:25:28.044-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.044-0400 d20020| 2019-07-25T18:25:28.044-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20020.sock
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.044-0400 d20020| 2019-07-25T18:25:28.044-0400 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.044-0400 d20020| 2019-07-25T18:25:28.044-0400 I REPL [signalProcessingThread] shutting down replication subsystems
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.044-0400 d20020| 2019-07-25T18:25:28.044-0400 I REPL [signalProcessingThread] Stopping replication reporter thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.044-0400 d20020| 2019-07-25T18:25:28.044-0400 I REPL [signalProcessingThread] Stopping replication fetcher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.045-0400 d20020| 2019-07-25T18:25:28.044-0400 I REPL [signalProcessingThread] Stopping replication applier thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.045-0400 d20020| 2019-07-25T18:25:28.045-0400 I REPL [rsSync-0] Finished oplog application
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.928-0400 d20020| 2019-07-25T18:25:28.927-0400 I REPL [rsBackgroundSync] Stopping replication producer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.928-0400 d20020| 2019-07-25T18:25:28.927-0400 I REPL [signalProcessingThread] Stopping replication storage threads
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.928-0400 d20020| 2019-07-25T18:25:28.928-0400 I ASIO [RS] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.929-0400 d20020| 2019-07-25T18:25:28.929-0400 I ASIO [RS] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.931-0400 d20020| 2019-07-25T18:25:28.931-0400 I ASIO [Replication] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.931-0400 d20020| 2019-07-25T18:25:28.931-0400 I ASIO [ShardRegistry] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.931-0400 d20020| 2019-07-25T18:25:28.931-0400 I CONNPOOL [ShardRegistry] Dropping all pooled connections to Jasons-MacBook-Pro.local:20021 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.932-0400 d20020| 2019-07-25T18:25:28.931-0400 I CONNPOOL [ShardRegistry] Dropping all pooled connections to Jasons-MacBook-Pro.local:20022 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.932-0400 d20020| 2019-07-25T18:25:28.931-0400 I CONNPOOL [ShardRegistry] Dropping all pooled connections to Jasons-MacBook-Pro.local:20023 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.932-0400 c20021| 2019-07-25T18:25:28.932-0400 I NETWORK [conn43] end connection 127.0.0.1:49527 (10 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.932-0400 c20021| 2019-07-25T18:25:28.932-0400 I NETWORK [conn42] end connection 127.0.0.1:49526 (9 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.932-0400 c20022| 2019-07-25T18:25:28.932-0400 I NETWORK [conn28] end connection 127.0.0.1:49528 (9 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.933-0400 c20023| 2019-07-25T18:25:28.932-0400 I NETWORK [conn28] end connection 127.0.0.1:49525 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.933-0400 d20020| 2019-07-25T18:25:28.932-0400 I ASIO [TaskExecutorPool-0] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.933-0400 c20023| 2019-07-25T18:25:28.932-0400 I NETWORK [conn27] end connection 127.0.0.1:49524 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.933-0400 d20020| 2019-07-25T18:25:28.933-0400 W SHARDING [signalProcessingThread] error encountered while cleaning up distributed ping entry for Jasons-MacBook-Pro.local:20020:1564093511:1668631399416606862 :: caused by :: ShutdownInProgress: Shutdown in progress
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.933-0400 d20020| 2019-07-25T18:25:28.933-0400 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.933-0400 d20020| 2019-07-25T18:25:28.933-0400 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.933-0400 d20020| 2019-07-25T18:25:28.933-0400 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.934-0400 c20022| 2019-07-25T18:25:28.934-0400 I NETWORK [conn27] end connection 127.0.0.1:49522 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.934-0400 c20023| 2019-07-25T18:25:28.934-0400 I NETWORK [conn26] end connection 127.0.0.1:49521 (6 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.934-0400 c20021| 2019-07-25T18:25:28.934-0400 I NETWORK [conn41] end connection 127.0.0.1:49523 (8 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.934-0400 d20020| 2019-07-25T18:25:28.934-0400 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.937-0400 d20020| 2019-07-25T18:25:28.937-0400 I STORAGE [signalProcessingThread] Deregistering all the collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.937-0400 d20020| 2019-07-25T18:25:28.937-0400 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.937-0400 d20020| 2019-07-25T18:25:28.937-0400 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.938-0400 d20020| 2019-07-25T18:25:28.938-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.950-0400 d20020| 2019-07-25T18:25:28.950-0400 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.950-0400 d20020| 2019-07-25T18:25:28.950-0400 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:28.950-0400 d20020| 2019-07-25T18:25:28.950-0400 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.034-0400 d20020| 2019-07-25T18:25:29.033-0400 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.034-0400 d20020| 2019-07-25T18:25:29.033-0400 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.034-0400 d20020| 2019-07-25T18:25:29.034-0400 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.374-0400 d20020| 2019-07-25T18:25:29.374-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.377-0400 d20020| 2019-07-25T18:25:29.377-0400 I CONTROL [signalProcessingThread] now exiting
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.377-0400 d20020| 2019-07-25T18:25:29.377-0400 I CONTROL [signalProcessingThread] shutting down with code:0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.395-0400 2019-07-25T18:25:29.394-0400 I - [js] shell: stopped mongo program on port 20020
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.395-0400 ReplSetTest stop *** Mongod in port 20020 shutdown with code (0) ***
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.395-0400 ReplSetTest stopSet deleting all dbpaths
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.402-0400 2019-07-25T18:25:29.401-0400 I NETWORK [js] Removed ReplicaSetMonitor for replica set configsvr_failover_repro-rs0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.402-0400 ReplSetTest stopSet *** Shut down repl set - test worked ****
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.649-0400 c20023| 2019-07-25T18:25:29.649-0400 I SHARDING [repl-writer-worker-8] Marking collection config.migrations as collection version:
[js_test:configsvr_failover_repro] 2019-07-25T18:25:29.688-0400 c20022| 2019-07-25T18:25:29.688-0400 I COMMAND [conn1] CMD fsync: sync:1 lock:1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.281-0400 c20022| 2019-07-25T18:25:30.281-0400 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.282-0400 c20022| 2019-07-25T18:25:30.281-0400 I COMMAND [conn1] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.282-0400 c20022| 2019-07-25T18:25:30.281-0400 I COMMAND [conn1] Lock count is 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.282-0400 c20022| 2019-07-25T18:25:30.281-0400 I COMMAND [conn1] For more info see http://dochub.mongodb.org/core/fsynccommand
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.282-0400 c20022| 2019-07-25T18:25:30.282-0400 I COMMAND [conn1] command admin.$cmd appName: "MongoDB Shell" command: fsync { fsync: 1.0, lock: 1.0, allowFsyncFailure: true, lsid: { id: UUID("44321189-0df7-4024-bb83-362f10fea9c6") }, $clusterTime: { clusterTime: Timestamp(1564093529, 14), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:416 locks:{ Mutex: { acquireCount: { W: 1 } } } protocol:op_msg 593ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.287-0400 ReplSetTest awaitReplication: going to check only Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.386-0400 ReplSetTest awaitReplication: starting: optime for primary, Jasons-MacBook-Pro.local:20022, is { "ts" : Timestamp(1564093529, 14), "t" : NumberLong(2) }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.388-0400 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1564093529, 14), "t" : NumberLong(2) }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.411-0400 ReplSetTest awaitReplication: checking secondary #0: Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.429-0400 ReplSetTest awaitReplication: secondary #0, Jasons-MacBook-Pro.local:20021, is synced
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.456-0400 ReplSetTest awaitReplication: checking secondary #1: Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.474-0400 ReplSetTest awaitReplication: secondary #1, Jasons-MacBook-Pro.local:20023, is synced
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.476-0400 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1564093529, 14), "t" : NumberLong(2) }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.576-0400 c20022| 2019-07-25T18:25:30.576-0400 I COMMAND [conn1] command: unlock requested
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.576-0400 c20022| 2019-07-25T18:25:30.576-0400 I COMMAND [conn1] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.821-0400 c20022| 2019-07-25T18:25:30.821-0400 I COMMAND [conn1] CMD fsync: sync:1 lock:1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.998-0400 c20022| 2019-07-25T18:25:30.998-0400 W COMMAND [fsyncLockWorker] WARNING: instance is locked, blocking all writes. The fsync command has finished execution, remember to unlock the instance using fsyncUnlock().
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.998-0400 c20022| 2019-07-25T18:25:30.998-0400 I COMMAND [conn1] mongod is locked and no writes are allowed. db.fsyncUnlock() to unlock
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.998-0400 c20022| 2019-07-25T18:25:30.998-0400 I COMMAND [conn1] Lock count is 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.999-0400 c20022| 2019-07-25T18:25:30.998-0400 I COMMAND [conn1] For more info see http://dochub.mongodb.org/core/fsynccommand
[js_test:configsvr_failover_repro] 2019-07-25T18:25:30.999-0400 c20022| 2019-07-25T18:25:30.999-0400 I COMMAND [conn1] command admin.$cmd appName: "MongoDB Shell" command: fsync { fsync: 1.0, lock: 1.0, allowFsyncFailure: true, lsid: { id: UUID("44321189-0df7-4024-bb83-362f10fea9c6") }, $clusterTime: { clusterTime: Timestamp(1564093530, 14), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, $readPreference: { mode: "secondaryPreferred" }, $db: "admin" } numYields:0 reslen:416 locks:{ Mutex: { acquireCount: { W: 1 } } } protocol:op_msg 177ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:31.001-0400 ReplSetTest awaitReplication: going to check only Jasons-MacBook-Pro.local:20021,Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:31.105-0400 ReplSetTest awaitReplication: starting: optime for primary, Jasons-MacBook-Pro.local:20022, is { "ts" : Timestamp(1564093530, 14), "t" : NumberLong(2) }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:31.106-0400 ReplSetTest awaitReplication: checking secondaries against latest primary optime { "ts" : Timestamp(1564093530, 14), "t" : NumberLong(2) }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:31.130-0400 ReplSetTest awaitReplication: checking secondary #0: Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:31.146-0400 ReplSetTest awaitReplication: secondary #0, Jasons-MacBook-Pro.local:20021, is synced
[js_test:configsvr_failover_repro] 2019-07-25T18:25:31.174-0400 ReplSetTest awaitReplication: checking secondary #1: Jasons-MacBook-Pro.local:20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:31.189-0400 ReplSetTest awaitReplication: secondary #1, Jasons-MacBook-Pro.local:20023, is synced
[js_test:configsvr_failover_repro] 2019-07-25T18:25:31.190-0400 ReplSetTest awaitReplication: finished: all 2 secondaries synced at optime { "ts" : Timestamp(1564093530, 14), "t" : NumberLong(2) }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.622-0400 c20022| 2019-07-25T18:25:32.622-0400 I COMMAND [conn1] command: unlock requested
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.622-0400 c20022| 2019-07-25T18:25:32.622-0400 I COMMAND [conn1] fsyncUnlock completed. mongod is now unlocked and free to accept writes
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.655-0400 c20021| 2019-07-25T18:25:32.655-0400 I COMMAND [conn1] successfully set parameter waitForStepDownOnNonCommandShutdown to false (was true)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.664-0400 c20022| 2019-07-25T18:25:32.664-0400 I COMMAND [conn1] successfully set parameter waitForStepDownOnNonCommandShutdown to false (was true)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.676-0400 c20023| 2019-07-25T18:25:32.676-0400 I COMMAND [conn1] successfully set parameter waitForStepDownOnNonCommandShutdown to false (was true)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.677-0400 ReplSetTest stop *** Shutting down mongod in port 20021 ***
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.680-0400 c20021| 2019-07-25T18:25:32.680-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49545 #49 (9 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.680-0400 c20021| 2019-07-25T18:25:32.680-0400 I NETWORK [conn49] received client metadata from 127.0.0.1:49545 conn49: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.686-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.686-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.686-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.686-0400 [jsTest] New session started with sessionID: { "id" : UUID("30e31a33-c594-4e2b-acaa-21f3cf31e12f") } and options: { "causalConsistency" : false }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.686-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.686-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.686-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.705-0400 c20021| 2019-07-25T18:25:32.704-0400 I COMMAND [conn49] Attempting to step down in response to replSetStepDown command
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.718-0400 c20021| 2019-07-25T18:25:32.718-0400 I REPL [conn49] 'freezing' for 86400 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.772-0400 c20021| 2019-07-25T18:25:32.772-0400 I COMMAND [conn49] CMD: validate admin.system.keys
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.772-0400 c20021| 2019-07-25T18:25:32.772-0400 I INDEX [conn49] validating collection admin.system.keys (UUID: 7d5bfd11-2f9e-43fa-b296-05f895e4aea7)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.773-0400 c20021| 2019-07-25T18:25:32.772-0400 W STORAGE [conn49] Could not complete validation of table:collection-57-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.773-0400 c20021| 2019-07-25T18:25:32.773-0400 I INDEX [conn49] validating index _id_ on collection admin.system.keys
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.773-0400 c20021| 2019-07-25T18:25:32.773-0400 W STORAGE [conn49] Could not complete validation of table:index-58-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.774-0400 c20021| 2019-07-25T18:25:32.774-0400 I INDEX [conn49] Validation complete for collection admin.system.keys (UUID: 7d5bfd11-2f9e-43fa-b296-05f895e4aea7). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.781-0400 c20021| 2019-07-25T18:25:32.781-0400 I COMMAND [conn49] CMD: validate admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.782-0400 c20021| 2019-07-25T18:25:32.782-0400 I INDEX [conn49] validating collection admin.system.version (UUID: 9eb89103-fb3d-4038-bb54-c402876ca16e)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.783-0400 c20021| 2019-07-25T18:25:32.783-0400 I INDEX [conn49] validating index _id_ on collection admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.785-0400 c20021| 2019-07-25T18:25:32.785-0400 I INDEX [conn49] Validation complete for collection admin.system.version (UUID: 9eb89103-fb3d-4038-bb54-c402876ca16e). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.818-0400 c20021| 2019-07-25T18:25:32.818-0400 I COMMAND [conn49] CMD: validate config.actionlog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.818-0400 c20021| 2019-07-25T18:25:32.818-0400 I INDEX [conn49] validating collection config.actionlog (UUID: bb55f986-13a3-489d-bc35-a22a32b44c10)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.819-0400 c20021| 2019-07-25T18:25:32.818-0400 W STORAGE [conn49] Could not complete validation of table:collection-61-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.819-0400 c20021| 2019-07-25T18:25:32.819-0400 I INDEX [conn49] validating index _id_ on collection config.actionlog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.819-0400 c20021| 2019-07-25T18:25:32.819-0400 W STORAGE [conn49] Could not complete validation of table:index-62-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.820-0400 c20021| 2019-07-25T18:25:32.820-0400 I INDEX [conn49] Validation complete for collection config.actionlog (UUID: bb55f986-13a3-489d-bc35-a22a32b44c10). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.827-0400 c20021| 2019-07-25T18:25:32.827-0400 I COMMAND [conn49] CMD: validate config.changelog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.828-0400 c20021| 2019-07-25T18:25:32.828-0400 I INDEX [conn49] validating collection config.changelog (UUID: b00cc6c9-f585-4cd8-9cf1-362a83e2e9df)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.828-0400 c20021| 2019-07-25T18:25:32.828-0400 W STORAGE [conn49] Could not complete validation of table:collection-63-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.828-0400 c20021| 2019-07-25T18:25:32.828-0400 I INDEX [conn49] validating index _id_ on collection config.changelog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.828-0400 c20021| 2019-07-25T18:25:32.828-0400 W STORAGE [conn49] Could not complete validation of table:index-64-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.829-0400 c20021| 2019-07-25T18:25:32.829-0400 I INDEX [conn49] Validation complete for collection config.changelog (UUID: b00cc6c9-f585-4cd8-9cf1-362a83e2e9df). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.837-0400 c20021| 2019-07-25T18:25:32.837-0400 I COMMAND [conn49] CMD: validate config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.838-0400 c20021| 2019-07-25T18:25:32.838-0400 I INDEX [conn49] validating collection config.chunks (UUID: 63c02d1c-5493-42cd-9595-17fe7298418c)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.838-0400 c20021| 2019-07-25T18:25:32.838-0400 W STORAGE [conn49] Could not complete validation of table:collection-17-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.838-0400 c20021| 2019-07-25T18:25:32.838-0400 I INDEX [conn49] validating index _id_ on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.839-0400 c20021| 2019-07-25T18:25:32.838-0400 W STORAGE [conn49] Could not complete validation of table:index-18-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.839-0400 c20021| 2019-07-25T18:25:32.839-0400 I INDEX [conn49] validating index ns_1_min_1 on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.839-0400 c20021| 2019-07-25T18:25:32.839-0400 W STORAGE [conn49] Could not complete validation of table:index-19-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.839-0400 c20021| 2019-07-25T18:25:32.839-0400 I INDEX [conn49] validating index ns_1_shard_1_min_1 on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.839-0400 c20021| 2019-07-25T18:25:32.839-0400 W STORAGE [conn49] Could not complete validation of table:index-22-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.839-0400 c20021| 2019-07-25T18:25:32.839-0400 I INDEX [conn49] validating index ns_1_lastmod_1 on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.839-0400 c20021| 2019-07-25T18:25:32.839-0400 W STORAGE [conn49] Could not complete validation of table:index-25-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.840-0400 c20021| 2019-07-25T18:25:32.840-0400 I INDEX [conn49] Validation complete for collection config.chunks (UUID: 63c02d1c-5493-42cd-9595-17fe7298418c). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.848-0400 c20021| 2019-07-25T18:25:32.848-0400 I COMMAND [conn49] CMD: validate config.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.849-0400 c20021| 2019-07-25T18:25:32.849-0400 I INDEX [conn49] validating collection config.collections (UUID: c91bd94c-858a-4b52-a9a4-ed241d46bb6b)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.849-0400 c20021| 2019-07-25T18:25:32.849-0400 W STORAGE [conn49] Could not complete validation of table:collection-65-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.849-0400 c20021| 2019-07-25T18:25:32.849-0400 I INDEX [conn49] validating index _id_ on collection config.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.849-0400 c20021| 2019-07-25T18:25:32.849-0400 W STORAGE [conn49] Could not complete validation of table:index-66-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.850-0400 c20021| 2019-07-25T18:25:32.850-0400 I INDEX [conn49] Validation complete for collection config.collections (UUID: c91bd94c-858a-4b52-a9a4-ed241d46bb6b). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.862-0400 c20021| 2019-07-25T18:25:32.861-0400 I COMMAND [conn49] CMD: validate config.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.862-0400 c20021| 2019-07-25T18:25:32.862-0400 I INDEX [conn49] validating collection config.databases (UUID: 01649270-e43f-438a-ad71-36bd6eeffe6b)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.862-0400 c20021| 2019-07-25T18:25:32.862-0400 W STORAGE [conn49] Could not complete validation of table:collection-67-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.863-0400 c20021| 2019-07-25T18:25:32.862-0400 I INDEX [conn49] validating index _id_ on collection config.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.863-0400 c20021| 2019-07-25T18:25:32.863-0400 W STORAGE [conn49] Could not complete validation of table:index-68-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.864-0400 c20021| 2019-07-25T18:25:32.863-0400 I INDEX [conn49] Validation complete for collection config.databases (UUID: 01649270-e43f-438a-ad71-36bd6eeffe6b). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.871-0400 c20021| 2019-07-25T18:25:32.871-0400 I COMMAND [conn49] CMD: validate config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.872-0400 c20021| 2019-07-25T18:25:32.872-0400 I INDEX [conn49] validating collection config.lockpings (UUID: dd0672e8-19c6-432b-9b6a-d21b02c0bf6e)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.872-0400 c20021| 2019-07-25T18:25:32.872-0400 W STORAGE [conn49] Could not complete validation of table:collection-44-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.872-0400 c20021| 2019-07-25T18:25:32.872-0400 I INDEX [conn49] validating index _id_ on collection config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.873-0400 c20021| 2019-07-25T18:25:32.872-0400 W STORAGE [conn49] Could not complete validation of table:index-45-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.873-0400 c20021| 2019-07-25T18:25:32.873-0400 I INDEX [conn49] validating index ping_1 on collection config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.873-0400 c20021| 2019-07-25T18:25:32.873-0400 W STORAGE [conn49] Could not complete validation of table:index-46-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.874-0400 c20021| 2019-07-25T18:25:32.874-0400 I INDEX [conn49] Validation complete for collection config.lockpings (UUID: dd0672e8-19c6-432b-9b6a-d21b02c0bf6e). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.885-0400 c20021| 2019-07-25T18:25:32.885-0400 I COMMAND [conn49] CMD: validate config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.886-0400 c20021| 2019-07-25T18:25:32.886-0400 I INDEX [conn49] validating collection config.locks (UUID: dd929b42-c13c-4682-8066-ef80c2666228)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.886-0400 c20021| 2019-07-25T18:25:32.886-0400 W STORAGE [conn49] Could not complete validation of table:collection-38-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.887-0400 c20021| 2019-07-25T18:25:32.887-0400 I INDEX [conn49] validating index _id_ on collection config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.887-0400 c20021| 2019-07-25T18:25:32.887-0400 W STORAGE [conn49] Could not complete validation of table:index-39-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.887-0400 c20021| 2019-07-25T18:25:32.887-0400 I INDEX [conn49] validating index ts_1 on collection config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.887-0400 c20021| 2019-07-25T18:25:32.887-0400 W STORAGE [conn49] Could not complete validation of table:index-40-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.887-0400 c20021| 2019-07-25T18:25:32.887-0400 I INDEX [conn49] validating index state_1_process_1 on collection config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.887-0400 c20021| 2019-07-25T18:25:32.887-0400 W STORAGE [conn49] Could not complete validation of table:index-42-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.888-0400 c20021| 2019-07-25T18:25:32.888-0400 I INDEX [conn49] Validation complete for collection config.locks (UUID: dd929b42-c13c-4682-8066-ef80c2666228). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.896-0400 c20021| 2019-07-25T18:25:32.896-0400 I COMMAND [conn49] CMD: validate config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.897-0400 c20021| 2019-07-25T18:25:32.897-0400 I INDEX [conn49] validating collection config.migrations (UUID: 91fc80cd-1974-4835-96e0-c0c276b056ee)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.898-0400 c20021| 2019-07-25T18:25:32.898-0400 I INDEX [conn49] validating index _id_ on collection config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.898-0400 c20021| 2019-07-25T18:25:32.898-0400 W STORAGE [conn49] Could not complete validation of table:index-29-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.898-0400 c20021| 2019-07-25T18:25:32.898-0400 I INDEX [conn49] validating index ns_1_min_1 on collection config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.900-0400 c20021| 2019-07-25T18:25:32.900-0400 I INDEX [conn49] Validation complete for collection config.migrations (UUID: 91fc80cd-1974-4835-96e0-c0c276b056ee). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.912-0400 c20021| 2019-07-25T18:25:32.912-0400 I COMMAND [conn49] CMD: validate config.mongos
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.913-0400 c20021| 2019-07-25T18:25:32.913-0400 I INDEX [conn49] validating collection config.mongos (UUID: 81e54234-9908-454a-817f-30651e5cf0b6)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.913-0400 c20021| 2019-07-25T18:25:32.913-0400 W STORAGE [conn49] Could not complete validation of table:collection-69-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.913-0400 c20021| 2019-07-25T18:25:32.913-0400 I INDEX [conn49] validating index _id_ on collection config.mongos
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.913-0400 c20021| 2019-07-25T18:25:32.913-0400 W STORAGE [conn49] Could not complete validation of table:index-70-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.914-0400 c20021| 2019-07-25T18:25:32.914-0400 I INDEX [conn49] Validation complete for collection config.mongos (UUID: 81e54234-9908-454a-817f-30651e5cf0b6). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.922-0400 c20021| 2019-07-25T18:25:32.922-0400 I COMMAND [conn49] CMD: validate config.settings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.923-0400 c20021| 2019-07-25T18:25:32.923-0400 I INDEX [conn49] validating collection config.settings (UUID: 5b34234e-9f2a-4dc0-a6ec-4c3c2c8d8c4a)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.923-0400 c20021| 2019-07-25T18:25:32.923-0400 W STORAGE [conn49] Could not complete validation of table:collection-59-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.923-0400 c20021| 2019-07-25T18:25:32.923-0400 I INDEX [conn49] validating index _id_ on collection config.settings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.923-0400 c20021| 2019-07-25T18:25:32.923-0400 W STORAGE [conn49] Could not complete validation of table:index-60-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.924-0400 c20021| 2019-07-25T18:25:32.924-0400 I INDEX [conn49] Validation complete for collection config.settings (UUID: 5b34234e-9f2a-4dc0-a6ec-4c3c2c8d8c4a). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.935-0400 c20021| 2019-07-25T18:25:32.935-0400 I COMMAND [conn49] CMD: validate config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.936-0400 c20021| 2019-07-25T18:25:32.936-0400 I INDEX [conn49] validating collection config.shards (UUID: 9dc58f2f-04de-441a-b6d7-36d58adac3fa)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.936-0400 c20021| 2019-07-25T18:25:32.936-0400 W STORAGE [conn49] Could not complete validation of table:collection-33-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.937-0400 c20021| 2019-07-25T18:25:32.936-0400 I INDEX [conn49] validating index _id_ on collection config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.937-0400 c20021| 2019-07-25T18:25:32.937-0400 W STORAGE [conn49] Could not complete validation of table:index-34-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.937-0400 c20021| 2019-07-25T18:25:32.937-0400 I INDEX [conn49] validating index host_1 on collection config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.937-0400 c20021| 2019-07-25T18:25:32.937-0400 W STORAGE [conn49] Could not complete validation of table:index-35-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.938-0400 c20021| 2019-07-25T18:25:32.938-0400 I INDEX [conn49] Validation complete for collection config.shards (UUID: 9dc58f2f-04de-441a-b6d7-36d58adac3fa). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.946-0400 c20021| 2019-07-25T18:25:32.946-0400 I COMMAND [conn49] CMD: validate config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.947-0400 c20021| 2019-07-25T18:25:32.947-0400 I INDEX [conn49] validating collection config.tags (UUID: f1867b25-f9fb-445f-8bca-c3b4a21b38ee)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.948-0400 c20021| 2019-07-25T18:25:32.948-0400 I INDEX [conn49] validating index _id_ on collection config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.948-0400 c20021| 2019-07-25T18:25:32.948-0400 W STORAGE [conn49] Could not complete validation of table:index-49-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.948-0400 c20021| 2019-07-25T18:25:32.948-0400 I INDEX [conn49] validating index ns_1_min_1 on collection config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.949-0400 c20021| 2019-07-25T18:25:32.949-0400 I INDEX [conn49] validating index ns_1_tag_1 on collection config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.951-0400 c20021| 2019-07-25T18:25:32.951-0400 I INDEX [conn49] Validation complete for collection config.tags (UUID: f1867b25-f9fb-445f-8bca-c3b4a21b38ee). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.962-0400 c20021| 2019-07-25T18:25:32.962-0400 I COMMAND [conn49] CMD: validate config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.963-0400 c20021| 2019-07-25T18:25:32.963-0400 I INDEX [conn49] validating collection config.transactions (UUID: 2ff387c1-0957-46b7-b825-992aba2ed063)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.963-0400 c20021| 2019-07-25T18:25:32.963-0400 W STORAGE [conn49] Could not complete validation of table:collection-15-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.963-0400 c20021| 2019-07-25T18:25:32.963-0400 I INDEX [conn49] validating index _id_ on collection config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.963-0400 c20021| 2019-07-25T18:25:32.963-0400 W STORAGE [conn49] Could not complete validation of table:index-16-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.964-0400 c20021| 2019-07-25T18:25:32.964-0400 I INDEX [conn49] Validation complete for collection config.transactions (UUID: 2ff387c1-0957-46b7-b825-992aba2ed063). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.972-0400 c20021| 2019-07-25T18:25:32.972-0400 I COMMAND [conn49] CMD: validate config.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.973-0400 c20021| 2019-07-25T18:25:32.973-0400 I INDEX [conn49] validating collection config.version (UUID: e2da88e1-afec-4a2a-9c9c-0b4b51073f63)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.973-0400 c20021| 2019-07-25T18:25:32.973-0400 W STORAGE [conn49] Could not complete validation of table:collection-55-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.973-0400 c20021| 2019-07-25T18:25:32.973-0400 I INDEX [conn49] validating index _id_ on collection config.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.973-0400 c20021| 2019-07-25T18:25:32.973-0400 W STORAGE [conn49] Could not complete validation of table:index-56-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:32.974-0400 c20021| 2019-07-25T18:25:32.974-0400 I INDEX [conn49] Validation complete for collection config.version (UUID: e2da88e1-afec-4a2a-9c9c-0b4b51073f63). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.010-0400 c20021| 2019-07-25T18:25:33.010-0400 I COMMAND [conn49] CMD: validate local.oplog.rs
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.011-0400 c20021| 2019-07-25T18:25:33.011-0400 I INDEX [conn49] validating collection local.oplog.rs (UUID: 4897c211-3daa-4388-9897-4f5bcb131a4b)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.013-0400 c20021| 2019-07-25T18:25:33.013-0400 W STORAGE [conn49] Could not complete validation of table:collection-10-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.015-0400 c20021| 2019-07-25T18:25:33.015-0400 I INDEX [conn49] Validation complete for collection local.oplog.rs (UUID: 4897c211-3daa-4388-9897-4f5bcb131a4b). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.025-0400 c20021| 2019-07-25T18:25:33.024-0400 I COMMAND [conn49] CMD: validate local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.026-0400 c20021| 2019-07-25T18:25:33.026-0400 I INDEX [conn49] validating collection local.replset.election (UUID: f6e92dc2-1307-4a6a-8605-87b9037fa27b)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.056-0400 c20021| 2019-07-25T18:25:33.055-0400 I INDEX [conn49] validating index _id_ on collection local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.058-0400 c20021| 2019-07-25T18:25:33.058-0400 I INDEX [conn49] Validation complete for collection local.replset.election (UUID: f6e92dc2-1307-4a6a-8605-87b9037fa27b). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.066-0400 c20021| 2019-07-25T18:25:33.066-0400 I COMMAND [conn49] CMD: validate local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.067-0400 c20021| 2019-07-25T18:25:33.067-0400 I INDEX [conn49] validating collection local.replset.minvalid (UUID: e35ccdb9-1979-4a0e-afcc-2ad0c80ef4a3)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.067-0400 c20021| 2019-07-25T18:25:33.067-0400 W STORAGE [conn49] Could not complete validation of table:collection-4-7559448855182571804. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.067-0400 c20021| 2019-07-25T18:25:33.067-0400 I INDEX [conn49] validating index _id_ on collection local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.070-0400 c20021| 2019-07-25T18:25:33.070-0400 I INDEX [conn49] Validation complete for collection local.replset.minvalid (UUID: e35ccdb9-1979-4a0e-afcc-2ad0c80ef4a3). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.080-0400 c20021| 2019-07-25T18:25:33.080-0400 I COMMAND [conn49] CMD: validate local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.080-0400 c20021| 2019-07-25T18:25:33.080-0400 I INDEX [conn49] validating collection local.replset.oplogTruncateAfterPoint (UUID: 96b5a123-08a3-4ab4-8639-2e342910c9d3)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.109-0400 c20021| 2019-07-25T18:25:33.108-0400 I INDEX [conn49] validating index _id_ on collection local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.135-0400 c20021| 2019-07-25T18:25:33.135-0400 I INDEX [conn49] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 96b5a123-08a3-4ab4-8639-2e342910c9d3). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.148-0400 c20021| 2019-07-25T18:25:33.148-0400 I COMMAND [conn49] CMD: validate local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.149-0400 c20021| 2019-07-25T18:25:33.149-0400 I INDEX [conn49] validating collection local.startup_log (UUID: 84420a9d-4c68-4563-849d-b250fd14e9f7)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.150-0400 c20021| 2019-07-25T18:25:33.150-0400 I INDEX [conn49] validating index _id_ on collection local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.153-0400 c20021| 2019-07-25T18:25:33.153-0400 I INDEX [conn49] Validation complete for collection local.startup_log (UUID: 84420a9d-4c68-4563-849d-b250fd14e9f7). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.163-0400 c20021| 2019-07-25T18:25:33.162-0400 I COMMAND [conn49] CMD: validate local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.165-0400 c20021| 2019-07-25T18:25:33.164-0400 I INDEX [conn49] validating collection local.system.replset (UUID: dd5952c4-3c13-4ad9-bf30-e86a595e836e)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.189-0400 c20021| 2019-07-25T18:25:33.189-0400 I INDEX [conn49] validating index _id_ on collection local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.191-0400 c20021| 2019-07-25T18:25:33.191-0400 I INDEX [conn49] Validation complete for collection local.system.replset (UUID: dd5952c4-3c13-4ad9-bf30-e86a595e836e). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.203-0400 c20021| 2019-07-25T18:25:33.203-0400 I COMMAND [conn49] CMD: validate local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.204-0400 c20021| 2019-07-25T18:25:33.204-0400 I INDEX [conn49] validating collection local.system.rollback.id (UUID: 0f09bfc1-1827-4436-b3ef-50bc9cb16e6c)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.205-0400 c20021| 2019-07-25T18:25:33.205-0400 I INDEX [conn49] validating index _id_ on collection local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.207-0400 c20021| 2019-07-25T18:25:33.207-0400 I INDEX [conn49] Validation complete for collection local.system.rollback.id (UUID: 0f09bfc1-1827-4436-b3ef-50bc9cb16e6c). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.209-0400 c20021| 2019-07-25T18:25:33.209-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated: 15), will terminate after current cmd ends
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.210-0400 c20021| 2019-07-25T18:25:33.210-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.210-0400 c20021| 2019-07-25T18:25:33.210-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20021.sock
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.210-0400 c20021| 2019-07-25T18:25:33.210-0400 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.210-0400 c20021| 2019-07-25T18:25:33.210-0400 I REPL [signalProcessingThread] shutting down replication subsystems
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.210-0400 c20021| 2019-07-25T18:25:33.210-0400 I REPL [signalProcessingThread] Stopping replication reporter thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.210-0400 c20021| 2019-07-25T18:25:33.210-0400 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to Jasons-MacBook-Pro.local:20022: CallbackCanceled: Reporter no longer valid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.211-0400 c20021| 2019-07-25T18:25:33.210-0400 I REPL [signalProcessingThread] Stopping replication fetcher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.211-0400 c20021| 2019-07-25T18:25:33.210-0400 I REPL [signalProcessingThread] Stopping replication applier thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.211-0400 c20021| 2019-07-25T18:25:33.211-0400 I REPL [rsBackgroundSync] Replication producer stopped after oplog fetcher finished returning a batch from our sync source. Abandoning this batch of oplog entries and re-evaluating our sync source.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.211-0400 c20021| 2019-07-25T18:25:33.211-0400 I REPL [rsBackgroundSync] Stopping replication producer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.211-0400 c20021| 2019-07-25T18:25:33.211-0400 I REPL [rsSync-0] Finished oplog application
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.211-0400 c20021| 2019-07-25T18:25:33.211-0400 I REPL [signalProcessingThread] Stopping replication storage threads
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.211-0400 c20021| 2019-07-25T18:25:33.211-0400 I ASIO [RS] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.211-0400 c20021| 2019-07-25T18:25:33.211-0400 I ASIO [RS] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.212-0400 c20021| 2019-07-25T18:25:33.212-0400 I CONNPOOL [RS] Dropping all pooled connections to Jasons-MacBook-Pro.local:20022 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.212-0400 c20022| 2019-07-25T18:25:33.212-0400 I NETWORK [conn32] end connection 127.0.0.1:49537 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.213-0400 c20021| 2019-07-25T18:25:33.213-0400 I ASIO [Replication] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.214-0400 c20021| 2019-07-25T18:25:33.213-0400 I CONNPOOL [Replication] Dropping all pooled connections to Jasons-MacBook-Pro.local:20023 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.214-0400 c20022| 2019-07-25T18:25:33.214-0400 I NETWORK [conn3] end connection 127.0.0.1:49471 (6 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.214-0400 c20023| 2019-07-25T18:25:33.214-0400 I NETWORK [conn3] end connection 127.0.0.1:49472 (5 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.214-0400 c20021| 2019-07-25T18:25:33.214-0400 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.214-0400 c20021| 2019-07-25T18:25:33.214-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to Jasons-MacBook-Pro.local:20020 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.214-0400 c20021| 2019-07-25T18:25:33.214-0400 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.215-0400 c20021| 2019-07-25T18:25:33.214-0400 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.215-0400 c20021| 2019-07-25T18:25:33.215-0400 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.217-0400 c20021| 2019-07-25T18:25:33.217-0400 I STORAGE [signalProcessingThread] Deregistering all the collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.218-0400 c20021| 2019-07-25T18:25:33.218-0400 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.218-0400 c20021| 2019-07-25T18:25:33.218-0400 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.218-0400 c20021| 2019-07-25T18:25:33.218-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.226-0400 c20021| 2019-07-25T18:25:33.226-0400 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.226-0400 c20021| 2019-07-25T18:25:33.226-0400 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.227-0400 c20021| 2019-07-25T18:25:33.226-0400 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.257-0400 c20021| 2019-07-25T18:25:33.257-0400 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.257-0400 c20021| 2019-07-25T18:25:33.257-0400 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.257-0400 c20021| 2019-07-25T18:25:33.257-0400 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.402-0400 c20023| 2019-07-25T18:25:33.402-0400 I REPL_HB [replexec-1] Heartbeat to Jasons-MacBook-Pro.local:20021 failed after 2 retries, response status: InterruptedAtShutdown: interrupted at shutdown
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.402-0400 c20023| 2019-07-25T18:25:33.402-0400 I REPL [replexec-1] Member Jasons-MacBook-Pro.local:20021 is now in state RS_DOWN - interrupted at shutdown
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.832-0400 c20021| 2019-07-25T18:25:33.832-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.835-0400 c20021| 2019-07-25T18:25:33.835-0400 I CONTROL [signalProcessingThread] now exiting
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.836-0400 c20021| 2019-07-25T18:25:33.835-0400 I CONTROL [signalProcessingThread] shutting down with code:0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.854-0400 2019-07-25T18:25:33.853-0400 I - [js] shell: stopped mongo program on port 20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.854-0400 ReplSetTest stop *** Mongod in port 20021 shutdown with code (0) ***
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.854-0400 ReplSetTest stop *** Shutting down mongod in port 20022 ***
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.858-0400 c20022| 2019-07-25T18:25:33.857-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49546 #38 (7 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.858-0400 c20022| 2019-07-25T18:25:33.858-0400 I NETWORK [conn38] received client metadata from 127.0.0.1:49546 conn38: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.865-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.866-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.866-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.866-0400 [jsTest] New session started with sessionID: { "id" : UUID("69441b01-eadb-45da-8892-0ef8c459fadd") } and options: { "causalConsistency" : false }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.866-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.866-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.866-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.882-0400 c20022| 2019-07-25T18:25:33.882-0400 I COMMAND [conn38] Attempting to step down in response to replSetStepDown command
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.882-0400 c20022| 2019-07-25T18:25:33.882-0400 I REPL [RstlKillOpThread] Starting to kill user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.883-0400 c20022| 2019-07-25T18:25:33.883-0400 I REPL [RstlKillOpThread] Stopped killing user operations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.883-0400 c20022| 2019-07-25T18:25:33.883-0400 I REPL [conn38] Stepping down from primary, stats: { userOpsKilled: 0, userOpsRunning: 2 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.883-0400 c20022| 2019-07-25T18:25:33.883-0400 I REPL [conn38] transition to SECONDARY from PRIMARY
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.883-0400 c20022| 2019-07-25T18:25:33.883-0400 I SHARDING [Balancer] CSRS balancer is now stopped
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.883-0400 c20022| 2019-07-25T18:25:33.883-0400 I COMMAND [conn38] replSetStepDown command completed
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.900-0400 c20022| 2019-07-25T18:25:33.900-0400 I REPL [conn38] 'freezing' for 86400 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.971-0400 c20022| 2019-07-25T18:25:33.970-0400 I COMMAND [conn38] CMD: validate admin.system.keys
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.971-0400 c20022| 2019-07-25T18:25:33.971-0400 I INDEX [conn38] validating collection admin.system.keys (UUID: 7d5bfd11-2f9e-43fa-b296-05f895e4aea7)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.972-0400 c20022| 2019-07-25T18:25:33.972-0400 I INDEX [conn38] validating index _id_ on collection admin.system.keys
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.974-0400 c20022| 2019-07-25T18:25:33.974-0400 I INDEX [conn38] Validation complete for collection admin.system.keys (UUID: 7d5bfd11-2f9e-43fa-b296-05f895e4aea7). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.982-0400 c20022| 2019-07-25T18:25:33.982-0400 I COMMAND [conn38] CMD: validate admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.983-0400 c20022| 2019-07-25T18:25:33.982-0400 I INDEX [conn38] validating collection admin.system.version (UUID: 9eb89103-fb3d-4038-bb54-c402876ca16e)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.984-0400 c20022| 2019-07-25T18:25:33.984-0400 I INDEX [conn38] validating index _id_ on collection admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:33.985-0400 c20022| 2019-07-25T18:25:33.985-0400 I INDEX [conn38] Validation complete for collection admin.system.version (UUID: 9eb89103-fb3d-4038-bb54-c402876ca16e). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.033-0400 c20022| 2019-07-25T18:25:34.033-0400 I COMMAND [conn38] CMD: validate config.actionlog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.034-0400 c20022| 2019-07-25T18:25:34.034-0400 I INDEX [conn38] validating collection config.actionlog (UUID: bb55f986-13a3-489d-bc35-a22a32b44c10)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.035-0400 c20022| 2019-07-25T18:25:34.035-0400 I INDEX [conn38] validating index _id_ on collection config.actionlog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.037-0400 c20022| 2019-07-25T18:25:34.037-0400 I INDEX [conn38] Validation complete for collection config.actionlog (UUID: bb55f986-13a3-489d-bc35-a22a32b44c10). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.044-0400 c20022| 2019-07-25T18:25:34.044-0400 I COMMAND [conn38] CMD: validate config.changelog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.045-0400 c20022| 2019-07-25T18:25:34.045-0400 I INDEX [conn38] validating collection config.changelog (UUID: b00cc6c9-f585-4cd8-9cf1-362a83e2e9df)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.045-0400 c20022| 2019-07-25T18:25:34.045-0400 W STORAGE [conn38] Could not complete validation of table:collection-87-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.046-0400 c20022| 2019-07-25T18:25:34.046-0400 I INDEX [conn38] validating index _id_ on collection config.changelog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.046-0400 c20022| 2019-07-25T18:25:34.046-0400 W STORAGE [conn38] Could not complete validation of table:index-88-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.047-0400 c20022| 2019-07-25T18:25:34.047-0400 I INDEX [conn38] Validation complete for collection config.changelog (UUID: b00cc6c9-f585-4cd8-9cf1-362a83e2e9df). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.055-0400 c20022| 2019-07-25T18:25:34.055-0400 I COMMAND [conn38] CMD: validate config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.056-0400 c20022| 2019-07-25T18:25:34.056-0400 I INDEX [conn38] validating collection config.chunks (UUID: 63c02d1c-5493-42cd-9595-17fe7298418c)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.056-0400 c20022| 2019-07-25T18:25:34.056-0400 W STORAGE [conn38] Could not complete validation of table:collection-29-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.056-0400 c20022| 2019-07-25T18:25:34.056-0400 I INDEX [conn38] validating index ns_1_min_1 on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.056-0400 c20022| 2019-07-25T18:25:34.056-0400 W STORAGE [conn38] Could not complete validation of table:index-30-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.057-0400 c20022| 2019-07-25T18:25:34.057-0400 I INDEX [conn38] validating index ns_1_shard_1_min_1 on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.057-0400 c20022| 2019-07-25T18:25:34.057-0400 W STORAGE [conn38] Could not complete validation of table:index-33-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.057-0400 c20022| 2019-07-25T18:25:34.057-0400 I INDEX [conn38] validating index ns_1_lastmod_1 on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.057-0400 c20022| 2019-07-25T18:25:34.057-0400 W STORAGE [conn38] Could not complete validation of table:index-36-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.057-0400 c20022| 2019-07-25T18:25:34.057-0400 I INDEX [conn38] validating index _id_ on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.057-0400 c20022| 2019-07-25T18:25:34.057-0400 W STORAGE [conn38] Could not complete validation of table:index-39-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.058-0400 c20022| 2019-07-25T18:25:34.058-0400 I INDEX [conn38] Validation complete for collection config.chunks (UUID: 63c02d1c-5493-42cd-9595-17fe7298418c). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.067-0400 c20022| 2019-07-25T18:25:34.067-0400 I COMMAND [conn38] CMD: validate config.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.068-0400 c20022| 2019-07-25T18:25:34.068-0400 I INDEX [conn38] validating collection config.collections (UUID: c91bd94c-858a-4b52-a9a4-ed241d46bb6b)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.068-0400 c20022| 2019-07-25T18:25:34.068-0400 W STORAGE [conn38] Could not complete validation of table:collection-89-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.068-0400 c20022| 2019-07-25T18:25:34.068-0400 I INDEX [conn38] validating index _id_ on collection config.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.070-0400 c20022| 2019-07-25T18:25:34.070-0400 I INDEX [conn38] Validation complete for collection config.collections (UUID: c91bd94c-858a-4b52-a9a4-ed241d46bb6b). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.080-0400 c20022| 2019-07-25T18:25:34.080-0400 I COMMAND [conn38] CMD: validate config.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.081-0400 c20022| 2019-07-25T18:25:34.081-0400 I INDEX [conn38] validating collection config.databases (UUID: 01649270-e43f-438a-ad71-36bd6eeffe6b)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.081-0400 c20022| 2019-07-25T18:25:34.081-0400 W STORAGE [conn38] Could not complete validation of table:collection-91-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.081-0400 c20022| 2019-07-25T18:25:34.081-0400 I INDEX [conn38] validating index _id_ on collection config.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.081-0400 c20022| 2019-07-25T18:25:34.081-0400 W STORAGE [conn38] Could not complete validation of table:index-92-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.082-0400 c20022| 2019-07-25T18:25:34.082-0400 I INDEX [conn38] Validation complete for collection config.databases (UUID: 01649270-e43f-438a-ad71-36bd6eeffe6b). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.093-0400 c20022| 2019-07-25T18:25:34.093-0400 I COMMAND [conn38] CMD: validate config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.093-0400 c20022| 2019-07-25T18:25:34.093-0400 I INDEX [conn38] validating collection config.lockpings (UUID: dd0672e8-19c6-432b-9b6a-d21b02c0bf6e)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.095-0400 c20022| 2019-07-25T18:25:34.094-0400 I INDEX [conn38] validating index ping_1 on collection config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.096-0400 c20022| 2019-07-25T18:25:34.095-0400 I INDEX [conn38] validating index _id_ on collection config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.097-0400 c20022| 2019-07-25T18:25:34.097-0400 I INDEX [conn38] Validation complete for collection config.lockpings (UUID: dd0672e8-19c6-432b-9b6a-d21b02c0bf6e). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.109-0400 c20022| 2019-07-25T18:25:34.109-0400 I COMMAND [conn38] CMD: validate config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.110-0400 c20022| 2019-07-25T18:25:34.110-0400 I INDEX [conn38] validating collection config.locks (UUID: dd929b42-c13c-4682-8066-ef80c2666228)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.111-0400 c20022| 2019-07-25T18:25:34.110-0400 W STORAGE [conn38] Could not complete validation of table:collection-62-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.111-0400 c20022| 2019-07-25T18:25:34.111-0400 I INDEX [conn38] validating index ts_1 on collection config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.111-0400 c20022| 2019-07-25T18:25:34.111-0400 W STORAGE [conn38] Could not complete validation of table:index-63-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.111-0400 c20022| 2019-07-25T18:25:34.111-0400 I INDEX [conn38] validating index state_1_process_1 on collection config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.111-0400 c20022| 2019-07-25T18:25:34.111-0400 W STORAGE [conn38] Could not complete validation of table:index-65-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.112-0400 c20022| 2019-07-25T18:25:34.111-0400 I INDEX [conn38] validating index _id_ on collection config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.113-0400 c20022| 2019-07-25T18:25:34.113-0400 I INDEX [conn38] Validation complete for collection config.locks (UUID: dd929b42-c13c-4682-8066-ef80c2666228). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.121-0400 c20022| 2019-07-25T18:25:34.121-0400 I COMMAND [conn38] CMD: validate config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.122-0400 c20022| 2019-07-25T18:25:34.122-0400 I INDEX [conn38] validating collection config.migrations (UUID: 91fc80cd-1974-4835-96e0-c0c276b056ee)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.123-0400 c20022| 2019-07-25T18:25:34.123-0400 I INDEX [conn38] validating index ns_1_min_1 on collection config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.124-0400 c20022| 2019-07-25T18:25:34.124-0400 I INDEX [conn38] validating index _id_ on collection config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.124-0400 c20022| 2019-07-25T18:25:34.124-0400 W STORAGE [conn38] Could not complete validation of table:index-46-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.125-0400 c20022| 2019-07-25T18:25:34.125-0400 I INDEX [conn38] Validation complete for collection config.migrations (UUID: 91fc80cd-1974-4835-96e0-c0c276b056ee). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.136-0400 c20022| 2019-07-25T18:25:34.136-0400 I COMMAND [conn38] CMD: validate config.mongos
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.145-0400 c20022| 2019-07-25T18:25:34.145-0400 I INDEX [conn38] validating collection config.mongos (UUID: 81e54234-9908-454a-817f-30651e5cf0b6)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.145-0400 c20022| 2019-07-25T18:25:34.145-0400 W STORAGE [conn38] Could not complete validation of table:collection-93-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.145-0400 c20022| 2019-07-25T18:25:34.145-0400 I INDEX [conn38] validating index _id_ on collection config.mongos
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.145-0400 c20022| 2019-07-25T18:25:34.145-0400 W STORAGE [conn38] Could not complete validation of table:index-94-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.147-0400 c20022| 2019-07-25T18:25:34.147-0400 I INDEX [conn38] Validation complete for collection config.mongos (UUID: 81e54234-9908-454a-817f-30651e5cf0b6). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.159-0400 c20022| 2019-07-25T18:25:34.159-0400 I COMMAND [conn38] CMD: validate config.settings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.159-0400 c20022| 2019-07-25T18:25:34.159-0400 I INDEX [conn38] validating collection config.settings (UUID: 5b34234e-9f2a-4dc0-a6ec-4c3c2c8d8c4a)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.161-0400 c20022| 2019-07-25T18:25:34.161-0400 I INDEX [conn38] validating index _id_ on collection config.settings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.163-0400 c20022| 2019-07-25T18:25:34.162-0400 I INDEX [conn38] Validation complete for collection config.settings (UUID: 5b34234e-9f2a-4dc0-a6ec-4c3c2c8d8c4a). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.171-0400 c20022| 2019-07-25T18:25:34.171-0400 I COMMAND [conn38] CMD: validate config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.171-0400 c20022| 2019-07-25T18:25:34.171-0400 I INDEX [conn38] validating collection config.shards (UUID: 9dc58f2f-04de-441a-b6d7-36d58adac3fa)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.173-0400 c20022| 2019-07-25T18:25:34.172-0400 I INDEX [conn38] validating index host_1 on collection config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.174-0400 c20022| 2019-07-25T18:25:34.174-0400 I INDEX [conn38] validating index _id_ on collection config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.178-0400 c20022| 2019-07-25T18:25:34.177-0400 I INDEX [conn38] Validation complete for collection config.shards (UUID: 9dc58f2f-04de-441a-b6d7-36d58adac3fa). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.186-0400 c20022| 2019-07-25T18:25:34.186-0400 I COMMAND [conn38] CMD: validate config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.191-0400 c20022| 2019-07-25T18:25:34.190-0400 I INDEX [conn38] validating collection config.tags (UUID: f1867b25-f9fb-445f-8bca-c3b4a21b38ee)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.192-0400 c20022| 2019-07-25T18:25:34.192-0400 I INDEX [conn38] validating index ns_1_min_1 on collection config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.193-0400 c20022| 2019-07-25T18:25:34.193-0400 I INDEX [conn38] validating index ns_1_tag_1 on collection config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.194-0400 c20022| 2019-07-25T18:25:34.194-0400 I INDEX [conn38] validating index _id_ on collection config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.194-0400 c20022| 2019-07-25T18:25:34.194-0400 W STORAGE [conn38] Could not complete validation of table:index-80-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.195-0400 c20022| 2019-07-25T18:25:34.195-0400 I INDEX [conn38] Validation complete for collection config.tags (UUID: f1867b25-f9fb-445f-8bca-c3b4a21b38ee). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.210-0400 c20022| 2019-07-25T18:25:34.210-0400 I COMMAND [conn38] CMD: validate config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.211-0400 c20022| 2019-07-25T18:25:34.211-0400 I INDEX [conn38] validating collection config.transactions (UUID: 2ff387c1-0957-46b7-b825-992aba2ed063)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.212-0400 c20022| 2019-07-25T18:25:34.212-0400 I INDEX [conn38] validating index _id_ on collection config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.214-0400 c20022| 2019-07-25T18:25:34.214-0400 I INDEX [conn38] Validation complete for collection config.transactions (UUID: 2ff387c1-0957-46b7-b825-992aba2ed063). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.224-0400 c20022| 2019-07-25T18:25:34.224-0400 I COMMAND [conn38] CMD: validate config.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.225-0400 c20022| 2019-07-25T18:25:34.225-0400 I INDEX [conn38] validating collection config.version (UUID: e2da88e1-afec-4a2a-9c9c-0b4b51073f63)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.227-0400 c20022| 2019-07-25T18:25:34.227-0400 I INDEX [conn38] validating index _id_ on collection config.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.229-0400 c20022| 2019-07-25T18:25:34.229-0400 I INDEX [conn38] Validation complete for collection config.version (UUID: e2da88e1-afec-4a2a-9c9c-0b4b51073f63). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.262-0400 c20022| 2019-07-25T18:25:34.261-0400 I COMMAND [conn38] CMD: validate local.oplog.rs
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.262-0400 c20022| 2019-07-25T18:25:34.262-0400 I INDEX [conn38] validating collection local.oplog.rs (UUID: 7f0ff0ec-c05e-440e-9aab-2bbf7acc65fd)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.262-0400 c20022| 2019-07-25T18:25:34.262-0400 W STORAGE [conn38] Could not complete validation of table:collection-16-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.264-0400 c20022| 2019-07-25T18:25:34.264-0400 I INDEX [conn38] Validation complete for collection local.oplog.rs (UUID: 7f0ff0ec-c05e-440e-9aab-2bbf7acc65fd). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.275-0400 c20022| 2019-07-25T18:25:34.275-0400 I COMMAND [conn38] CMD: validate local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.276-0400 c20022| 2019-07-25T18:25:34.276-0400 I INDEX [conn38] validating collection local.replset.election (UUID: 0a8ecaac-2b80-4d37-bf64-ea6fbd07a116)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.277-0400 c20022| 2019-07-25T18:25:34.277-0400 I INDEX [conn38] validating index _id_ on collection local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.279-0400 c20022| 2019-07-25T18:25:34.278-0400 I INDEX [conn38] Validation complete for collection local.replset.election (UUID: 0a8ecaac-2b80-4d37-bf64-ea6fbd07a116). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.287-0400 c20022| 2019-07-25T18:25:34.287-0400 I COMMAND [conn38] CMD: validate local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.288-0400 c20022| 2019-07-25T18:25:34.287-0400 I INDEX [conn38] validating collection local.replset.minvalid (UUID: 5a4f9fe6-2cc5-49a2-ac03-9e5410450c17)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.288-0400 c20022| 2019-07-25T18:25:34.288-0400 W STORAGE [conn38] Could not complete validation of table:collection-4-601854497284713564. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.288-0400 c20022| 2019-07-25T18:25:34.288-0400 I INDEX [conn38] validating index _id_ on collection local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.290-0400 c20022| 2019-07-25T18:25:34.290-0400 I INDEX [conn38] Validation complete for collection local.replset.minvalid (UUID: 5a4f9fe6-2cc5-49a2-ac03-9e5410450c17). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.302-0400 c20022| 2019-07-25T18:25:34.302-0400 I COMMAND [conn38] CMD: validate local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.303-0400 c20022| 2019-07-25T18:25:34.303-0400 I INDEX [conn38] validating collection local.replset.oplogTruncateAfterPoint (UUID: 6ffcae3b-70ce-43e9-bb8d-7c62c9daa35c)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.304-0400 c20022| 2019-07-25T18:25:34.304-0400 I INDEX [conn38] validating index _id_ on collection local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.306-0400 c20022| 2019-07-25T18:25:34.306-0400 I INDEX [conn38] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 6ffcae3b-70ce-43e9-bb8d-7c62c9daa35c). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.314-0400 c20022| 2019-07-25T18:25:34.313-0400 I COMMAND [conn38] CMD: validate local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.314-0400 c20022| 2019-07-25T18:25:34.314-0400 I INDEX [conn38] validating collection local.startup_log (UUID: 4d892c1c-600f-4312-98d1-33586fe9f3ec)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.316-0400 c20022| 2019-07-25T18:25:34.316-0400 I INDEX [conn38] validating index _id_ on collection local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.320-0400 c20022| 2019-07-25T18:25:34.320-0400 I INDEX [conn38] Validation complete for collection local.startup_log (UUID: 4d892c1c-600f-4312-98d1-33586fe9f3ec). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.328-0400 c20022| 2019-07-25T18:25:34.328-0400 I COMMAND [conn38] CMD: validate local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.329-0400 c20022| 2019-07-25T18:25:34.329-0400 I INDEX [conn38] validating collection local.system.replset (UUID: 3e5f7f9f-70c8-4ba0-8948-7ef1822d7896)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.330-0400 c20022| 2019-07-25T18:25:34.330-0400 I INDEX [conn38] validating index _id_ on collection local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.332-0400 c20022| 2019-07-25T18:25:34.332-0400 I INDEX [conn38] Validation complete for collection local.system.replset (UUID: 3e5f7f9f-70c8-4ba0-8948-7ef1822d7896). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.346-0400 c20022| 2019-07-25T18:25:34.346-0400 I COMMAND [conn38] CMD: validate local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.349-0400 c20022| 2019-07-25T18:25:34.349-0400 I INDEX [conn38] validating collection local.system.rollback.id (UUID: 43a8c4fd-c300-4cd6-bbb7-772dcc208de2)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.350-0400 c20022| 2019-07-25T18:25:34.350-0400 I INDEX [conn38] validating index _id_ on collection local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.352-0400 c20022| 2019-07-25T18:25:34.352-0400 I INDEX [conn38] Validation complete for collection local.system.rollback.id (UUID: 43a8c4fd-c300-4cd6-bbb7-772dcc208de2). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.355-0400 c20022| 2019-07-25T18:25:34.354-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated: 15), will terminate after current cmd ends
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.355-0400 c20022| 2019-07-25T18:25:34.355-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.355-0400 c20022| 2019-07-25T18:25:34.355-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20022.sock
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.355-0400 c20022| 2019-07-25T18:25:34.355-0400 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.355-0400 c20022| 2019-07-25T18:25:34.355-0400 I REPL [signalProcessingThread] shutting down replication subsystems
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.355-0400 c20022| 2019-07-25T18:25:34.355-0400 I REPL [signalProcessingThread] Stopping replication reporter thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.355-0400 c20022| 2019-07-25T18:25:34.355-0400 I REPL [signalProcessingThread] Stopping replication fetcher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.355-0400 c20022| 2019-07-25T18:25:34.355-0400 I REPL [signalProcessingThread] Stopping replication applier thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.356-0400 c20022| 2019-07-25T18:25:34.356-0400 I REPL [rsSync-0] Finished oplog application
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.591-0400 c20022| 2019-07-25T18:25:34.590-0400 I CONNPOOL [Replication] Ending connection to host Jasons-MacBook-Pro.local:20023 due to bad connection status: CallbackCanceled: Callback was canceled; 2 connections to that host remain open
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.591-0400 c20023| 2019-07-25T18:25:34.591-0400 I NETWORK [conn29] end connection 127.0.0.1:49534 (4 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.619-0400 c20022| 2019-07-25T18:25:34.619-0400 I CONNPOOL [replexec-5] dropping unhealthy pooled connection to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.619-0400 c20022| 2019-07-25T18:25:34.619-0400 I CONNPOOL [Replication] Connecting to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.624-0400 c20022| 2019-07-25T18:25:34.623-0400 I REPL_HB [replexec-2] Heartbeat to Jasons-MacBook-Pro.local:20021 failed after 2 retries, response status: HostUnreachable: Error connecting to Jasons-MacBook-Pro.local:20021 (127.0.0.1:20021) :: caused by :: Connection refused
[js_test:configsvr_failover_repro] 2019-07-25T18:25:34.624-0400 c20022| 2019-07-25T18:25:34.624-0400 I REPL [replexec-2] Member Jasons-MacBook-Pro.local:20021 is now in state RS_DOWN - Error connecting to Jasons-MacBook-Pro.local:20021 (127.0.0.1:20021) :: caused by :: Connection refused
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.129-0400 c20022| 2019-07-25T18:25:35.129-0400 I REPL_HB [replexec-3] Heartbeat to Jasons-MacBook-Pro.local:20021 failed after 2 retries, response status: HostUnreachable: Error connecting to Jasons-MacBook-Pro.local:20021 (127.0.0.1:20021) :: caused by :: Connection refused
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.349-0400 c20022| 2019-07-25T18:25:35.349-0400 I REPL [rsBackgroundSync] Stopping replication producer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.349-0400 c20022| 2019-07-25T18:25:35.349-0400 I REPL [signalProcessingThread] Stopping replication storage threads
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.350-0400 c20022| 2019-07-25T18:25:35.349-0400 I ASIO [RS] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.350-0400 c20022| 2019-07-25T18:25:35.350-0400 I ASIO [RS] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.350-0400 c20022| 2019-07-25T18:25:35.350-0400 I CONNPOOL [RS] Dropping all pooled connections to Jasons-MacBook-Pro.local:20021 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.352-0400 c20022| 2019-07-25T18:25:35.352-0400 I ASIO [Replication] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.352-0400 c20022| 2019-07-25T18:25:35.352-0400 I CONNPOOL [Replication] Dropping all pooled connections to Jasons-MacBook-Pro.local:20023 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.353-0400 c20023| 2019-07-25T18:25:35.353-0400 I NETWORK [conn9] end connection 127.0.0.1:49480 (3 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.353-0400 c20023| 2019-07-25T18:25:35.353-0400 I NETWORK [conn30] end connection 127.0.0.1:49535 (2 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.353-0400 c20022| 2019-07-25T18:25:35.353-0400 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.353-0400 c20022| 2019-07-25T18:25:35.353-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Dropping all pooled connections to Jasons-MacBook-Pro.local:20020 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.354-0400 c20022| 2019-07-25T18:25:35.353-0400 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.354-0400 c20022| 2019-07-25T18:25:35.354-0400 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.354-0400 c20022| 2019-07-25T18:25:35.354-0400 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.357-0400 c20022| 2019-07-25T18:25:35.357-0400 I STORAGE [signalProcessingThread] Deregistering all the collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.358-0400 c20022| 2019-07-25T18:25:35.358-0400 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.358-0400 c20022| 2019-07-25T18:25:35.358-0400 W QUERY [conn31] GetMore command executor error: FAILURE, status: InterruptedAtShutdown: interrupted at shutdown, stats: { stage: "COLLSCAN", nReturned: 46, executionTimeMillisEstimate: 17, works: 494, advanced: 46, needTime: 224, needYield: 0, saveState: 224, restoreState: 223, isEOF: 0, direction: "forward", minTs: Timestamp(1564093514, 6), docsExamined: 46 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.358-0400 c20022| 2019-07-25T18:25:35.358-0400 W QUERY [conn35] GetMore command executor error: FAILURE, status: InterruptedAtShutdown: interrupted at shutdown, stats: { stage: "COLLSCAN", nReturned: 46, executionTimeMillisEstimate: 13, works: 324, advanced: 46, needTime: 139, needYield: 0, saveState: 139, restoreState: 138, isEOF: 0, direction: "forward", minTs: Timestamp(1564093514, 6), docsExamined: 46 }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.358-0400 c20022| 2019-07-25T18:25:35.358-0400 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.359-0400 c20022| 2019-07-25T18:25:35.358-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.359-0400 c20022| 2019-07-25T18:25:35.359-0400 I NETWORK [conn31] end connection 127.0.0.1:49536 (6 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.359-0400 c20023| 2019-07-25T18:25:35.359-0400 I REPL [replication-0] Restarting oplog query due to error: InterruptedAtShutdown: error in fetcher batch callback :: caused by :: interrupted at shutdown. Last fetched optime: { ts: Timestamp(1564093530, 14), t: 2 }. Restarts remaining: 1
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.360-0400 c20023| 2019-07-25T18:25:35.360-0400 I REPL [replication-0] Scheduled new oplog query Fetcher source: Jasons-MacBook-Pro.local:20022 database: local query: { find: "oplog.rs", filter: { ts: { $gte: Timestamp(1564093530, 14) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 2, readConcern: { afterClusterTime: Timestamp(1564093530, 14) } } query metadata: { $replData: 1, $oplogQueryData: 1, $readPreference: { mode: "secondaryPreferred" } } active: 1 findNetworkTimeout: 7000ms getMoreNetworkTimeout: 10000ms shutting down?: 0 first: 1 firstCommandScheduler: RemoteCommandRetryScheduler request: RemoteCommand 332 -- target:Jasons-MacBook-Pro.local:20022 db:local cmd:{ find: "oplog.rs", filter: { ts: { $gte: Timestamp(1564093530, 14) } }, tailable: true, oplogReplay: true, awaitData: true, maxTimeMS: 2000, batchSize: 13981010, term: 2, readConcern: { afterClusterTime: Timestamp(1564093530, 14) } } active: 1 callbackHandle.valid: 1 callbackHandle.cancelled: 0 attempt: 1 retryPolicy: RetryPolicyImpl maxAttempts: 1 maxTimeMillis: -1ms
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.361-0400 c20023| 2019-07-25T18:25:35.361-0400 I REPL [replication-1] Error returned from oplog query (no more query restarts left): InterruptedAtShutdown: error in fetcher batch callback :: caused by :: interrupted at shutdown
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.361-0400 c20023| 2019-07-25T18:25:35.361-0400 W REPL [rsBackgroundSync] Fetcher stopped querying remote oplog with error: InterruptedAtShutdown: error in fetcher batch callback :: caused by :: interrupted at shutdown
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.361-0400 c20023| 2019-07-25T18:25:35.361-0400 I REPL [rsBackgroundSync] Clearing sync source Jasons-MacBook-Pro.local:20022 to choose a new one.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.361-0400 c20023| 2019-07-25T18:25:35.361-0400 I REPL [rsBackgroundSync] could not find member to sync from
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.362-0400 c20023| 2019-07-25T18:25:35.361-0400 I CONNPOOL [replexec-4] dropping unhealthy pooled connection to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.362-0400 c20023| 2019-07-25T18:25:35.362-0400 I CONNPOOL [Replication] Connecting to Jasons-MacBook-Pro.local:20021
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.366-0400 c20023| 2019-07-25T18:25:35.365-0400 I REPL_HB [replexec-4] Heartbeat to Jasons-MacBook-Pro.local:20022 failed after 2 retries, response status: InterruptedAtShutdown: interrupted at shutdown
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.366-0400 c20023| 2019-07-25T18:25:35.366-0400 I REPL [replexec-4] Member Jasons-MacBook-Pro.local:20022 is now in state RS_DOWN - interrupted at shutdown
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.366-0400 c20023| 2019-07-25T18:25:35.366-0400 I REPL_HB [replexec-1] Heartbeat to Jasons-MacBook-Pro.local:20021 failed after 2 retries, response status: HostUnreachable: Error connecting to Jasons-MacBook-Pro.local:20021 (127.0.0.1:20021) :: caused by :: Connection refused
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.383-0400 c20022| 2019-07-25T18:25:35.383-0400 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.383-0400 c20022| 2019-07-25T18:25:35.383-0400 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.383-0400 c20022| 2019-07-25T18:25:35.383-0400 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.458-0400 c20022| 2019-07-25T18:25:35.457-0400 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.458-0400 c20022| 2019-07-25T18:25:35.457-0400 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.458-0400 c20022| 2019-07-25T18:25:35.458-0400 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.618-0400 c20022| 2019-07-25T18:25:35.618-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.622-0400 c20022| 2019-07-25T18:25:35.621-0400 I CONTROL [signalProcessingThread] now exiting
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.622-0400 c20022| 2019-07-25T18:25:35.621-0400 I CONTROL [signalProcessingThread] shutting down with code:0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.633-0400 2019-07-25T18:25:35.633-0400 I - [js] shell: stopped mongo program on port 20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.634-0400 ReplSetTest stop *** Mongod in port 20022 shutdown with code (0) ***
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.634-0400 ReplSetTest stop *** Shutting down mongod in port 20023 ***
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.637-0400 c20023| 2019-07-25T18:25:35.636-0400 I NETWORK [listener] connection accepted from 127.0.0.1:49557 #33 (3 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.637-0400 c20023| 2019-07-25T18:25:35.637-0400 I NETWORK [conn33] received client metadata from 127.0.0.1:49557 conn33: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.3.0-703-g917d338" }, os: { type: "Darwin", name: "Mac OS X", architecture: "x86_64", version: "18.6.0" } }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.646-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.646-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.646-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.646-0400 [jsTest] New session started with sessionID: { "id" : UUID("9301e8fc-5e79-43de-91e1-25ebb4769863") } and options: { "causalConsistency" : false }
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.646-0400 [jsTest] ----
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.646-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.647-0400
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.668-0400 c20023| 2019-07-25T18:25:35.668-0400 I COMMAND [conn33] Attempting to step down in response to replSetStepDown command
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.677-0400 c20023| 2019-07-25T18:25:35.677-0400 I REPL [conn33] 'freezing' for 86400 seconds
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.761-0400 c20023| 2019-07-25T18:25:35.761-0400 I COMMAND [conn33] CMD: validate admin.system.keys
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.762-0400 c20023| 2019-07-25T18:25:35.761-0400 I INDEX [conn33] validating collection admin.system.keys (UUID: 7d5bfd11-2f9e-43fa-b296-05f895e4aea7)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.763-0400 c20023| 2019-07-25T18:25:35.763-0400 I INDEX [conn33] validating index _id_ on collection admin.system.keys
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.764-0400 c20023| 2019-07-25T18:25:35.764-0400 I INDEX [conn33] Validation complete for collection admin.system.keys (UUID: 7d5bfd11-2f9e-43fa-b296-05f895e4aea7). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.772-0400 c20023| 2019-07-25T18:25:35.772-0400 I COMMAND [conn33] CMD: validate admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.773-0400 c20023| 2019-07-25T18:25:35.772-0400 I INDEX [conn33] validating collection admin.system.version (UUID: 9eb89103-fb3d-4038-bb54-c402876ca16e)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.774-0400 c20023| 2019-07-25T18:25:35.773-0400 I INDEX [conn33] validating index _id_ on collection admin.system.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.775-0400 c20023| 2019-07-25T18:25:35.775-0400 I INDEX [conn33] Validation complete for collection admin.system.version (UUID: 9eb89103-fb3d-4038-bb54-c402876ca16e). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.810-0400 c20023| 2019-07-25T18:25:35.810-0400 I COMMAND [conn33] CMD: validate config.actionlog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.811-0400 c20023| 2019-07-25T18:25:35.811-0400 I INDEX [conn33] validating collection config.actionlog (UUID: bb55f986-13a3-489d-bc35-a22a32b44c10)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.811-0400 c20023| 2019-07-25T18:25:35.811-0400 W STORAGE [conn33] Could not complete validation of table:collection-85-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.811-0400 c20023| 2019-07-25T18:25:35.811-0400 I INDEX [conn33] validating index _id_ on collection config.actionlog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.811-0400 c20023| 2019-07-25T18:25:35.811-0400 W STORAGE [conn33] Could not complete validation of table:index-86-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.812-0400 c20023| 2019-07-25T18:25:35.812-0400 I INDEX [conn33] Validation complete for collection config.actionlog (UUID: bb55f986-13a3-489d-bc35-a22a32b44c10). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.820-0400 c20023| 2019-07-25T18:25:35.820-0400 I COMMAND [conn33] CMD: validate config.changelog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.821-0400 c20023| 2019-07-25T18:25:35.821-0400 I INDEX [conn33] validating collection config.changelog (UUID: b00cc6c9-f585-4cd8-9cf1-362a83e2e9df)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.821-0400 c20023| 2019-07-25T18:25:35.821-0400 W STORAGE [conn33] Could not complete validation of table:collection-87-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.821-0400 c20023| 2019-07-25T18:25:35.821-0400 I INDEX [conn33] validating index _id_ on collection config.changelog
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.821-0400 c20023| 2019-07-25T18:25:35.821-0400 W STORAGE [conn33] Could not complete validation of table:index-88-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.822-0400 c20023| 2019-07-25T18:25:35.822-0400 I INDEX [conn33] Validation complete for collection config.changelog (UUID: b00cc6c9-f585-4cd8-9cf1-362a83e2e9df). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.830-0400 c20023| 2019-07-25T18:25:35.830-0400 I COMMAND [conn33] CMD: validate config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.831-0400 c20023| 2019-07-25T18:25:35.831-0400 I INDEX [conn33] validating collection config.chunks (UUID: 63c02d1c-5493-42cd-9595-17fe7298418c)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.831-0400 c20023| 2019-07-25T18:25:35.831-0400 W STORAGE [conn33] Could not complete validation of table:collection-29-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.831-0400 c20023| 2019-07-25T18:25:35.831-0400 I INDEX [conn33] validating index ns_1_min_1 on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.831-0400 c20023| 2019-07-25T18:25:35.831-0400 W STORAGE [conn33] Could not complete validation of table:index-30-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.831-0400 c20023| 2019-07-25T18:25:35.831-0400 I INDEX [conn33] validating index ns_1_shard_1_min_1 on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.832-0400 c20023| 2019-07-25T18:25:35.831-0400 W STORAGE [conn33] Could not complete validation of table:index-33-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.832-0400 c20023| 2019-07-25T18:25:35.832-0400 I INDEX [conn33] validating index ns_1_lastmod_1 on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.832-0400 c20023| 2019-07-25T18:25:35.832-0400 W STORAGE [conn33] Could not complete validation of table:index-36-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.832-0400 c20023| 2019-07-25T18:25:35.832-0400 I INDEX [conn33] validating index _id_ on collection config.chunks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.832-0400 c20023| 2019-07-25T18:25:35.832-0400 W STORAGE [conn33] Could not complete validation of table:index-39-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.833-0400 c20023| 2019-07-25T18:25:35.833-0400 I INDEX [conn33] Validation complete for collection config.chunks (UUID: 63c02d1c-5493-42cd-9595-17fe7298418c). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.841-0400 c20023| 2019-07-25T18:25:35.841-0400 I COMMAND [conn33] CMD: validate config.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.842-0400 c20023| 2019-07-25T18:25:35.842-0400 I INDEX [conn33] validating collection config.collections (UUID: c91bd94c-858a-4b52-a9a4-ed241d46bb6b)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.842-0400 c20023| 2019-07-25T18:25:35.842-0400 W STORAGE [conn33] Could not complete validation of table:collection-89-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.842-0400 c20023| 2019-07-25T18:25:35.842-0400 I INDEX [conn33] validating index _id_ on collection config.collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.842-0400 c20023| 2019-07-25T18:25:35.842-0400 W STORAGE [conn33] Could not complete validation of table:index-90-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.843-0400 c20023| 2019-07-25T18:25:35.843-0400 I INDEX [conn33] Validation complete for collection config.collections (UUID: c91bd94c-858a-4b52-a9a4-ed241d46bb6b). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.852-0400 c20023| 2019-07-25T18:25:35.852-0400 I COMMAND [conn33] CMD: validate config.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.854-0400 c20023| 2019-07-25T18:25:35.854-0400 I INDEX [conn33] validating collection config.databases (UUID: 01649270-e43f-438a-ad71-36bd6eeffe6b)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.854-0400 c20023| 2019-07-25T18:25:35.854-0400 W STORAGE [conn33] Could not complete validation of table:collection-91-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.854-0400 c20023| 2019-07-25T18:25:35.854-0400 I INDEX [conn33] validating index _id_ on collection config.databases
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.855-0400 c20023| 2019-07-25T18:25:35.854-0400 W STORAGE [conn33] Could not complete validation of table:index-92-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.856-0400 c20023| 2019-07-25T18:25:35.856-0400 I INDEX [conn33] Validation complete for collection config.databases (UUID: 01649270-e43f-438a-ad71-36bd6eeffe6b). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.863-0400 c20023| 2019-07-25T18:25:35.863-0400 I COMMAND [conn33] CMD: validate config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.864-0400 c20023| 2019-07-25T18:25:35.864-0400 I INDEX [conn33] validating collection config.lockpings (UUID: dd0672e8-19c6-432b-9b6a-d21b02c0bf6e)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.864-0400 c20023| 2019-07-25T18:25:35.864-0400 W STORAGE [conn33] Could not complete validation of table:collection-56-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.865-0400 c20023| 2019-07-25T18:25:35.864-0400 I INDEX [conn33] validating index ping_1 on collection config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.865-0400 c20023| 2019-07-25T18:25:35.865-0400 W STORAGE [conn33] Could not complete validation of table:index-57-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.865-0400 c20023| 2019-07-25T18:25:35.865-0400 I INDEX [conn33] validating index _id_ on collection config.lockpings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.865-0400 c20023| 2019-07-25T18:25:35.865-0400 W STORAGE [conn33] Could not complete validation of table:index-59-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.866-0400 c20023| 2019-07-25T18:25:35.866-0400 I INDEX [conn33] Validation complete for collection config.lockpings (UUID: dd0672e8-19c6-432b-9b6a-d21b02c0bf6e). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.869-0400 c20023| 2019-07-25T18:25:35.869-0400 I CONNPOOL [replexec-2] dropping unhealthy pooled connection to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.869-0400 c20023| 2019-07-25T18:25:35.869-0400 I CONNPOOL [Replication] Connecting to Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.874-0400 c20023| 2019-07-25T18:25:35.874-0400 I REPL_HB [replexec-2] Heartbeat to Jasons-MacBook-Pro.local:20022 failed after 2 retries, response status: HostUnreachable: Error connecting to Jasons-MacBook-Pro.local:20022 (127.0.0.1:20022) :: caused by :: Connection refused
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.874-0400 c20023| 2019-07-25T18:25:35.874-0400 I REPL_HB [replexec-3] Heartbeat to Jasons-MacBook-Pro.local:20021 failed after 2 retries, response status: HostUnreachable: Error connecting to Jasons-MacBook-Pro.local:20021 (127.0.0.1:20021) :: caused by :: Connection refused
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.878-0400 c20023| 2019-07-25T18:25:35.877-0400 I COMMAND [conn33] CMD: validate config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.880-0400 c20023| 2019-07-25T18:25:35.879-0400 I INDEX [conn33] validating collection config.locks (UUID: dd929b42-c13c-4682-8066-ef80c2666228)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.880-0400 c20023| 2019-07-25T18:25:35.880-0400 W STORAGE [conn33] Could not complete validation of table:collection-62-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.881-0400 c20023| 2019-07-25T18:25:35.881-0400 I INDEX [conn33] validating index ts_1 on collection config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.881-0400 c20023| 2019-07-25T18:25:35.881-0400 W STORAGE [conn33] Could not complete validation of table:index-63-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.881-0400 c20023| 2019-07-25T18:25:35.881-0400 I INDEX [conn33] validating index state_1_process_1 on collection config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.881-0400 c20023| 2019-07-25T18:25:35.881-0400 W STORAGE [conn33] Could not complete validation of table:index-65-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.882-0400 c20023| 2019-07-25T18:25:35.881-0400 I INDEX [conn33] validating index _id_ on collection config.locks
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.882-0400 c20023| 2019-07-25T18:25:35.882-0400 W STORAGE [conn33] Could not complete validation of table:index-67-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.883-0400 c20023| 2019-07-25T18:25:35.883-0400 I INDEX [conn33] Validation complete for collection config.locks (UUID: dd929b42-c13c-4682-8066-ef80c2666228). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.891-0400 c20023| 2019-07-25T18:25:35.891-0400 I COMMAND [conn33] CMD: validate config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.892-0400 c20023| 2019-07-25T18:25:35.892-0400 I INDEX [conn33] validating collection config.migrations (UUID: 91fc80cd-1974-4835-96e0-c0c276b056ee)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.903-0400 c20023| 2019-07-25T18:25:35.903-0400 I INDEX [conn33] validating index ns_1_min_1 on collection config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.905-0400 c20023| 2019-07-25T18:25:35.905-0400 I INDEX [conn33] validating index _id_ on collection config.migrations
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.905-0400 c20023| 2019-07-25T18:25:35.905-0400 W STORAGE [conn33] Could not complete validation of table:index-46-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.906-0400 c20023| 2019-07-25T18:25:35.906-0400 I INDEX [conn33] Validation complete for collection config.migrations (UUID: 91fc80cd-1974-4835-96e0-c0c276b056ee). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.915-0400 c20023| 2019-07-25T18:25:35.915-0400 I COMMAND [conn33] CMD: validate config.mongos
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.915-0400 c20023| 2019-07-25T18:25:35.915-0400 I INDEX [conn33] validating collection config.mongos (UUID: 81e54234-9908-454a-817f-30651e5cf0b6)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.916-0400 c20023| 2019-07-25T18:25:35.915-0400 W STORAGE [conn33] Could not complete validation of table:collection-93-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.916-0400 c20023| 2019-07-25T18:25:35.916-0400 I INDEX [conn33] validating index _id_ on collection config.mongos
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.916-0400 c20023| 2019-07-25T18:25:35.916-0400 W STORAGE [conn33] Could not complete validation of table:index-94-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.917-0400 c20023| 2019-07-25T18:25:35.917-0400 I INDEX [conn33] Validation complete for collection config.mongos (UUID: 81e54234-9908-454a-817f-30651e5cf0b6). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.925-0400 c20023| 2019-07-25T18:25:35.925-0400 I COMMAND [conn33] CMD: validate config.settings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.927-0400 c20023| 2019-07-25T18:25:35.927-0400 I INDEX [conn33] validating collection config.settings (UUID: 5b34234e-9f2a-4dc0-a6ec-4c3c2c8d8c4a)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.927-0400 c20023| 2019-07-25T18:25:35.927-0400 W STORAGE [conn33] Could not complete validation of table:collection-83-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.927-0400 c20023| 2019-07-25T18:25:35.927-0400 I INDEX [conn33] validating index _id_ on collection config.settings
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.928-0400 c20023| 2019-07-25T18:25:35.928-0400 W STORAGE [conn33] Could not complete validation of table:index-84-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.929-0400 c20023| 2019-07-25T18:25:35.929-0400 I INDEX [conn33] Validation complete for collection config.settings (UUID: 5b34234e-9f2a-4dc0-a6ec-4c3c2c8d8c4a). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.937-0400 c20023| 2019-07-25T18:25:35.937-0400 I COMMAND [conn33] CMD: validate config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.947-0400 c20023| 2019-07-25T18:25:35.947-0400 I INDEX [conn33] validating collection config.shards (UUID: 9dc58f2f-04de-441a-b6d7-36d58adac3fa)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.948-0400 c20023| 2019-07-25T18:25:35.947-0400 W STORAGE [conn33] Could not complete validation of table:collection-49-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.948-0400 c20023| 2019-07-25T18:25:35.948-0400 I INDEX [conn33] validating index host_1 on collection config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.948-0400 c20023| 2019-07-25T18:25:35.948-0400 W STORAGE [conn33] Could not complete validation of table:index-50-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.948-0400 c20023| 2019-07-25T18:25:35.948-0400 I INDEX [conn33] validating index _id_ on collection config.shards
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.949-0400 c20023| 2019-07-25T18:25:35.948-0400 W STORAGE [conn33] Could not complete validation of table:index-53-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.950-0400 c20023| 2019-07-25T18:25:35.950-0400 I INDEX [conn33] Validation complete for collection config.shards (UUID: 9dc58f2f-04de-441a-b6d7-36d58adac3fa). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.961-0400 c20023| 2019-07-25T18:25:35.960-0400 I COMMAND [conn33] CMD: validate config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.962-0400 c20023| 2019-07-25T18:25:35.961-0400 I INDEX [conn33] validating collection config.tags (UUID: f1867b25-f9fb-445f-8bca-c3b4a21b38ee)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.962-0400 c20023| 2019-07-25T18:25:35.962-0400 I INDEX [conn33] validating index ns_1_min_1 on collection config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.963-0400 c20023| 2019-07-25T18:25:35.962-0400 W STORAGE [conn33] Could not complete validation of table:index-75-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.963-0400 c20023| 2019-07-25T18:25:35.963-0400 I INDEX [conn33] validating index ns_1_tag_1 on collection config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.963-0400 c20023| 2019-07-25T18:25:35.963-0400 W STORAGE [conn33] Could not complete validation of table:index-78-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.963-0400 c20023| 2019-07-25T18:25:35.963-0400 I INDEX [conn33] validating index _id_ on collection config.tags
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.963-0400 c20023| 2019-07-25T18:25:35.963-0400 W STORAGE [conn33] Could not complete validation of table:index-80-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.964-0400 c20023| 2019-07-25T18:25:35.964-0400 I INDEX [conn33] Validation complete for collection config.tags (UUID: f1867b25-f9fb-445f-8bca-c3b4a21b38ee). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.972-0400 c20023| 2019-07-25T18:25:35.972-0400 I COMMAND [conn33] CMD: validate config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.973-0400 c20023| 2019-07-25T18:25:35.973-0400 I INDEX [conn33] validating collection config.transactions (UUID: 2ff387c1-0957-46b7-b825-992aba2ed063)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.973-0400 c20023| 2019-07-25T18:25:35.973-0400 W STORAGE [conn33] Could not complete validation of table:collection-25-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.974-0400 c20023| 2019-07-25T18:25:35.973-0400 I INDEX [conn33] validating index _id_ on collection config.transactions
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.974-0400 c20023| 2019-07-25T18:25:35.974-0400 W STORAGE [conn33] Could not complete validation of table:index-26-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.976-0400 c20023| 2019-07-25T18:25:35.975-0400 I INDEX [conn33] Validation complete for collection config.transactions (UUID: 2ff387c1-0957-46b7-b825-992aba2ed063). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.978-0400 c20023| 2019-07-25T18:25:35.978-0400 I REPL [SyncSourceFeedback] SyncSourceFeedback error sending update to Jasons-MacBook-Pro.local:20022: InvalidSyncSource: Sync source was cleared. Was Jasons-MacBook-Pro.local:20022
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.985-0400 c20023| 2019-07-25T18:25:35.985-0400 I COMMAND [conn33] CMD: validate config.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.986-0400 c20023| 2019-07-25T18:25:35.986-0400 I INDEX [conn33] validating collection config.version (UUID: e2da88e1-afec-4a2a-9c9c-0b4b51073f63)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.990-0400 c20023| 2019-07-25T18:25:35.990-0400 I INDEX [conn33] validating index _id_ on collection config.version
[js_test:configsvr_failover_repro] 2019-07-25T18:25:35.993-0400 c20023| 2019-07-25T18:25:35.992-0400 I INDEX [conn33] Validation complete for collection config.version (UUID: e2da88e1-afec-4a2a-9c9c-0b4b51073f63). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.028-0400 c20023| 2019-07-25T18:25:36.028-0400 I COMMAND [conn33] CMD: validate local.oplog.rs
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.029-0400 c20023| 2019-07-25T18:25:36.029-0400 I INDEX [conn33] validating collection local.oplog.rs (UUID: ef443948-cafd-4e05-ab9a-2309d9565d71)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.029-0400 c20023| 2019-07-25T18:25:36.029-0400 W STORAGE [conn33] Could not complete validation of table:collection-16-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.031-0400 c20023| 2019-07-25T18:25:36.031-0400 I INDEX [conn33] Validation complete for collection local.oplog.rs (UUID: ef443948-cafd-4e05-ab9a-2309d9565d71). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.039-0400 c20023| 2019-07-25T18:25:36.039-0400 I COMMAND [conn33] CMD: validate local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.040-0400 c20023| 2019-07-25T18:25:36.040-0400 I INDEX [conn33] validating collection local.replset.election (UUID: 33c7a131-0d93-4e1c-94c1-0ae885978575)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.071-0400 c20023| 2019-07-25T18:25:36.070-0400 I INDEX [conn33] validating index _id_ on collection local.replset.election
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.075-0400 c20023| 2019-07-25T18:25:36.075-0400 I INDEX [conn33] Validation complete for collection local.replset.election (UUID: 33c7a131-0d93-4e1c-94c1-0ae885978575). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.083-0400 c20023| 2019-07-25T18:25:36.083-0400 I COMMAND [conn33] CMD: validate local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.084-0400 c20023| 2019-07-25T18:25:36.084-0400 I INDEX [conn33] validating collection local.replset.minvalid (UUID: 5c18f3d3-fafa-400b-93c0-2cebd245146d)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.084-0400 c20023| 2019-07-25T18:25:36.084-0400 W STORAGE [conn33] Could not complete validation of table:collection-4-6844513311527823291. This is a transient issue as the collection was actively in use by other operations.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.084-0400 c20023| 2019-07-25T18:25:36.084-0400 I INDEX [conn33] validating index _id_ on collection local.replset.minvalid
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.087-0400 c20023| 2019-07-25T18:25:36.086-0400 I INDEX [conn33] Validation complete for collection local.replset.minvalid (UUID: 5c18f3d3-fafa-400b-93c0-2cebd245146d). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.099-0400 c20023| 2019-07-25T18:25:36.099-0400 I COMMAND [conn33] CMD: validate local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.100-0400 c20023| 2019-07-25T18:25:36.100-0400 I INDEX [conn33] validating collection local.replset.oplogTruncateAfterPoint (UUID: 80f29b0c-4a5a-4fdb-871a-4f092019c759)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.124-0400 c20023| 2019-07-25T18:25:36.124-0400 I INDEX [conn33] validating index _id_ on collection local.replset.oplogTruncateAfterPoint
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.127-0400 c20023| 2019-07-25T18:25:36.127-0400 I INDEX [conn33] Validation complete for collection local.replset.oplogTruncateAfterPoint (UUID: 80f29b0c-4a5a-4fdb-871a-4f092019c759). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.137-0400 c20023| 2019-07-25T18:25:36.137-0400 I COMMAND [conn33] CMD: validate local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.138-0400 c20023| 2019-07-25T18:25:36.138-0400 I INDEX [conn33] validating collection local.startup_log (UUID: 785c5cbb-6c24-4c5f-8954-dab243d20759)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.141-0400 c20023| 2019-07-25T18:25:36.141-0400 I INDEX [conn33] validating index _id_ on collection local.startup_log
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.144-0400 c20023| 2019-07-25T18:25:36.143-0400 I INDEX [conn33] Validation complete for collection local.startup_log (UUID: 785c5cbb-6c24-4c5f-8954-dab243d20759). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.152-0400 c20023| 2019-07-25T18:25:36.152-0400 I COMMAND [conn33] CMD: validate local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.152-0400 c20023| 2019-07-25T18:25:36.152-0400 I INDEX [conn33] validating collection local.system.replset (UUID: f4eee805-b714-4aa7-bd0d-b8813350f86d)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.154-0400 c20023| 2019-07-25T18:25:36.154-0400 I INDEX [conn33] validating index _id_ on collection local.system.replset
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.156-0400 c20023| 2019-07-25T18:25:36.156-0400 I INDEX [conn33] Validation complete for collection local.system.replset (UUID: f4eee805-b714-4aa7-bd0d-b8813350f86d). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.167-0400 c20023| 2019-07-25T18:25:36.167-0400 I COMMAND [conn33] CMD: validate local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.168-0400 c20023| 2019-07-25T18:25:36.168-0400 I INDEX [conn33] validating collection local.system.rollback.id (UUID: 2ca40ffd-cdea-4e90-a3d1-9170709b257f)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.169-0400 c20023| 2019-07-25T18:25:36.169-0400 I INDEX [conn33] validating index _id_ on collection local.system.rollback.id
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.171-0400 c20023| 2019-07-25T18:25:36.171-0400 I INDEX [conn33] Validation complete for collection local.system.rollback.id (UUID: 2ca40ffd-cdea-4e90-a3d1-9170709b257f). No corruption found.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.174-0400 c20023| 2019-07-25T18:25:36.173-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated: 15), will terminate after current cmd ends
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.174-0400 c20023| 2019-07-25T18:25:36.174-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.174-0400 c20023| 2019-07-25T18:25:36.174-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20023.sock
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.174-0400 c20023| 2019-07-25T18:25:36.174-0400 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.174-0400 c20023| 2019-07-25T18:25:36.174-0400 I REPL [signalProcessingThread] shutting down replication subsystems
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.174-0400 c20023| 2019-07-25T18:25:36.174-0400 I REPL [signalProcessingThread] Stopping replication reporter thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.175-0400 c20023| 2019-07-25T18:25:36.174-0400 I REPL [signalProcessingThread] Stopping replication fetcher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.175-0400 c20023| 2019-07-25T18:25:36.174-0400 I REPL [signalProcessingThread] Stopping replication applier thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.175-0400 c20023| 2019-07-25T18:25:36.175-0400 I REPL [rsSync-0] Finished oplog application
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.362-0400 c20023| 2019-07-25T18:25:36.362-0400 I REPL [rsBackgroundSync] Stopping replication producer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.362-0400 c20023| 2019-07-25T18:25:36.362-0400 I REPL [signalProcessingThread] Stopping replication storage threads
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.363-0400 c20023| 2019-07-25T18:25:36.362-0400 I ASIO [RS] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.363-0400 c20023| 2019-07-25T18:25:36.363-0400 I ASIO [RS] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.363-0400 c20023| 2019-07-25T18:25:36.363-0400 I CONNPOOL [RS] Dropping all pooled connections to Jasons-MacBook-Pro.local:20022 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.363-0400 c20023| 2019-07-25T18:25:36.363-0400 I CONNPOOL [RS] Dropping all pooled connections to Jasons-MacBook-Pro.local:20021 due to ShutdownInProgress: Shutting down the connection pool
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.366-0400 c20023| 2019-07-25T18:25:36.366-0400 I ASIO [Replication] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.367-0400 c20023| 2019-07-25T18:25:36.367-0400 I ASIO [ReplicaSetMonitor-TaskExecutor] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.367-0400 c20023| 2019-07-25T18:25:36.367-0400 W SHARDING [shard-registry-reload] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.367-0400 c20023| 2019-07-25T18:25:36.367-0400 I ASIO [shard-registry-reload] Killing all outstanding egress activity.
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.367-0400 c20023| 2019-07-25T18:25:36.367-0400 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.371-0400 c20023| 2019-07-25T18:25:36.370-0400 I STORAGE [signalProcessingThread] Deregistering all the collections
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.371-0400 c20023| 2019-07-25T18:25:36.371-0400 I STORAGE [WTOplogJournalThread] Oplog journal thread loop shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.371-0400 c20023| 2019-07-25T18:25:36.371-0400 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.371-0400 c20023| 2019-07-25T18:25:36.371-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.387-0400 c20023| 2019-07-25T18:25:36.386-0400 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.387-0400 c20023| 2019-07-25T18:25:36.387-0400 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.387-0400 c20023| 2019-07-25T18:25:36.387-0400 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.429-0400 c20023| 2019-07-25T18:25:36.429-0400 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.429-0400 c20023| 2019-07-25T18:25:36.429-0400 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.429-0400 c20023| 2019-07-25T18:25:36.429-0400 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.984-0400 c20023| 2019-07-25T18:25:36.983-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.987-0400 c20023| 2019-07-25T18:25:36.987-0400 I CONTROL [signalProcessingThread] now exiting
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.987-0400 c20023| 2019-07-25T18:25:36.987-0400 I CONTROL [signalProcessingThread] shutting down with code:0
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.999-0400 2019-07-25T18:25:36.999-0400 I - [js] shell: stopped mongo program on port 20023
[js_test:configsvr_failover_repro] 2019-07-25T18:25:36.999-0400 ReplSetTest stop *** Mongod in port 20023 shutdown with code (0) ***
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.000-0400 ReplSetTest stopSet deleting all dbpaths
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.024-0400 2019-07-25T18:25:37.024-0400 I NETWORK [js] Removed ReplicaSetMonitor for replica set configsvr_failover_repro-configRS
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.024-0400 ReplSetTest stopSet *** Shut down repl set - test worked ****
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.024-0400 ShardingTest stop deleting all dbpaths
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.024-0400 *** ShardingTest configsvr_failover_repro completed successfully in 48.043 seconds ***
[MongoDFixture:job0] 2019-07-25T18:25:37.030-0400 I NETWORK [conn3] end connection 127.0.0.1:49459 (0 connections now open)
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.031-0400 2019-07-25T18:25:37.030-0400 I NETWORK [js] DBClientConnection failed to receive message from 127.0.0.1:20020 - HostUnreachable: Connection closed by peer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.032-0400 2019-07-25T18:25:37.031-0400 I QUERY [js] Failed to end session { id: UUID("5c6b2b51-4daa-4b01-98d8-ec4db4b4ba86") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host '127.0.0.1:20020'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.032-0400 2019-07-25T18:25:37.032-0400 I NETWORK [js] DBClientConnection failed to receive message from 127.0.0.1:20021 - HostUnreachable: Connection closed by peer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.032-0400 2019-07-25T18:25:37.032-0400 I QUERY [js] Failed to end session { id: UUID("9a454f36-648a-450a-9eb9-b85dea2fbf25") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host '127.0.0.1:20021'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.033-0400 2019-07-25T18:25:37.032-0400 I NETWORK [js] DBClientConnection failed to receive message from 127.0.0.1:20022 - HostUnreachable: Connection closed by peer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.033-0400 2019-07-25T18:25:37.032-0400 I QUERY [js] Failed to end session { id: UUID("44321189-0df7-4024-bb83-362f10fea9c6") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host '127.0.0.1:20022'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.033-0400 2019-07-25T18:25:37.033-0400 I NETWORK [js] DBClientConnection failed to receive message from 127.0.0.1:20023 - HostUnreachable: Connection reset by peer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.033-0400 2019-07-25T18:25:37.033-0400 I QUERY [js] Failed to end session { id: UUID("b5665848-5267-4c4e-b6b4-e996ea0e3245") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host '127.0.0.1:20023'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.033-0400 2019-07-25T18:25:37.033-0400 I NETWORK [js] DBClientConnection failed to receive message from localhost:20024 - HostUnreachable: Connection reset by peer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.033-0400 2019-07-25T18:25:37.033-0400 I QUERY [js] Failed to end session { id: UUID("5812bf1c-c7d6-4740-bac2-1299dde24fd5") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host 'localhost:20024'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.033-0400 2019-07-25T18:25:37.033-0400 I QUERY [js] Failed to end session { id: UUID("6a42a806-c274-46d1-b124-fa04d86e12f7") } due to ReplicaSetMonitorRemoved: ReplicaSetMonitor for set configsvr_failover_repro-rs0 is removed
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.034-0400 2019-07-25T18:25:37.034-0400 I NETWORK [js] DBClientConnection failed to receive message from localhost:20020 - HostUnreachable: Connection closed by peer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.034-0400 2019-07-25T18:25:37.034-0400 I QUERY [js] Failed to end session { id: UUID("cd3465f7-031b-41c3-9200-3661ee566747") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host 'localhost:20020'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.034-0400 2019-07-25T18:25:37.034-0400 I NETWORK [js] DBClientConnection failed to receive message from 127.0.0.1:20024 - HostUnreachable: Connection closed by peer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.034-0400 2019-07-25T18:25:37.034-0400 I QUERY [js] Failed to end session { id: UUID("806e05d3-7000-445f-820e-060646a14c47") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host '127.0.0.1:20024'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.035-0400 2019-07-25T18:25:37.035-0400 I NETWORK [js] DBClientConnection failed to receive message from localhost:20021 - HostUnreachable: Connection closed by peer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.035-0400 2019-07-25T18:25:37.035-0400 I QUERY [js] Failed to end session { id: UUID("30e31a33-c594-4e2b-acaa-21f3cf31e12f") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host 'localhost:20021'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.035-0400 2019-07-25T18:25:37.035-0400 I NETWORK [js] DBClientConnection failed to receive message from localhost:20022 - HostUnreachable: Connection closed by peer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.035-0400 2019-07-25T18:25:37.035-0400 I QUERY [js] Failed to end session { id: UUID("69441b01-eadb-45da-8892-0ef8c459fadd") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host 'localhost:20022'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.035-0400 2019-07-25T18:25:37.035-0400 I NETWORK [js] DBClientConnection failed to receive message from localhost:20023 - HostUnreachable: Connection closed by peer
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.035-0400 2019-07-25T18:25:37.035-0400 I QUERY [js] Failed to end session { id: UUID("9301e8fc-5e79-43de-91e1-25ebb4769863") } due to HostUnreachable: network error while attempting to run command 'endSessions' on host 'localhost:20023'
[js_test:configsvr_failover_repro] 2019-07-25T18:25:37.048-0400 JSTest jstests/sharding/configsvr_failover_repro.js finished.
[executor:js_test:job0] 2019-07-25T18:25:37.048-0400 configsvr_failover_repro.js ran in 48.73 seconds: no failures detected.
[executor:js_test:job0] 2019-07-25T18:25:37.049-0400 Running job0_fixture_teardown...
[js_test:job0_fixture_teardown] 2019-07-25T18:25:37.049-0400 Starting the teardown of MongoDFixture (Job #0).
[MongoDFixture:job0] Stopping mongod on port 20000 with pid 2741...
[executor] 2019-07-25T18:25:37.049-0400 Waiting for threads to complete
[MongoDFixture:job0] 2019-07-25T18:25:37.049-0400 I CONTROL [signalProcessingThread] got signal 15 (Terminated: 15), will terminate after current cmd ends
[MongoDFixture:job0] 2019-07-25T18:25:37.050-0400 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
[MongoDFixture:job0] 2019-07-25T18:25:37.050-0400 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-20000.sock
[MongoDFixture:job0] 2019-07-25T18:25:37.050-0400 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
[MongoDFixture:job0] 2019-07-25T18:25:37.050-0400 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
[MongoDFixture:job0] 2019-07-25T18:25:37.053-0400 I STORAGE [signalProcessingThread] Deregistering all the collections
[MongoDFixture:job0] 2019-07-25T18:25:37.053-0400 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
[MongoDFixture:job0] 2019-07-25T18:25:37.053-0400 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
[MongoDFixture:job0] 2019-07-25T18:25:37.064-0400 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
[MongoDFixture:job0] 2019-07-25T18:25:37.064-0400 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
[MongoDFixture:job0] 2019-07-25T18:25:37.064-0400 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
[MongoDFixture:job0] 2019-07-25T18:25:37.108-0400 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
[MongoDFixture:job0] 2019-07-25T18:25:37.108-0400 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
[MongoDFixture:job0] 2019-07-25T18:25:37.108-0400 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
[MongoDFixture:job0] 2019-07-25T18:25:37.291-0400 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
[MongoDFixture:job0] 2019-07-25T18:25:37.294-0400 I CONTROL [signalProcessingThread] now exiting
[MongoDFixture:job0] 2019-07-25T18:25:37.294-0400 I CONTROL [signalProcessingThread] shutting down with code:0
[MongoDFixture:job0] Successfully stopped the mongod on port 20000.
[js_test:job0_fixture_teardown] 2019-07-25T18:25:37.300-0400 Finished the teardown of MongoDFixture (Job #0).
[executor:js_test:job0] 2019-07-25T18:25:37.300-0400 job0_fixture_teardown ran in 0.25 seconds: no failures detected.
[executor] 2019-07-25T18:25:37.300-0400 Threads are completed!
[executor] 2019-07-25T18:25:37.301-0400 Summary: All 3 test(s) passed in 50.04 seconds.
[resmoke] 2019-07-25T18:25:37.301-0400 ================================================================================
[resmoke] 2019-07-25T18:25:37.301-0400 Summary of with_server suite: All 3 test(s) passed in 50.04 seconds.
3 test(s) ran in 50.04 seconds (3 succeeded, 0 were skipped, 0 failed, 0 errored)
js_tests: All 3 test(s) passed in 50.04 seconds.
[resmoke] 2019-07-25T18:25:37.301-0400 Exiting with code: 0