text_index_limits.js performs large inserts that run slowly enough to be killed by a stepdown before any attempt can finish on slower variants.
Looking at the test logs of the write_concern_majority_passthrough Evergreen task, inserting a document that has >400,000 unique terms takes ~30 seconds on the Windows 2008R2 DEBUG builder. The stepdown attempts occur every 8 seconds, so I agree it is impossible for this operation to ever succeed.
[ReplicaSetFixture:job14:primary] 2018-08-14T00:43:05.500+0000 I COMMAND [conn127] command test.text_index_limits appName: "MongoDB Shell" command: insert { insert: "text_index_limits", lsid: { id: UUID("f95a5144-11ce-4ff0-b50a-2b8e96a5ef40") }, $clusterTime: { clusterTime: Timestamp(1534207354, 3), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, writeConcern: { w: "majority", wtimeout: 300321.0 }, $db: "test" } ninserted:1 keysInserted:439752 numYields:0 reslen:230 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_msg 30652ms
...
[ReplicaSetFixture:job14:primary] 2018-08-14T00:43:09.895+0000 I COMMAND [conn127] command test.text_index_limits appName: "MongoDB Shell" command: insert { insert: "text_index_limits", lsid: { id: UUID("f95a5144-11ce-4ff0-b50a-2b8e96a5ef40") }, $clusterTime: { clusterTime: Timestamp(1534207354, 4), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } }, writeConcern: { w: "majority", wtimeout: 300321.0 }, $db: "test" } ninserted:1 keysInserted:16438 numYields:0 reslen:230 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 1 } } } protocol:op_msg 4276ms
https://evergreen.mongodb.com/task/mongodb_mongo_master_windows_64_2k8_debug_write_concern_majority_passthrough_ad1107c0daf14c4d2a457bf6fd89d41efe58e5b4_18_08_13_23_28_13
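For context, a minimal sketch of how a document with hundreds of thousands of unique terms can be constructed (this is illustrative only, not the actual text_index_limits.js code; the function name and "term" prefix are assumptions). Because every term is unique, a single insert into a collection with a text index on this field produces one index key per term, which is what makes the write slow enough on a DEBUG builder to be interrupted by the 8-second stepdown cycle:

```javascript
// Hypothetical helper: build a string of n unique whitespace-separated terms.
// Each unique term contributes one text-index key, so inserting a document
// containing buildManyTerms(400000) generates ~400,000 index keys in one write.
function buildManyTerms(n) {
    const terms = [];
    for (let i = 0; i < n; i++) {
        terms.push("term" + i); // unique suffix => unique term => unique key
    }
    return terms.join(" ");
}

// In a mongo shell test this would be inserted roughly like:
//   db.text_index_limits.createIndex({comment: "text"});
//   db.text_index_limits.insert({comment: buildManyTerms(400000)});
```

This matches the `keysInserted:439752` figure in the first log line above: one insert, hundreds of thousands of index keys, 30652ms on the slow variant.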