We don't currently do this, which means each mongod will consider 5% of the available disk space as its default oplog size. Other than SERVER-27843, this seemingly hasn't been an issue in Evergreen. I believe this is because when we aren't setting the oplog size in the resmoke.py YAML suite, we are either (a) running with WiredTiger and taking advantage of disk compression, or (b) running with MMAPv1 but only using --jobs=1.
I think we should use a 511MB oplog as the default in ReplicaSetFixture to match what we're already doing in the replica_sets_jscore_passthrough.yml test suite. I suspect that if we were to try to match the 40MB oplog that ReplSetTest uses for data-bearing members, or the 16MB oplog of ShardingTest replica set shards, we'd end up making the work from SERVER-26884 less useful because a smaller oplog would retain less history.
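A minimal sketch of how the default could be applied when the suite YAML doesn't specify an oplog size. The function name, dict-based options, and constant are hypothetical illustrations, not the actual ReplicaSetFixture code; mongod's --oplogSize flag does take a value in megabytes:

```python
# Hypothetical sketch: apply a 511MB default oplog size in
# ReplicaSetFixture when the resmoke.py suite YAML doesn't set one.
DEFAULT_OPLOG_SIZE_MB = 511  # matches replica_sets_jscore_passthrough.yml


def apply_default_oplog_size(mongod_options):
    """Fill in --oplogSize (in MB) only if the suite didn't set it."""
    # setdefault() preserves any explicit value from the YAML config,
    # so suites that already choose an oplog size are unaffected.
    mongod_options.setdefault("oplogSize", DEFAULT_OPLOG_SIZE_MB)
    return mongod_options
```

An explicit value in the suite's mongod_options would still win, so only suites relying on mongod's 5%-of-disk default would change behavior.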