This may be related to
SERVER-16235, but I'm filing it as a separate ticket: my understanding is that a workaround is in place to prevent SERVER-16235 from impacting the oplog, and the symptoms here differ somewhat, so this may be a distinct issue.
Tested on a build from master this afternoon (365cca0c47566d192ca847f0b077cedef4b3430e).
- The test, run on a single-node replica set, repeatedly updates the document in a single-document collection. This generates a large number of oplog entries while minimizing the work to perform each op, in order to emphasize oplog performance.
- The yellow graph below shows performance (measured in number of updates done) declining over time. The decline starts at the first oplog wraparound, at about 90 k updates, as seen in
SERVER-16235, but performance then recovers and enters a cycle of repeated declines and recoveries. Superimposed on this is a longer-term downward trend, which possibly distinguishes this issue from SERVER-16235. It is not clear from this data whether the asymptote is >0.
- The red graph shows that the decline disappears when the same test is run against a standalone instance, confirming that this is an oplog issue.
- The blue graph shows that the decline is not seen with mmapv1, confirming that this is a WT-specific issue.
- Restarting mongod (not shown) resets the behavior back to time 0.