Under heavy insert or update load the oplog can grow much larger than the configured size, possibly indefinitely. I've triggered this with a few different workloads, sometimes reproducibly and sometimes not. Running 100 threads of the following against a 1-node replica set with a configured oplog size of 50 MB seems to trigger it fairly reliably:
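The original snippet isn't reproduced here; the following is a minimal sketch of that kind of workload, assuming pymongo, with the collection name, payload size, and per-thread op count as illustrative guesses:

```python
import os
import threading

def make_doc(payload_bytes=10_000):
    # ~10 KB of random payload per insert so the oplog fills quickly
    return {"payload": os.urandom(payload_bytes)}

def worker(coll, n_ops=10_000):
    for _ in range(n_ops):
        coll.insert_one(make_doc())

def run(uri="mongodb://localhost:27017", n_threads=100):
    # requires pymongo and a running single-node replica set
    from pymongo import MongoClient
    coll = MongoClient(uri).test.load
    threads = [threading.Thread(target=worker, args=(coll,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Calling run() against a throwaway deployment drives sustained concurrent inserts without any pacing between threads.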
Oplog growth as reported by the above code for two different runs, in both cases going well past the configured 50 MB size:
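The reporting code isn't preserved here either; one way such numbers can be gathered (an assumption, not the original code) is to poll collStats on local.oplog.rs:

```python
import time

def oplog_size_mb(client):
    # collStats "size" is the uncompressed data size in bytes
    stats = client.local.command("collStats", "oplog.rs")
    return stats["size"] / (1024 * 1024)

def watch(client, seconds=60, interval=1):
    # print the oplog's data size once per interval
    for _ in range(seconds // interval):
        print(f"{oplog_size_mb(client):.1f} MB")
        time.sleep(interval)
```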
In both runs the configured cache size was 5 GB, but virtual memory grew to about 8 GB and mongod was eventually killed by the OOM killer.
I think the issue is that capped collection inserts have no back-pressure mechanism to throttle writers, so inserts can outrun the deletes that reclaim space.
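A toy model of that hypothesis (pure Python, an illustration only, not mongod's actual truncation logic): a "capped collection" whose background truncator can remove only a bounded number of documents per tick stays near its cap when deletes keep pace, but grows without bound when unthrottled inserts outrun it.

```python
from collections import deque

def simulate(cap_bytes, insert_rate, delete_rate, ticks):
    """Each tick, writers append insert_rate one-byte docs with no
    back-pressure; the truncator then removes at most delete_rate docs
    while the log is over cap_bytes. Returns the final size in bytes."""
    log = deque()
    for _ in range(ticks):
        for _ in range(insert_rate):
            log.append(1)  # inserts never wait for space to be reclaimed
        removed = 0
        while len(log) > cap_bytes and removed < delete_rate:
            log.popleft()
            removed += 1
    return len(log)

# Deletes keep pace: the log hovers at its cap.
print(simulate(cap_bytes=50, insert_rate=5, delete_rate=5, ticks=100))   # 50
# Inserts outrun deletes: the log ends an order of magnitude over the cap.
print(simulate(cap_bytes=50, insert_rate=10, delete_rate=5, ticks=100))  # 525
```

The second case mirrors the observed behavior: nothing slows the writers down, so the excess accumulates at the net insert-minus-delete rate for as long as the load lasts.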