[SERVER-19995] Performance drop-off when capped collection becomes full in WiredTiger Created: 17/Aug/15 Updated: 26/Oct/15 Resolved: 25/Oct/15
| Status: | Closed |
| Project: | Core Server |
| Component/s: | WiredTiger |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Ramon Fernandez Marina | Assignee: | Michael Cahill (Inactive) |
| Resolution: | Done | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: |
| Backwards Compatibility: | Fully Compatible |
| Operating System: | ALL |
| Steps To Reproduce: | See |
| Participants: | |
| Description |
This is a continuation of |
| Comments |
| Comment by Alexander Gorrod [ 29/Sep/15 ] |
I re-ran the test against the MongoDB 3.0 code base (commit ea2cc1388cf707512a04f4437def3aedd78c7211, between the 3.0.6 and 3.0.7 releases). There is some degradation, but not the performance cliff that this issue describes:
Throughput appears to stabilise at around 52k inserts. I think this issue can be closed. bruce.lucas, do you agree? |
| Comment by Alexander Gorrod [ 29/Sep/15 ] |
I've re-run the original workload. Mongostat output from August 18th (git commit d4e4b25d8ca52f79781fc1fdd96d28bed08212cc):
Mongostat output from September 29th (git commit a998887902dfd9cb6f125233c86361064f80c57e):
The level of throughput is maintained for at least 15 minutes with the newer release. I've verified that the size of the collection has definitely stabilized during the recorded time period (i.e., the volume of inserts exceeds the cap size). |
| Comment by Alexander Gorrod [ 28/Sep/15 ] |
Is this expected behavior? Once a capped collection becomes full, work needs to be done to remove the obsolete data. That work isn't free, so I would expect performance to drop off. We could potentially investigate optimizing the code that maintains the size of capped collections - I'd call that an enhancement rather than a bug fix, though. |
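To illustrate the mechanism described above: a capped collection holds a fixed amount of data, and once the cap is reached every new insert also pays for removing the oldest document(s). The following is a toy Python model of that FIFO behavior (not MongoDB or WiredTiger code; the class and counter names are illustrative only), showing that eviction work only begins once the collection is full:

```python
from collections import deque


class CappedCollectionModel:
    """Toy model of a capped collection: bounded by document count;
    once full, each insert evicts the oldest document (FIFO order)."""

    def __init__(self, max_docs):
        self.docs = deque()
        self.max_docs = max_docs
        self.evictions = 0  # extra work incurred only after the cap is hit

    def insert(self, doc):
        if len(self.docs) >= self.max_docs:
            # Removing obsolete data isn't free: this runs on every
            # insert once the collection is full.
            self.docs.popleft()
            self.evictions += 1
        self.docs.append(doc)


cap = CappedCollectionModel(max_docs=3)
for i in range(5):
    cap.insert({"_id": i})

print([d["_id"] for d in cap.docs])  # → [2, 3, 4] (oldest evicted)
print(cap.evictions)                 # → 2 (only the last two inserts evicted)
```

This is why throughput in the mongostat traces drops once the cap size is exceeded and then stabilises: the steady state does strictly more work per insert than the fill phase.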