[SERVER-8161] locks leak Created: 14/Jan/13 Updated: 15/Feb/17 Resolved: 15/Feb/17 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Internal Code |
| Affects Version/s: | 2.2.1 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Maurizio Sambati | Assignee: | Unassigned |
| Resolution: | Done | Votes: | 0 |
| Labels: | triage |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: |
|
| Operating System: | ALL |
| Steps To Reproduce: | 1. Create a replica set |
| Participants: |
| Description |
|
Databases removed in a replica set remain in the lock table reported by serverStatus, so the table grows without bound if you repeatedly create and remove databases. |
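The reported behavior can be illustrated with a minimal sketch. This is not MongoDB source code: `LockStatsRegistry`, `record_lock`, and `drop_database` are hypothetical names that model the pre-3.0 design described in the comments below, where per-database lock statistics are kept in a map that is never reaped when a database is dropped.

```python
class LockStatsRegistry:
    """Toy model of serverStatus per-database lock statistics (hypothetical)."""

    def __init__(self):
        self.per_db = {}  # database name -> lock counters

    def record_lock(self, db_name):
        # An entry is created on first use of a database...
        stats = self.per_db.setdefault(db_name, {"acquires": 0})
        stats["acquires"] += 1

    def drop_database(self, db_name):
        # ...but (modeling the pre-3.0 behavior) dropping the database
        # intentionally leaves the statistics entry behind.
        pass

    def server_status_locks(self):
        return dict(self.per_db)


registry = LockStatsRegistry()
for i in range(1000):
    name = f"tmpdb_{i}"   # churn: create, touch, and drop a database
    registry.record_lock(name)
    registry.drop_database(name)

# Every database was dropped, yet all 1000 entries remain in the lock table.
print(len(registry.server_status_locks()))  # 1000
```

Under this model, a workload that creates and drops many short-lived databases makes the serverStatus locks section grow monotonically, which matches the symptom in this ticket.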
| Comments |
| Comment by Eric Milkie [ 15/Feb/17 ] |
|
This is no longer an issue as of the 3.0 release. |
| Comment by Andrew Morrow (Inactive) [ 15/Jan/13 ] |
|
I agree that it is a leak, but also one that is unlikely to have adverse behavior except in somewhat unusual circumstances (continual db instance churn), and for which there is apparently an effective workaround (using collections). Calling it a 'feature request' was a poor choice of words on my part: I just want to update the ticket to reflect that you have a way forward, but that this is a real problem that we should fix in a future release. |
| Comment by Maurizio Sambati [ 15/Jan/13 ] |
|
Hi Andrew, I'll do the switch soon. Thanks. |
| Comment by Andrew Morrow (Inactive) [ 15/Jan/13 ] |
|
Hi Maurizio - There really isn't a way to collect the per-db stats with the current code, except by restarting the server. Switching to creating and destroying collections may be a reasonable workaround. If you like, I'll update this ticket to a feature request for these locks to be scoped to the lifetime of the db, so that in the future you could move back to using databases. |
| Comment by Maurizio Sambati [ 15/Jan/13 ] |
|
Hi Andrew, thank you for replying. In our current system we're creating and destroying a large number of dbs (one every 5 minutes). Is there any way to reclaim these locks while I write the migration? (Also, all these locks will remain in the server forever...) |
| Comment by Andrew Morrow (Inactive) [ 14/Jan/13 ] |
|
Hi Maurizio - The behavior you are observing is consistent with the current design and implementation of the serverStatus command, and is independent of replication (you will observe the same behavior on a single node). The current implementation intentionally does not reap the lock statistics when a database is destroyed. However, I can see how if you are constantly creating and destroying databases that this behavior could be problematic. If you like, I can update this ticket to make it a feature request that we could address in a future release. |
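The workaround suggested above (churning collections instead of databases) can be sketched with the same toy model. This is a hypothetical illustration, not MongoDB source: the key point is that the leaked statistics entries are keyed per database, so collection names never enter the map and a single long-lived database keeps it at a fixed size.

```python
class LockStatsRegistry:
    """Toy model: lock statistics keyed per database only (hypothetical)."""

    def __init__(self):
        self.per_db = {}  # database name -> lock counters

    def record_lock(self, db_name):
        stats = self.per_db.setdefault(db_name, {"acquires": 0})
        stats["acquires"] += 1


registry = LockStatsRegistry()
for i in range(100):
    # All activity happens inside one database ("workdb" is an assumed
    # name); collections come and go, but the stats key never changes,
    # so the lock table stays at a single entry.
    registry.record_lock("workdb")

print(len(registry.per_db))  # 1
```

This is why switching from one-database-per-job to one-collection-per-job bounds the growth of the serverStatus locks section on pre-3.0 servers.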