[SERVER-5285] Either disallow or fix capped collections with more than 2^32 docs Created: 11/Mar/12 Updated: 11/Jul/16 Resolved: 26/Nov/12

Status: Closed
Project: Core Server
Component/s: Storage
Affects Version/s: 2.1.0
Fix Version/s: 2.3.2
Type: Bug
Priority: Major - P3
Reporter: Richard Kreuter (Inactive)
Assignee: Eliot Horowitz (Inactive)
Resolution: Done
Votes: 2
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Issue Links:
Operating System: ALL
Participants:

Description
The max-documents counter for a capped collection is an int, and it appears that creating a very large capped collection and then inserting more than 2^31-1 documents into it yields a collection whose document counters in collStats stop working.
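The failure mode described here can be illustrated outside the server: a counter held in a 32-bit signed int wraps to a negative value once it is incremented past 2^31 - 1. A minimal sketch (not the server's actual code; the fix referenced in the commit log below switched the capped-alloc counter from int to long long), using JavaScript's `| 0` to mimic 32-bit signed arithmetic:

```javascript
// Illustration of the bug class only: a counter kept in a 32-bit
// signed int overflows once it passes 2^31 - 1.
function bumpInt32(counter) {
  // `| 0` truncates to a 32-bit signed integer, mimicking C's `int`.
  return (counter + 1) | 0;
}

const INT32_MAX = 2147483647; // 2^31 - 1
console.log(bumpInt32(INT32_MAX)); // wraps to -2147483648
```

Once the counter goes negative, any bookkeeping that compares it against a document limit (or reports it in stats) misbehaves, which matches the symptoms reported in the comments below.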
Comments
Comment by auto [ 26/Nov/12 ]

Author: Eliot Horowitz <eliot@10gen.com> (2012-11-26T02:09:46Z)
Message:
Comment by auto [ 26/Nov/12 ]

Author: Eliot Horowitz <eliot@10gen.com> (2012-11-20T19:26:09Z)
Message:
Comment by auto [ 26/Nov/12 ]

Author: Eliot Horowitz <eliot@10gen.com> (2012-11-20T15:29:52Z)
Message:
Comment by Thorn Roby [ 28/Sep/12 ]

SECONDARY> db.log.stats()
Comment by Thorn Roby [ 24/Sep/12 ]

We've been successfully running version 2.0.4a for several months with the interim fix for this issue, which, as far as I know, consists of ignoring the 32-bit value (~2 billion) when deciding whether to wrap a collection that exceeds this count.
Comment by auto [ 09/Apr/12 ]

Author: Eliot Horowitz (erh) <eliot@10gen.com>
Message: use long long instead of int as part of capped alloc - part of
Comment by auto [ 07/Apr/12 ]

Author: Eliot Horowitz (erh) <eliot@10gen.com>
Message: make NamespaceDetails::(capped|max) private to work on
Comment by Daniel Crosta [ 06/Apr/12 ]

Also note: I exhaustively tested that the documents are no longer findable by doing

I ran this query after I began to notice that db.collection.find().sort({$natural: 1}).limit(1).next() was not finding the first element I inserted (where i == 0).
Comment by Daniel Crosta [ 06/Apr/12 ]

This seems to be an actual issue where records are deleted "near" 2^31 - 1 records in a capped collection (I say "near" because I've not observed the count go to 2^31 - 1 and stay there in my testing). Here's how I've repro'd this:

1. Create a huge capped collection (mine is 100G), and do not specify "max" when creating (just specify size)

Expected: As long as the "size" reported in stats() is lower than the "storageSize", the first element should not change regardless of the count.

Actual: The first element begins "advancing" once "count" in stats() is within about 5% of "max" (which is 2^31 - 1).
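The repro step above can be sketched in the mongo shell (collection name and sizes are illustrative, and inserting on the order of 2^31 documents takes a very long time; this requires a running mongod):

```javascript
// mongo shell sketch of the repro: a huge capped collection created
// with only "size" specified, never "max".
db.createCollection("hugecap", { capped: true, size: 100 * 1024 * 1024 * 1024 });

// Insert documents with an increasing field i.
for (var i = 0; i <= 2147483647; i++) {
    db.hugecap.insert({ i: i });
}

// Expected: while stats().size is below stats().storageSize, the
// oldest document should still be { i: 0 }.
db.hugecap.stats();
db.hugecap.find().sort({ $natural: 1 }).limit(1).next();
```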
Comment by Richard Kreuter (Inactive) [ 12/Mar/12 ]

Note: I think this would be an on-disk format change, so it might have to be handled carefully.