[JAVA-591] Long-running tailable cursors consume too much memory Created: 29/Jun/12 Updated: 29/Jan/15 Resolved: 27/Oct/14 |
|
| Status: | Closed |
| Project: | Java Driver |
| Component/s: | Query Operations |
| Affects Version/s: | 2.7.2 |
| Fix Version/s: | 2.13.0 |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Adam Warski | Assignee: | Jeffrey Yemin |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
||||
| Description |
|
I already posted this on the mailing list and it didn't receive any replies, but I still think it's a bug. Please correct me if I'm wrong. A tailable cursor on a capped collection can potentially live for a very long time and return many batches of data. The driver records the size of every batch it receives in a list, which is available via DBCursor.getSizes(). If that list grows and grows, especially when a lot of data is being added to the capped collection, it will eventually cause an OutOfMemoryError. Is that right? UPDATE: Added DBCursor.disableBatchSizeTracking() to work around this problem. |
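The behavior described above can be illustrated with a minimal, self-contained sketch (no MongoDB required). The class below is a hypothetical model of the tracking behavior, not the actual driver code; only the method names getSizes() and disableBatchSizeTracking() mirror the DBCursor API mentioned in this ticket.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Illustrative model of the issue: a long-lived tailable cursor records
// the size of every batch it receives, so the list grows without bound
// unless tracking is disabled.
class BatchSizeTrackingModel {
    private final List<Integer> sizes = new ArrayList<>();
    private boolean trackingEnabled = true;

    // Mirrors the DBCursor.disableBatchSizeTracking() method added for this
    // ticket: stop recording, and drop whatever has accumulated so far.
    void disableBatchSizeTracking() {
        trackingEnabled = false;
        sizes.clear();
    }

    // Hypothetical hook, called once per batch fetched from the server.
    void onBatchReceived(int batchSize) {
        if (trackingEnabled) {
            sizes.add(batchSize);
        }
    }

    List<Integer> getSizes() {
        return Collections.unmodifiableList(sizes);
    }

    public static void main(String[] args) {
        BatchSizeTrackingModel cursor = new BatchSizeTrackingModel();
        for (int i = 0; i < 1_000_000; i++) {
            cursor.onBatchReceived(100);   // one list entry retained per batch
        }
        System.out.println(cursor.getSizes().size());

        cursor.disableBatchSizeTracking();
        cursor.onBatchReceived(100);       // no longer recorded
        System.out.println(cursor.getSizes().size());
    }
}
```

With tracking enabled, memory use is proportional to the number of batches ever fetched; after disabling, it stays constant, which is the workaround the UPDATE refers to.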
| Comments |
| Comment by Jeffrey Yemin [ 29/Jan/15 ] |
|
2.13.0 has been released. Closing issue. |
| Comment by Githook User [ 27/Oct/14 ] |
|
Author: Jeff Yemin (username: jyemin, email: jeff.yemin@10gen.com). Message: Added a way to disable batch size tracking on DBCursor, in order to keep the list which tracks each batch size from growing. |
| Comment by Adam Warski [ 02/Jul/12 ] |
|
No, I'm not seeing this problem in practice yet, though the possibility is disturbing, especially since tailable cursors may live for a very long time, right? (I'm not an expert on Mongo, so I might be missing something here.) I think using a list which holds the last N batch sizes would be best. |
| Comment by Jeffrey Yemin [ 02/Jul/12 ] |
|
Are you seeing this problem in practice? It would take a lot of batches before the array grows to any significant size. A couple of options I can see:
|
| Comment by Adam Warski [ 29/Jun/12 ] |
|
The forum entry is: |