[SERVER-42016] Add applyOps option to suppress the results array Created: 28/Jun/19 Updated: 06/Dec/22
| Status: | Backlog |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | David Golden | Assignee: | Backlog - Replication Team |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: | |
| Assigned Teams: | Replication |
| Participants: | |
| Description |
|
In exploring mongomirror performance, we're looking into why applyOps batches are limited to 1000 entries and whether we can raise that limit (up to the 16MB document size limit) when applying many small operations. One consideration is that the response includes an array of results. We believe the 1000 op limit may have been chosen historically to keep the response from exceeding 16 MB in the error case, possibly because of the size of the 'results' array. However, mongomirror never looks at the 'results' array, only the 'ok' field. Could we consider adding (and backporting) an option for applyOps to suppress the 'results' field? That would allow packing many more small ops into a single applyOps command, which should improve throughput. |
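As a rough sketch of what the proposed call might look like from a driver, assuming a hypothetical option name such as `suppressResults` (the actual flag name and semantics are not decided by this ticket), using PyMongo:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# A large batch of small oplog-style insert entries.
ops = [
    {"op": "i", "ns": "test.coll", "o": {"_id": i, "x": i}}
    for i in range(100_000)
]

# 'suppressResults' is a hypothetical option illustrating the request:
# omit the per-op 'results' array and return only the top-level status.
reply = client.admin.command({"applyOps": ops, "suppressResults": True})

# mongomirror only inspects the top-level 'ok' field today.
assert reply["ok"] == 1.0
```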
| Comments |
| Comment by David Golden [ 29/Jun/19 ] |
|
Backlog is fine, thanks. |
| Comment by Siyuan Zhou [ 29/Jun/19 ] |
|
david.golden, thanks for the update. Do you mind if we put this ticket in the backlog, since it's not blocking you? We can revisit it if the "results" array becomes a bottleneck for mongomirror. |
| Comment by David Golden [ 29/Jun/19 ] |
|
Update: I've done some spot testing and successfully ran applyOps with up to 100k op entries, so I don't think this ticket is blocking us from increasing op batch sizes. Even so, 10k or 100k `true` responses in a BSON array do take up some space, so eliminating them would marginally reduce the latency of receiving the response. |
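For a rough sense of that overhead, a back-of-the-envelope check of the encoded size of such a 'results' array (using PyMongo's bson package; the script is only an illustration, not a measurement of an actual server response):

```python
import bson

# Approximate the 'results' payload applyOps would return for 100k
# successful ops: an array of 100,000 boolean 'true' values.
results_doc = {"results": [True] * 100_000}

encoded = bson.encode(results_doc)
print(f"{len(encoded)} bytes")  # several hundred KB just for the booleans
```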