[SERVER-24575] Don't block writing command replies to network Created: 14/Jun/16 Updated: 06/Dec/22 |
|
| Status: | Backlog |
| Project: | Core Server |
| Component/s: | Networking |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Mathias Stearn | Assignee: | Backlog - Service Architecture |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | sa-remove-fv-backlog-22 | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: | | ||
| Assigned Teams: | Service Arch | ||
| Sprint: | Integrate+Tuning 16 (06/24/16) | ||
| Participants: | | ||
| Description |
|
This is most useful with exhaust cursors or pipelined queries because it allows preparing the next batch while the current one is being sent over the network. It will also provide a small improvement for normal RPC-style requests because it avoids the very short thread wakeup between sending the reply and reading the next request. |
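A minimal sketch of the pipelining benefit described above, assuming a single in-flight send per connection; buildBatch() and writeToSocket() are hypothetical placeholders standing in for the server's real batch construction and network write, not actual server APIs:

```cpp
// Sketch only: overlap building batch N+1 with writing batch N to the network.
// buildBatch() and writeToSocket() are hypothetical placeholders.
#include <future>
#include <string>
#include <utility>

std::string buildBatch(int n);           // placeholder: produce the reply bytes for batch n
void writeToSocket(const std::string&);  // placeholder: blocking network send

void streamBatches(int numBatches) {
    std::string current = buildBatch(0);
    for (int n = 1; n < numBatches; ++n) {
        // Hand the current batch to a helper thread instead of blocking here...
        auto inFlight = std::async(std::launch::async, writeToSocket, current);
        // ...and prepare the next batch while that write is in flight.
        std::string next = buildBatch(n);
        inFlight.get();  // keep at most one send outstanding per connection
        current = std::move(next);
    }
    writeToSocket(current);  // last batch: nothing left to overlap with
}
```

With a blocking write the two steps serialize; here the batch construction cost is hidden behind the network transfer, which is the gain described above for exhaust cursors and pipelined queries. |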
| Comments |
| Comment by Ratika Gandhi [ 08/Oct/19 ] |
|
This may improve performance for sync once sync uses exhaust cursors. Consider this for PERF work. |
| Comment by Mathias Stearn [ 15/Jun/16 ] |
|
schwerin: This ticket is specifically about sending replies from our ingress networking layer. I prototyped this change by routing the call to sock->say() through std::async when sending large replies (arbitrarily chosen as >1K). An ideal solution while keeping the current thread-per-connection model would be to do a non-blocking send on-thread and, if we get back EWOULDBLOCK, hand the rest of the send off to a dedicated async sending thread; the connection's thread would then immediately call into the blocking recv. I don't know if that is the solution we will go with for 3.3; that will depend largely on what the ingress networking refactor looks like. To be clear, I'm treating the change to make the OplogFetcher use pipelined getMores as separate from this work, even though they enhance each other. |
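A rough sketch of the non-blocking-send-then-hand-off idea described in the comment above, written against raw POSIX sockets rather than the server's transport layer; PendingSend and enqueueRemainder() are hypothetical stand-ins for the dedicated async sending thread:

```cpp
// Sketch only: try a non-blocking send on the connection's own thread; if the
// socket buffer fills (EWOULDBLOCK), hand the unsent tail to a dedicated
// sender thread so this thread can immediately go back to recv().
#include <sys/socket.h>
#include <sys/types.h>
#include <cerrno>
#include <cstddef>
#include <utility>
#include <vector>

struct PendingSend {
    int fd;
    std::vector<char> bytes;  // the part of the reply not yet sent
};

// Hypothetical: queue drained by a dedicated asynchronous sender thread.
void enqueueRemainder(PendingSend pending);

void sendReplyNonBlocking(int fd, const char* data, std::size_t len) {
    std::size_t sent = 0;
    while (sent < len) {
        ssize_t n = ::send(fd, data + sent, len - sent, MSG_DONTWAIT);
        if (n > 0) {
            sent += static_cast<std::size_t>(n);
            continue;
        }
        if (n < 0 && (errno == EWOULDBLOCK || errno == EAGAIN)) {
            // Kernel send buffer is full: let the async sender finish this
            // reply while this thread returns to reading the next request.
            PendingSend rest{fd, std::vector<char>(data + sent, data + len)};
            enqueueRemainder(std::move(rest));
            return;
        }
        if (n < 0 && errno == EINTR) {
            continue;  // interrupted by a signal; just retry
        }
        return;  // other errors: connection teardown handled elsewhere
    }
}
```

The sender thread would also need to preserve per-connection write ordering, e.g. by not starting a new reply for a connection that still has a queued remainder. |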
| Comment by Andy Schwerin [ 15/Jun/16 ] |
|
redbeard0531, could you please add a more detailed description of the proposed work? |