[SERVER-54581] Report the WT all_durable timestamp in serverStatus Created: 16/Feb/21 Updated: 06/Dec/22 Resolved: 07/Jan/22
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Storage |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Dianna Hohensee (Inactive) | Assignee: | Backlog - Storage Execution Team |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Assigned Teams: | Storage Execution |
| Participants: | |
| Description |

It may be helpful to report the "no holes" point for debuggability: for example, in flow control, where replication could be held up by an oplog hole or by throttling.

Note: replSetGetStatus already returns the last durable optime and wall time. Primaries periodically update last durable with the WT all_durable point via the journal flushing logic. Secondaries, however, do not periodically fetch the WT all_durable timestamp.

I'm envisioning either:

- the serverStatus section reports differently in primary mode vs. secondary mode, taking advantage of the journal flushing lookups; or
- both primary and secondary mode fetch the timestamp from the storage engine directly (I'm not sure whether there is a performance consequence to doing so); see the sketch after this list.
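As a rough illustration of the second option, here is a minimal sketch that assumes direct use of the public WiredTiger WT_CONNECTION::query_timestamp API with the "get=all_durable" config string, rather than the server's internal storage-engine wrappers; the helper name queryAllDurableTimestamp and the error handling are illustrative only and are not the actual SERVER-54581 change.

```cpp
#include <wiredtiger.h>

#include <cstdint>
#include <cstdlib>
#include <stdexcept>

// Sketch: read WiredTiger's all_durable timestamp through the public
// query_timestamp API so a serverStatus section could report it.
// `conn` is assumed to be an already-opened WT_CONNECTION*.
std::uint64_t queryAllDurableTimestamp(WT_CONNECTION* conn) {
    // WiredTiger returns the timestamp as a hex string with no "0x" prefix;
    // a 64-bit timestamp needs at most 16 hex digits plus a NUL terminator.
    char buf[32] = {};
    int ret = conn->query_timestamp(conn, buf, "get=all_durable");
    if (ret != 0) {
        // wiredtiger_strerror maps a WiredTiger error code to a message.
        throw std::runtime_error(wiredtiger_strerror(ret));
    }
    return std::strtoull(buf, nullptr, /*base=*/16);
}
```

In an actual serverStatus section, the raw 64-bit value would typically be surfaced as a Timestamp rather than a bare integer, matching how other durability points are reported.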
| Comments |
| Comment by Louis Williams [ 07/Jan/22 ] |
dianna.hohensee can we close this?