[SERVER-59615] Store constituent devices in FTDC metadata Created: 26/Aug/21 Updated: 22/Jan/24
| Status: | Open |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Kevin Arhelger | Assignee: | Brad Moore |
| Resolution: | Unresolved | Votes: | 3 |
| Labels: | former-quick-wins | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Attachments: | |
| Issue Links: | |
| Assigned Teams: | Server Security |
| Backwards Compatibility: | Fully Compatible |
| Sprint: | Security 2023-10-02, Security 2023-10-16, Security 2023-10-30, Security 2023-11-13, Security 2023-12-25, Security 2024-01-08, Security 2024-02-19 |
| Participants: | |
| Description |
|
Currently, FTDC metadata stores information about filesystem mounts. Being able to identify which disks back the mongod process would be incredibly useful, especially on machines with dozens of disks. Storing something similar to lsblk output would greatly help in these scenarios. |
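The request above can be illustrated with a small sketch. This is a hypothetical example, not real FTDC output: the JSON mimics the tree shape of `lsblk -J` (loop devices stacked under an LVM volume), and the function shows how such metadata would let a tool map a mountpoint back to its constituent devices.

```python
import json

# Illustrative lsblk-style JSON; device names and mountpoint are assumptions.
SAMPLE_LSBLK_JSON = """
{
  "blockdevices": [
    {"name": "loop0", "type": "loop",
     "children": [{"name": "test_vg-vg0", "type": "lvm",
                   "mountpoint": "/data/db"}]},
    {"name": "loop1", "type": "loop",
     "children": [{"name": "test_vg-vg0", "type": "lvm",
                   "mountpoint": "/data/db"}]}
  ]
}
"""

def constituents_by_mountpoint(lsblk_doc):
    """Map each mountpoint to the set of top-level devices beneath it."""
    result = {}
    def walk(node, top):
        mp = node.get("mountpoint")
        if mp:
            result.setdefault(mp, set()).add(top)
        for child in node.get("children", []):
            walk(child, top)
    for dev in lsblk_doc["blockdevices"]:
        walk(dev, dev["name"])
    return result

print(constituents_by_mountpoint(json.loads(SAMPLE_LSBLK_JSON)))
# {'/data/db': {'loop0', 'loop1'}}
```

With this mapping stored in FTDC metadata, a tool could filter disk metrics down to just the devices backing the dbpath.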
| Comments |
| Comment by Kevin Arhelger [ 31/Aug/21 ] |
|
Hello Mark, thanks for the feedback. 1. The script should enumerate all devices for a single filesystem. I only really care about the devices making up the filesystems used by the mongo* process (logpath, dbpath, auditDestination, diagnosticDataCollectionDirectoryPath, etc.). If it's not an issue to list the underlying device(s) for what could be dozens of filesystems, I see no reason not to include them, but I minimized the script output in case this was a concern. |
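The scoping described above (only filesystems used by the mongo* process) amounts to matching each configured path to its mountpoint. A minimal sketch, using illustrative paths and a longest-prefix match; the mount list here is an assumption, not real host data:

```python
import os

def mountpoint_for(path, mountpoints):
    """Return the deepest mountpoint that is a prefix of `path`."""
    path = os.path.abspath(path)
    best = "/"
    for mp in mountpoints:
        # A mountpoint matches if it equals the path or is a parent directory.
        if path == mp or path.startswith(mp.rstrip("/") + "/"):
            if len(mp) > len(best):
                best = mp
    return best

# Hypothetical mounts on the host:
mounts = ["/", "/data", "/data/db", "/var/log"]
print(mountpoint_for("/data/db/journal", mounts))             # /data/db
print(mountpoint_for("/var/log/mongodb/mongod.log", mounts))  # /var/log
```

Resolving dbpath, logpath, and similar settings this way yields the short list of filesystems whose devices actually need enumerating.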
| Comment by Mark Benvenuto [ 31/Aug/21 ] |
|
kevin.arhelger, your sample script is very helpful. Some clarifying questions: 1. Your script enumerates just one device per filesystem; do you want FTDC to include the information for all devices? Example:
Would you want more information than that, like the sizes of the constituent devices? |
| Comment by Bruce Lucas (Inactive) [ 27/Aug/21 ] |
|
Thanks for clarifying that the request is for additional metadata (not additional metrics). I'll pass this on to the appropriate team. |
| Comment by Kevin Arhelger [ 26/Aug/21 ] |
|
Thanks for the comments, Bruce. This suggestion is just for changes to FTDC metadata. The main pain point is systems with many RAID or LVM volumes. An Ops Manager Head DB is one example where there could be dozens of RAID disks but only one or two physical disks backing the dbpath. The end goal is to let tools automatically highlight which devices apply to the monitored process. Today: bsondump metrics.2021-08-26T21-08-58Z-00000 2> /dev/null | head -1 | jq '.doc.hostInfo.extra.mountInfo[-1]'
I have no way of knowing what /dev/mapper/test_vg-vg0 is made of (in this case it's /dev/loop0 and /dev/loop1).
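On Linux, the kernel exposes this relationship via `/sys/block/<dev>/slaves/`, which lists the devices a stacked (LVM/RAID) device is built from. A hedged sketch of recursively expanding a device into its leaf constituents; the topology lookup is injected as a plain dict so the example runs without real block devices:

```python
def resolve_constituents(dev, slaves_of):
    """Recursively expand `dev` into the leaf devices backing it.

    `slaves_of` maps a device name to its slave devices, standing in for
    the directory listing of /sys/block/<dev>/slaves/ on a real host.
    """
    slaves = slaves_of.get(dev, [])
    if not slaves:          # no slaves: a leaf (physical or loop) device
        return {dev}
    leaves = set()
    for s in slaves:
        leaves |= resolve_constituents(s, slaves_of)
    return leaves

# Simulated topology matching the example in this comment:
topology = {"test_vg-vg0": ["loop0", "loop1"]}
print(resolve_constituents("test_vg-vg0", topology))  # {'loop0', 'loop1'}
```

Recursion matters here because stacked devices can nest (e.g. LVM on top of RAID), and the answer an operator wants is the set of physical devices at the bottom.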
There are a few different options that would all work: |
| Comment by Bruce Lucas (Inactive) [ 26/Aug/21 ] |
|
Can you flesh this proposal out a bit, perhaps with a simple example showing what's in FTDC today and what would be in FTDC for the same system after this is implemented? Also, can you clarify whether you're talking about disk metrics or metadata (since you mention metadata in the opening comment, I'm a little uncertain). |