<!-- 
RSS generated by JIRA (9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66) at Thu Feb 08 05:02:05 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>MongoDB Jira</title>
    <link>https://jira.mongodb.org</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.7.1</version>
        <build-number>970001</build-number>
        <build-date>13-04-2023</build-date>
    </build-info>


<item>
            <title>[SERVER-43038] Commit point can be stale on slaveDelay nodes and cause memory pressure</title>
                <link>https://jira.mongodb.org/browse/SERVER-43038</link>
                <project id="10000" key="SERVER">Core Server</project>
                    <description>&lt;p&gt;After restarting a stale slaveDelay node, the learned higher OpTime in the same term can be forgotten. If the commit point is in a higher term on all other nodes, the slaveDelay node can only advance its commit point on getMore responses due to &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-39831&quot; title=&quot;Never update commit point beyond last applied if learned from sync source&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-39831&quot;&gt;&lt;del&gt;SERVER-39831&lt;/del&gt;&lt;/a&gt;. On slaveDelay nodes, it&apos;s likely the buffer is already full, so the applier has to apply 16MB worth of oplog entries to make room for bgsync to insert the last fetched batch and call another getMore. Applying 16MB of oplog entries may be enough to trigger memory pressure, causing evictions.&lt;/p&gt;

&lt;p&gt;The issue will resolve when the slaveDelay node starts to apply oplog entries from the latest term. Memory pressure and evictions on slaveDelay nodes are undesired but not harmful.&lt;/p&gt;

&lt;p&gt;The same issue can happen without restart. Let&apos;s say an election happens in term 8 at time T0, but the node delays by 5 days and is still applying entries from term 7. At T0 + 2 days, another election occurs in term 9. Now the commit point is in term 9. At T0 + 5 days, when the delayed node starts to apply entries in term 8, it cannot advance its commit point beyond its last applied. Eventually, when the node starts to apply entries from term 9, everything&apos;s fine again.&lt;/p&gt;

&lt;p&gt;=======================================&lt;br/&gt;
Original title and description:&lt;br/&gt;
WT eviction threads consume a lot of CPU even when there is no apparent cache pressure&lt;/p&gt;

&lt;p&gt;After upgrading from 3.6 to 4.0.12 we encountered an overly high CPU consumption on our slave-delayed hidden replica set member. Restarting the member doesn&apos;t help, the CPU consumption goes down, but then goes up after some time.&lt;br/&gt;
We recorded some logs, perf traces and statistics snapshots, see attached files. Also included are FTDC files for the relevant interval and some graphs from our monitoring system.&lt;/p&gt;

&lt;p&gt;&quot;Before&quot; means before the CPU spike, &quot;after&quot; &amp;#8211; after it (occurred about 15:47:31 +/- 5s).&lt;/p&gt;

&lt;p&gt;When CPU consumption is high, according to `perf report` about 96% of time is spent in `__wt_evict` (see `mongod-after.perf.txt` and `mongod-after.perf.data`). This coincides with `cache overflow score` metric jumping up from 0 to 100 (see `caches-before.log` and `caches-after.log`), despite the `bytes currently in the cache` (5703522791) being much smaller than `maximum bytes configured` (8589934592).&lt;/p&gt;

&lt;p&gt;This is a hidden delayed secondary, so there should be next to no load except replicating writes which are pretty low-volume. Before upgrading to 4.0 we did not have any issues regarding this service.&lt;/p&gt;</description>
                <environment>Ubuntu 16.04.6 LTS&lt;br/&gt;
Linux scorpius 4.15.0-58-generic #64~16.04.1-Ubuntu SMP Wed Aug 7 14:10:35 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux&lt;br/&gt;
&lt;br/&gt;
db version v4.0.12&lt;br/&gt;
git version: 5776e3cbf9e7afe86e6b29e22520ffb6766e95d4&lt;br/&gt;
OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016&lt;br/&gt;
allocator: tcmalloc&lt;br/&gt;
modules: none&lt;br/&gt;
build environment:&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;distmod: ubuntu1604&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;distarch: x86_64&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;target_arch: x86_64&lt;br/&gt;
&lt;br/&gt;
2xIntel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz&lt;br/&gt;
&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;total        used        free      shared  buff/cache   available&lt;br/&gt;
Mem:           125G        372M         61G        2.5M         64G        124G&lt;br/&gt;
Swap:           29G          0B         29G&lt;br/&gt;
</environment>
        <key id="906686">SERVER-43038</key>
            <summary>Commit point can be stale on slaveDelay nodes and cause memory pressure</summary>
                <type id="1" iconUrl="https://jira.mongodb.org/secure/viewavatar?size=xsmall&amp;avatarId=14703&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.mongodb.org/images/icons/priorities/major.svg">Major - P3</priority>
                        <status id="6" iconUrl="https://jira.mongodb.org/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="2">Won&apos;t Fix</resolution>
                                        <assignee username="backlog-server-repl">Backlog - Replication Team</assignee>
                                    <reporter username="onyxmaster">Aristarkh Zagorodnikov</reporter>
                        <labels>
                            <label>caching</label>
                            <label>wiredtiger</label>
                    </labels>
                <created>Mon, 26 Aug 2019 14:41:29 +0000</created>
                <updated>Tue, 6 Dec 2022 02:49:41 +0000</updated>
                            <resolved>Fri, 3 Jan 2020 21:01:03 +0000</resolved>
                                    <version>4.0.12</version>
                                                    <component>WiredTiger</component>
                                        <votes>1</votes>
                                    <watches>15</watches>
                <comments>
                            <comment id="2703789" author="onyxmaster" created="Sat, 4 Jan 2020 09:25:42 +0000"  >&lt;p&gt;Tess, thank you for the clarification!&lt;/p&gt;</comment>
                            <comment id="2703353" author="tess.avitabile" created="Fri, 3 Jan 2020 21:18:19 +0000"  >&lt;p&gt;Apologies, I accidentally posted my comment as internal-only. We are working on a project to make the server much more resilient to commit point lag in general, so we are not going to address this particular bug.&lt;/p&gt;</comment>
                            <comment id="2703340" author="onyxmaster" created="Fri, 3 Jan 2020 21:12:25 +0000"  >&lt;p&gt;Not even a single line about why this is not going to be fixed at all?&lt;/p&gt;</comment>
                            <comment id="2505668" author="onyxmaster" created="Mon, 28 Oct 2019 21:15:33 +0000"  >&lt;p&gt;Well, I&apos;m glad you got to the bottom of this, and thank you for keeping us updated on the inner details.&lt;/p&gt;

&lt;p&gt;Looking forward to the permanent fix (even if it would take some time for it to be released).&lt;/p&gt;</comment>
                            <comment id="2505382" author="daniel.hatcher" created="Mon, 28 Oct 2019 18:52:05 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=onyxmaster&quot; class=&quot;user-hover&quot; rel=&quot;onyxmaster&quot;&gt;onyxmaster&lt;/a&gt;, at this point, we&apos;re fairly confident in the following analysis. Because this is due to our internal mechanisms, I have attempted to condense the information, but please feel free to let us know if you&apos;re looking for more detail.&lt;/p&gt;

&lt;p&gt;The high CPU usage is due to high cache pressure which triggers application threads to be used for data eviction. This cache pressure was caused by lag in our read concern &quot;majority&quot; mechanism. This lag was caused by having a two-term difference between the replica set and the delayed operations.&lt;/p&gt;

&lt;p&gt;You&apos;ve noticed the problem after restarts for two reasons:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;Restarts can trigger elections which change term&lt;/li&gt;
	&lt;li&gt;The node loses its in-memory representation of some metadata&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;As long as there were two elections during the slaveDelay, the node would have reached this state even without being restarted.&lt;/p&gt;

&lt;p&gt;However, the good news is this scenario will automatically recover once the delayed node starts applying operations from the same term. This situation is not desirable behavior but because so many factors have to occur we will be treating it as a lower priority for a permanent fix. &lt;/p&gt;</comment>
                            <comment id="2484285" author="onyxmaster" created="Wed, 16 Oct 2019 08:44:36 +0000"  >&lt;p&gt;Daniel, unfortunately we don&apos;t have this data, since it was rolled over a long time ago. Currently our delayed secondaries do not experience this issue, but I restarted them, attempting to trigger it. Sorry I couldn&apos;t help much &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.mongodb.org/images/icons/emoticons/sad.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;</comment>
                            <comment id="2483692" author="daniel.hatcher" created="Tue, 15 Oct 2019 20:15:30 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=onyxmaster&quot; class=&quot;user-hover&quot; rel=&quot;onyxmaster&quot;&gt;onyxmaster&lt;/a&gt;, apologies for the delay in response; this has been a tough problem to crack. We think we have a solid idea but are missing a key piece of evidence. Earlier in the ticket I asked for the logs and diagnostic.data of the shard Primary. It looks like the diagnostics are actually from a different replica set Primary. I know it&apos;s been a while but do you still have the diagnostics for the Primary for the &quot;driveFS-files-1&quot; shard covering September 11th through 13th?&lt;/p&gt;

&lt;p&gt;If this data has rolled over but you can see the same symptoms of large CPU usage on another delayed Secondary, could you please upload logs and diagnostic.data for both the delayed node and the Primary for that shard? &lt;/p&gt;</comment>
                            <comment id="2433309" author="bruce.lucas@10gen.com" created="Wed, 25 Sep 2019 18:23:15 +0000"  >&lt;p&gt;Summary (from above and as discussed in meeting): for some reason the delayed secondary stops updating its timestamps, causing data to be pinned, which causes the CPU symptom the customer observed. This was not happening before the node was restarted, but began happening repeatedly after the node was restarted.&lt;/p&gt;

&lt;p&gt;Regarding the &quot;Restarting oplog query due to error: CursorNotFound&quot; behavior specifically, we see that happening repeatedly both before and after the restart:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/231769/231769_overview.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;I think the reason for this is illustrated by the cycle from A to B.&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;At A we establish a new oplog query (&quot;readersCreate&quot;) and fetch some oplog entries (&quot;getmores num&quot;) until we fill the replication buffer (&quot;repl buffer sizeBytes&quot;).&lt;/li&gt;
	&lt;li&gt;Then from A to B we are applying ops and draining the buffer, but not fetching any new data (note &quot;repl buffer sizeBytes&quot; is declining from A to B), I imagine because the repl buffer is &quot;full enough&quot;.&lt;/li&gt;
	&lt;li&gt;At B we hit a lower bound threshold (I hypothesize) and decide to fetch more bytes. But by this time, since we haven&apos;t used the oplog query cursor for several hours (since A), it has timed out on the primary (I hypothesize), so we get CursorNotFound and have to restart the oplog query.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;This cycle happens repeatedly, but since that is happening both during the good state before the restart at C and after, this is not itself the cause of the problem after the restart.&lt;/p&gt;

&lt;p&gt;The underlying issue is that for some reason after the restart at C we are no longer regularly updating our timestamps (&quot;set timestamp calls&quot;) even though we are regularly applying batches. However we do occasionally update the timestamp (e.g D and E), and as you noted this occurs at the same time that we fetch some more oplog entries (as described above).&lt;/p&gt;

&lt;p&gt;Interestingly if we zoom in on the few seconds after restart we see something similar:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/231770/231770_zoom.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;From C to D there is a burst of oplog application because the node has been down for a bit and we have to catch up to the delay lag of 5 days. During this time we are in fact updating our timestamps and are not pinning data. Because we&apos;re applying ops faster than we do in steady state we are also doing regular getmores. But at D we&apos;re finished &quot;catching up&quot; and no longer need to do regular getmores for a while (a couple of hours actually). This is also when we stop updating our timestamps, even though we are still regularly applying ops and batches.&lt;/p&gt;

&lt;p&gt;This is the same behavior that we see later: when we next need to do getmores, we also update the timestamps.&lt;/p&gt;

&lt;p&gt;To summarize it looks like&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;after the restart we appear to only be updating timestamps when we are doing getmores to replenish the buffer, even though we are actively applying ops and batches&lt;/li&gt;
	&lt;li&gt;but before the restart we were updating the timestamps even when we didn&apos;t need to do getmores for several hours but were actively applying ops that we had buffered&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;I couldn&apos;t spot what might have changed after the restart that might have triggered this behavior. Maybe some race condition? Suggest forwarding to the replication team.&lt;/p&gt;</comment>
                            <comment id="2422551" author="onyxmaster" created="Tue, 17 Sep 2019 09:22:15 +0000"  >&lt;p&gt;Attaching logs:&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/attachment/230993/230993_mongod-new-logs-2.zip&quot; title=&quot;mongod-new-logs-2.zip attached to SERVER-43038&quot;&gt;mongod-new-logs-2.zip&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.mongodb.org/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&#160;and metrics (11th September got rotated out):&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/attachment/230994/230994_metrics-csrs-primary-2.zip&quot; title=&quot;metrics-csrs-primary-2.zip attached to SERVER-43038&quot;&gt;metrics-csrs-primary-2.zip&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.mongodb.org/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;</comment>
                            <comment id="2421722" author="daniel.hatcher" created="Mon, 16 Sep 2019 17:02:28 +0000"  >&lt;p&gt;Thank you for recovering the logs; they do appear complete. It would be very helpful to maintain the MongoDB default logs for the length of this investigation but I understand if that&apos;s not possible.&lt;/p&gt;

&lt;p&gt;I can see that the logical session cache is attempting a refresh on the secondary but it seems to do nothing until the node shuts down a day later. The config metrics and logs that you provided were the original Primary of the config replica set but at the same time as the start of the problem (09-12T08:31) it is restarted and &quot;c2.fs.drive.bru&quot; becomes the Primary. Do you have the logs and metrics for that node as well?&lt;/p&gt;</comment>
                            <comment id="2419722" author="onyxmaster" created="Fri, 13 Sep 2019 16:01:59 +0000"  >&lt;p&gt;I converted the logs from our system to something that looks like raw logs:&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/attachment/230747/230747_mongod-new-logs.zip&quot; title=&quot;mongod-new-logs.zip attached to SERVER-43038&quot;&gt;mongod-new-logs.zip&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.mongodb.org/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Please note that the timestamps in the logs differ: most of the times here are local (UTC+3), but the logs are in UTC.&lt;/p&gt;</comment>
                            <comment id="2419692" author="onyxmaster" created="Fri, 13 Sep 2019 15:44:58 +0000"  >&lt;p&gt;Daniel, that&apos;s my bad, slave delay is enabled and is 5 days (&quot;slaveDelay&quot; : NumberLong(432000)). I turned it off when diagnosing, my colleague turned it back on (we use this node as a fat-finger DR copy), but I forgot about it and assumed it was still turned off. Anyway, it was turned on before the restart. It&apos;s about 35 hours since the restart and it still spikes CPU.&lt;/p&gt;

&lt;p&gt;Attaching shard primary metrics:&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/attachment/230741/230741_metrics-shard-primary.zip&quot; title=&quot;metrics-shard-primary.zip attached to SERVER-43038&quot;&gt;metrics-shard-primary.zip&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.mongodb.org/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&#160;and CSRS primary metrics:&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/attachment/230742/230742_metrics-csrs-primary.zip&quot; title=&quot;metrics-csrs-primary.zip attached to SERVER-43038&quot;&gt;metrics-csrs-primary.zip&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.mongodb.org/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;.&lt;/p&gt;

&lt;p&gt;I&apos;ll try to get logs, but we do not store raw logs for more than 24 hours and ship them to a log processing facility which has no easy way of recovering raw logs.&lt;/p&gt;</comment>
                            <comment id="2419606" author="daniel.hatcher" created="Fri, 13 Sep 2019 15:12:54 +0000"  >&lt;p&gt;Can you please confirm that there is no slaveDelay configured in the replica set config? I see consistent 5 day &quot;lag&quot; for this node which would typically indicate a 5 day slaveDelay.&lt;/p&gt;

&lt;p&gt;Once the node restarted, it triggered a full refresh of the sharding metadata. 24 hours does seem like a long time for that refresh to be running but if the sharded collection is incredibly large it could take a while. Could you please provide the following covering the restart until now?&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;&lt;tt&gt;mongod&lt;/tt&gt; log + diagnostic.data for the node having the problem&lt;/li&gt;
	&lt;li&gt;&lt;tt&gt;mongod&lt;/tt&gt; log + diagnostic.data for the shard Primary&lt;/li&gt;
	&lt;li&gt;&lt;tt&gt;mongod&lt;/tt&gt; log + diagnostic.data for the config Primary&lt;/li&gt;
&lt;/ul&gt;
</comment>
                            <comment id="2419109" author="onyxmaster" created="Fri, 13 Sep 2019 09:30:24 +0000"  >&lt;p&gt;It appears that upgrade has nothing to do with slave delay. Reloading the service may put it into this &quot;bad&quot; state.&lt;/p&gt;

&lt;p&gt;I&apos;ve attached&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/attachment/230696/230696_mongo-metrics.zip&quot; title=&quot;mongo-metrics.zip attached to SERVER-43038&quot;&gt;mongo-metrics.zip&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.mongodb.org/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&#160;for the specified interval (see graph below). The server was restarted at 11:31 and the issue was back again. There is no slaveDelay configured this time, and no upgrade (4.0.12 before and after the restart). &lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/230697/230697_scorpius-cpu-new.png&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;</comment>
                            <comment id="2397896" author="onyxmaster" created="Wed, 28 Aug 2019 15:37:24 +0000"  >&lt;p&gt;Okay.&lt;/p&gt;</comment>
                            <comment id="2397520" author="daniel.hatcher" created="Wed, 28 Aug 2019 14:46:07 +0000"  >&lt;p&gt;I can see a rise in CPU but the timeframe in the diagnostic.data provided is too short to draw any real conclusions as to a root cause.  I propose that we close this ticket as Incomplete and when you encounter the problem again you can leave a comment here with the relevant data.&lt;/p&gt;</comment>
                            <comment id="2393793" author="onyxmaster" created="Wed, 28 Aug 2019 04:54:31 +0000"  >&lt;p&gt;Unfortunately we encountered this issue more than a week ago, and the node is already resynced and slave delay is turned off (we&#8217;re going to try turning it on a bit later), so there are no relevant FTDC data files that we can provide.&lt;/p&gt;</comment>
                            <comment id="2391346" author="daniel.hatcher" created="Tue, 27 Aug 2019 20:56:01 +0000"  >&lt;p&gt;Thank you for providing those files. In order for us to compare MongoDB&apos;s internal metrics over time, could you please attach a few hours of diagnostic.data covering before the upgrade of the node and a few hours after the upgrade?&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Depends</name>
                                            <outwardlinks description="depends on">
                                                        </outwardlinks>
                                                        </issuelinktype>
                            <issuelinktype id="10012">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="939901">SERVER-43632</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="231641" name="Screen Shot 2019-09-24 at 1.23.47 PM.png" size="192012" author="daniel.hatcher@mongodb.com" created="Tue, 24 Sep 2019 17:23:55 +0000"/>
                            <attachment id="233246" name="Screen Shot 2019-10-09 at 12.47.34 PM.png" size="288325" author="daniel.hatcher@mongodb.com" created="Wed, 9 Oct 2019 16:48:19 +0000"/>
                            <attachment id="228405" name="caches.log" size="1483638" author="onyxmaster" created="Mon, 26 Aug 2019 14:29:07 +0000"/>
                            <attachment id="230994" name="metrics-csrs-primary-2.zip" size="62537044" author="onyxmaster" created="Tue, 17 Sep 2019 09:22:22 +0000"/>
                            <attachment id="230742" name="metrics-csrs-primary.zip" size="87130838" author="onyxmaster" created="Fri, 13 Sep 2019 15:41:59 +0000"/>
                            <attachment id="230741" name="metrics-shard-primary.zip" size="95403120" author="onyxmaster" created="Fri, 13 Sep 2019 15:41:30 +0000"/>
                            <attachment id="228404" name="metrics.2019-08-16T12-02-49Z-00000" size="659378" author="onyxmaster" created="Mon, 26 Aug 2019 14:29:07 +0000"/>
                            <attachment id="230696" name="mongo-metrics.zip" size="78801213" author="onyxmaster" created="Fri, 13 Sep 2019 09:18:22 +0000"/>
                            <attachment id="228402" name="mongod-after.perf.data" size="29342488" author="onyxmaster" created="Mon, 26 Aug 2019 14:29:24 +0000"/>
                            <attachment id="228398" name="mongod-after.perf.txt" size="31062" author="onyxmaster" created="Mon, 26 Aug 2019 14:31:00 +0000"/>
                            <attachment id="228399" name="mongod-after.perf.txt" size="31062" author="onyxmaster" created="Mon, 26 Aug 2019 14:30:57 +0000"/>
                            <attachment id="228401" name="mongod-before.perf.data" size="1825628" author="onyxmaster" created="Mon, 26 Aug 2019 14:29:10 +0000"/>
                            <attachment id="230993" name="mongod-new-logs-2.zip" size="568289" author="onyxmaster" created="Tue, 17 Sep 2019 09:21:32 +0000"/>
                            <attachment id="230747" name="mongod-new-logs.zip" size="5191608" author="onyxmaster" created="Fri, 13 Sep 2019 16:01:39 +0000"/>
                            <attachment id="228403" name="mongod.log" size="303221" author="onyxmaster" created="Mon, 26 Aug 2019 14:29:08 +0000"/>
                            <attachment id="228400" name="mongodb-driveFS-files-1.conf" size="599" author="onyxmaster" created="Mon, 26 Aug 2019 14:30:43 +0000"/>
                            <attachment id="231769" name="overview.png" size="312710" author="bruce.lucas@mongodb.com" created="Wed, 25 Sep 2019 18:20:10 +0000"/>
                            <attachment id="228408" name="scorpius-cache.png" size="24630" author="onyxmaster" created="Mon, 26 Aug 2019 14:28:44 +0000"/>
                            <attachment id="230697" name="scorpius-cpu-new.png" size="53054" author="onyxmaster" created="Fri, 13 Sep 2019 09:20:09 +0000"/>
                            <attachment id="228407" name="scorpius-cpu.png" size="25012" author="onyxmaster" created="Mon, 26 Aug 2019 14:28:44 +0000"/>
                            <attachment id="228406" name="scorpius-cpu2.png" size="36209" author="onyxmaster" created="Mon, 26 Aug 2019 14:28:44 +0000"/>
                            <attachment id="231770" name="zoom.png" size="263812" author="bruce.lucas@mongodb.com" created="Wed, 25 Sep 2019 18:20:14 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                <customfield id="customfield_10050" key="com.atlassian.jira.toolkit:comments">
                        <customfieldname># Replies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>18.0</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_12751" key="com.atlassian.jira.plugin.system.customfieldtypes:multiselect">
                        <customfieldname>Assigned Teams</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="25128"><![CDATA[Replication]]></customfieldvalue>
    
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10055" key="com.atlassian.jira.ext.charting:firstresponsedate">
                        <customfieldname>Date of 1st Reply</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Tue, 27 Aug 2019 20:56:01 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10052" key="com.atlassian.jira.toolkit:dayslastcommented">
                        <customfieldname>Days since reply</customfieldname>
                        <customfieldvalues>
                                        4 years, 5 weeks, 4 days ago
    
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_18254" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Dependencies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue><![CDATA[<s><a href='https://jira.mongodb.org/browse/PM-1249'>PM-1249</a></s>]]></customfieldvalue>


                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_15850" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10057" key="com.atlassian.jira.toolkit:lastusercommented">
                        <customfieldname>Last comment by Customer</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>true</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10056" key="com.atlassian.jira.toolkit:lastupdaterorcommenter">
                        <customfieldname>Last commenter</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>alexander.golin@mongodb.com</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_11151" key="com.atlassian.jira.toolkit:LastCommentDate">
                        <customfieldname>Last public comment date</customfieldname>
                        <customfieldvalues>
                            4 years, 5 weeks, 4 days ago
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10032" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Operating System</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue key="10026"><![CDATA[ALL]]></customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10051" key="com.atlassian.jira.toolkit:participants">
                        <customfieldname>Participants</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>onyxmaster</customfieldvalue>
                            <customfieldvalue>backlog-server-repl</customfieldvalue>
                            <customfieldvalue>bruce.lucas@mongodb.com</customfieldvalue>
                            <customfieldvalue>daniel.hatcher@mongodb.com</customfieldvalue>
                            <customfieldvalue>tess.avitabile@mongodb.com</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_14254" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Product Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hvmtc7:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_12550" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>2|hvblfz:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10558" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10053" key="com.atlassian.jira.ext.charting:timeinstatus">
                        <customfieldname>Time In Status</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_22870" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Triagers</customfieldname>
                        <customfieldvalues>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_14350" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>serverRank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hvmflj:</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>