<!-- 
RSS generated by JIRA (9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66) at Thu Feb 08 04:41:35 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
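For instance, such a restricted request could be made as follows (a sketch: the
'jira.issueviews:issue-xml' path is the standard JIRA XML issue view, but the exact
path may differ on customized deployments):

```shell
# Fetch this issue as XML, returning only the issue key and summary fields.
curl -s 'https://jira.mongodb.org/si/jira.issueviews:issue-xml/SERVER-35958/SERVER-35958.xml?field=key&field=summary'
```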
-->
<rss version="0.92" >
<channel>
    <title>MongoDB Jira</title>
    <link>https://jira.mongodb.org</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.7.1</version>
        <build-number>970001</build-number>
        <build-date>13-04-2023</build-date>
    </build-info>


<item>
            <title>[SERVER-35958] Big CPU load increase (&#215;4) on secondary by upgrading 3.4.15 &#8594; 3.6.5</title>
                <link>https://jira.mongodb.org/browse/SERVER-35958</link>
                <project id="10000" key="SERVER">Core Server</project>
                    <description>&lt;p&gt;I just tried upgrading one of my RS from 3.4.15 to 3.6.5, thinking that since 4.0 is now released, 3.6 might be mature/stable enough. But I had a very bad surprise: the CPU load increased about 4 times on the secondary (I didn&apos;t try the primary of course, I don&apos;t want to lose my cluster yet). As you can see on the chart below, my mongo usage is very stable and the secondary usually stays at 5% of total server CPU on 3.4.15; as soon as I upgraded to 3.6.5 it jumped to 15-25%. The load stayed exactly the same on the primary, and so did the CPU usage.&lt;/p&gt;

&lt;p&gt;&#160;&lt;/p&gt;

&lt;p&gt;When I saw that I thought: maybe I should try re-syncing the secondary to benefit from the improvements of 3.6, so I did. The first time it filled the disk during the initial sync and crashed mongo, so I tried again; the second time it worked, but the load is exactly the same.&lt;/p&gt;

&lt;p&gt;I also tried going back to 3.6.0 and 3.6.1 to see where the regression was introduced, but neither booted, for reasons still unknown.&lt;/p&gt;

&lt;p&gt;Is this expected? Are you aware of any change that would cause this huge regression?&lt;/p&gt;

&lt;p&gt;This is my mongo cloud manager account monitoring the RS if it helps: &lt;a href=&quot;https://cloud.mongodb.com/v2/5012a0ac87d1d86fa8c22e64#metrics/replicaSet/5414cfabe4b0ce23e21b4b3b/overview&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://cloud.mongodb.com/v2/5012a0ac87d1d86fa8c22e64#metrics/replicaSet/5414cfabe4b0ce23e21b4b3b/overview&lt;/a&gt;&#160;&lt;/p&gt;</description>
                <environment></environment>
        <key id="567709">SERVER-35958</key>
            <summary>Big CPU load increase (&#215;4) on secondary by upgrading 3.4.15 &#8594; 3.6.5</summary>
                <type id="1" iconUrl="https://jira.mongodb.org/secure/viewavatar?size=xsmall&amp;avatarId=14703&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.mongodb.org/images/icons/priorities/major.svg">Major - P3</priority>
                        <status id="6" iconUrl="https://jira.mongodb.org/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="9">Done</resolution>
                                        <assignee username="geert.bosch@mongodb.com">Geert Bosch</assignee>
                                    <reporter username="bigbourin@gmail.com">Adrien Jarthon</reporter>
                        <labels>
                            <label>nonnyc</label>
                            <label>st-triage</label>
                            <label>storage-engines</label>
                    </labels>
                <created>Wed, 4 Jul 2018 07:15:50 +0000</created>
                <updated>Mon, 25 Mar 2019 22:24:26 +0000</updated>
                            <resolved>Mon, 25 Mar 2019 22:24:26 +0000</resolved>
                                    <version>3.6.5</version>
                                                    <component>Replication</component>
                    <component>WiredTiger</component>
                                        <votes>0</votes>
                                    <watches>24</watches>
                                                                                                                <comments>
                            <comment id="2191277" author="geert.bosch" created="Mon, 25 Mar 2019 22:23:53 +0000"  >&lt;p&gt;I went through all changes from 3.6.7 to 3.6.8, and cannot see any that might explain the performance improvement in this case. Anyway, glad to see that the issue has been resolved for you.&lt;/p&gt;</comment>
                            <comment id="2057911" author="bigbourin@gmail.com" created="Sat, 10 Nov 2018 09:29:06 +0000"  >&lt;p&gt;Hello, I just tried again the new version (3.6.8) on my secondary to see if it would improve thing for this matter, and I was glad to see it did:&lt;br/&gt;
&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/200661/200661_screenshot-9.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;It&apos;s not obvious on this chart when I changed from 3.4.15 to 3.6.8, which is a good sign (I changed on Nov 8 @ 9am). My 11am spike load is a bit higher, but nowhere near where it was in 3.6.5 to 3.6.7, and the load during the rest of the day is now slightly lower, so this is now totally acceptable to me.&lt;/p&gt;

&lt;p&gt;Good job! The question now is: how did you manage to fix this? There were no updates in this ticket nor in &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-36221&quot; title=&quot;[3.6] Performance regression on small updates to large documents&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-36221&quot;&gt;&lt;del&gt;SERVER-36221&lt;/del&gt;&lt;/a&gt;. Do you know which change could explain this?&lt;/p&gt;</comment>
                            <comment id="2011966" author="bigbourin@gmail.com" created="Mon, 24 Sep 2018 15:11:02 +0000"  >&lt;p&gt;Haha, I&apos;m not sure I get how there can be such huge regressions in majors more than 6 months old and how this could sound ok to release a &quot;stable&quot; version if you know about it but oh well... At least it&apos;s tracked now.&lt;/p&gt;

&lt;p&gt;As for the CPU load on the secondary, I would love to try 4.0.2, but I can&apos;t until my cluster is in 3.6 compatibility mode, can I? And I can&apos;t really switch to 3.6 mode until it&apos;s stable, so I&apos;ll have to wait until you fix 3.6 first. Or until the fix is backported, but I can&apos;t see BACKPORT-3280, so I don&apos;t know if it&apos;s deleted or private.&lt;/p&gt;</comment>
                            <comment id="2011920" author="bruce.lucas@10gen.com" created="Mon, 24 Sep 2018 14:50:08 +0000"  >&lt;p&gt;Thanks for the update Adrien.&lt;/p&gt;

&lt;p&gt;Regarding the increased i/o you are seeing, this is an issue that we are aware of and are working on, tracked by &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-37233&quot; title=&quot;Increase in disk i/o for writes to replica set	&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-37233&quot;&gt;&lt;del&gt;SERVER-37233&lt;/del&gt;&lt;/a&gt;. It is caused by a higher rate of log (journal) flushes in 3.6, and we see that in your data in the &quot;log flush operations&quot;:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/196987/196987_flush.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;The flushes are occurring as much as 1000 times per second, about once per write operation, resulting in a comparable number of separate i/o operations at the disk level, whereas in 3.4 flushes were occurring during comparable periods at about 30-50 times per second. You can follow &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-37233&quot; title=&quot;Increase in disk i/o for writes to replica set	&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-37233&quot;&gt;&lt;del&gt;SERVER-37233&lt;/del&gt;&lt;/a&gt; for updates on this issue.&lt;/p&gt;

&lt;p&gt;Regarding the CPU utilization issue you reported, we speculated previously that that could be partly due to  &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-36221&quot; title=&quot;[3.6] Performance regression on small updates to large documents&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-36221&quot;&gt;&lt;del&gt;SERVER-36221&lt;/del&gt;&lt;/a&gt;, and this issue looks to be fixed in 4.0.2. If you are in a position to test that version and compare results with 3.4 that would be helpful.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Bruce&lt;/p&gt;</comment>
                            <comment id="2007204" author="bigbourin@gmail.com" created="Tue, 18 Sep 2018 22:07:38 +0000"  >&lt;p&gt;Hello it&apos;s me again, after a SSD issue on my primary which killed the performance and thus my entire service as Mongo doesn&apos;t fallback to secondary in this case (unless with &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-29947&quot; title=&quot;Implement Storage Node Watchdog&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-29947&quot;&gt;&lt;del&gt;SERVER-29947&lt;/del&gt;&lt;/a&gt; maybe but that&apos;s out of reach of course) I was forced to stepDown manually an reboot the machine for SSD replacement, so I though I would take this occasion to test 3.6.7 on my Primary to see how bad the performance regression from 3.4 was on the Primary this time, and the result is as expected pretty bad:&lt;/p&gt;

&lt;p&gt;We can see the number of disk writes multiplied by about 6, which in my setup increased the global Disk IO usage from 5-8% to 17-22%:&lt;br/&gt;
&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/196571/196571_screenshot-7.png&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;This is of course under the exact same load:&lt;br/&gt;
&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/196572/196572_screenshot-8.png&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;This machine is more powerful than my secondary so it doesn&apos;t struggle as much and the CPU stays about the same, but we can clearly see that by upgrading to 3.6 I just lost 3/4 of my server capacity, which is totally unacceptable for a software upgrade that doesn&apos;t bring anything worth a 400% load increase in return.&lt;/p&gt;

&lt;p&gt;You should still have the diagnostic.data for my primary (web1) before the upgrade; I&apos;ve just uploaded a diagnostic.data for today under 3.6.7 so you can compare. Before 3pm UTC there were some exceptional tasks running (or replication stuff), but after 3pm UTC (when the load settles down on the chart) it should be comparable to a previous day on 3.4.&lt;/p&gt;</comment>
                            <comment id="1958677" author="bruce.lucas@10gen.com" created="Fri, 27 Jul 2018 16:10:44 +0000"  >&lt;p&gt;Thanks for the confirmation Adrien. We&apos;re continuing to look at these issues and will get back to you.&lt;/p&gt;</comment>
                            <comment id="1958557" author="bigbourin@gmail.com" created="Fri, 27 Jul 2018 15:10:42 +0000"  >&lt;p&gt;Indeed this chart shows pretty well what I&apos;m noticing. About &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-36221&quot; title=&quot;[3.6] Performance regression on small updates to large documents&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-36221&quot;&gt;&lt;del&gt;SERVER-36221&lt;/del&gt;&lt;/a&gt; it might be part of the reason as some of my writes are small updates to big documents (100k - 5M). If there are any options or experimental build to try let me know.&lt;/p&gt;</comment>
                            <comment id="1958355" author="bruce.lucas@10gen.com" created="Fri, 27 Jul 2018 13:34:02 +0000"  >&lt;p&gt;Here&apos;s a side-by-side comparison showing 3.4.15, 3.6.5 (maximum batch size 50,000), and 3.6.6 (at a batch size of 1000), showing about 14 hours, in each case including the daily update batch job.&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/192803/192803_comparison.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;This confirms that 3.6.6 with the reduced replication batch size has decreased the CPU utilization during the daily update job. However overall CPU utilization remains 2-3x that of 3.4.15, and is erratic.&lt;/p&gt;

&lt;p&gt;It appears that 3.6.6 may be worse than 3.6.5, so I wonder if this is related to &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-36221&quot; title=&quot;[3.6] Performance regression on small updates to large documents&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-36221&quot;&gt;&lt;del&gt;SERVER-36221&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="1957885" author="bruce.lucas@10gen.com" created="Thu, 26 Jul 2018 20:20:43 +0000"  >&lt;p&gt;Thanks &lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=bigbourin%40gmail.com&quot; class=&quot;user-hover&quot; rel=&quot;bigbourin@gmail.com&quot;&gt;bigbourin@gmail.com&lt;/a&gt;, we can see the increased and more erratic CPU utilization and are taking a look and discussing it internally. We&apos;ll get back to you with our findings.&lt;/p&gt;</comment>
                            <comment id="1955963" author="bigbourin@gmail.com" created="Wed, 25 Jul 2018 07:26:00 +0000"  >&lt;p&gt;Yes of course you can&apos;t compare the two, but you can compare the secondary with previous days running other version as the load is the same each day as said earlier.&lt;/p&gt;

&lt;p&gt;The spikes every hour on web1 are mongodumps, a read-only load, which is why they don&apos;t appear on web2.&lt;br/&gt;
The problem is still that 3.6.6 as a secondary consumes 3 to 4 times more CPU than 3.4.15 for the same query load, which is a huge regression.&lt;/p&gt;</comment>
                            <comment id="1955930" author="alexander.gorrod" created="Wed, 25 Jul 2018 05:57:33 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=bigbourin%40gmail.com&quot; class=&quot;user-hover&quot; rel=&quot;bigbourin@gmail.com&quot;&gt;bigbourin@gmail.com&lt;/a&gt; it&apos;s difficult to compare the web1 and web2 nodes you uploaded diagnostic data for, since one is a primary and the other a secondary. On the other hand - when I load the data it shows the node running 3.6.6 (secondary, called web2) as using consistently less CPU. The node running 3.4.9 has a spiky CPU usage pattern - but it increases along with an increase in queries that isn&apos;t seen on the secondary.&lt;/p&gt;

&lt;p&gt;The following picture shows the statistics:&lt;/p&gt;

&lt;p&gt; &lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/192595/192595_server35958.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt; &lt;/p&gt;
</comment>
                            <comment id="1955697" author="bigbourin@gmail.com" created="Tue, 24 Jul 2018 21:45:43 +0000"  >&lt;p&gt;Ok here is some feedback, so as said earlier I tried 3.6.6, and the tried with a batchsize of 1000 and then with 200 to see, here is the schedule of the changes:&lt;br/&gt;
07/13 17:46 (UTC+2) &#8594; 5000 batch size (3.6.6)&lt;br/&gt;
07/17 18:35 (UTC+2) &#8594; 1000 batch size&lt;br/&gt;
07/23 09:35 (UTC+2) &#8594; 200 batch size&lt;/p&gt;

&lt;p&gt;The diagnostic.data has been uploaded to the portal (though it doesn&apos;t go back to 07/13, you have that data in the previous upload).&lt;br/&gt;
Here is the CPU chart:&lt;br/&gt;
&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;a id=&quot;192581_thumb&quot; href=&quot;https://jira.mongodb.org/secure/attachment/192581/192581_screenshot-5.png&quot; title=&quot;screenshot-5.png&quot; file-preview-type=&quot;image&quot; file-preview-id=&quot;192581&quot; file-preview-title=&quot;screenshot-5.png&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/thumbnail/192581/_thumb_192581.png&quot; style=&quot;border: 0px solid black&quot; role=&quot;presentation&quot;/&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;We can see 3.6.6 had little impact, as said earlier. When changing to a batch size of 1000 the impact is more visible: the load during the 11am spike is lower, though the load during the rest of the day seems less stable and a bit higher. Decreasing even further to a batch size of 200 has little to no visible impact IMO.&lt;/p&gt;

&lt;p&gt;We&apos;re still far from the low and stable load of 3.4.15 &#9785;&lt;/p&gt;</comment>
                            <comment id="1949331" author="bruce.lucas@10gen.com" created="Tue, 17 Jul 2018 16:57:43 +0000"  >&lt;p&gt;Thanks.&lt;/p&gt;

&lt;p&gt;The parameter isn&apos;t relevant to this issue because in 3.4.15 we didn&apos;t save history in cache during replication batch application like we do in 3.6.&lt;/p&gt;</comment>
                            <comment id="1949319" author="bigbourin@gmail.com" created="Tue, 17 Jul 2018 16:45:00 +0000"  >&lt;p&gt;Yep, will do! what was the value for this parameter in 3.4.15? or maybe it&apos;s not relevant?&lt;/p&gt;</comment>
                            <comment id="1949312" author="bruce.lucas@10gen.com" created="Tue, 17 Jul 2018 16:39:05 +0000"  >&lt;p&gt;Thanks Adrien. Please also remember to upload the diagnostic.data covering both that and replBatchLimitOperations=1000 run when that has finished.&lt;/p&gt;</comment>
                            <comment id="1949308" author="bigbourin@gmail.com" created="Tue, 17 Jul 2018 16:36:17 +0000"  >&lt;p&gt;Ok here is the result with 3.6.6 (2G cache), it&apos;s better than 3.6.5 as the secondary now survives the 11am burst, we can see that it lasts less time with this version, because it manages to follow the primary faster and accumulates less lag. But unfortunately it&apos;s still saturating at 60%+ CPU and generating some lag compared to 3.4.15 which tops at 20%.&lt;br/&gt;
&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/191945/191945_screenshot-4.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;I&apos;ll now try with BatchLimitOperations=1000:&lt;/p&gt;
&lt;p/&gt;
&lt;div id=&quot;syntaxplugin&quot; class=&quot;syntaxplugin&quot; style=&quot;border: 1px dashed #bbb; border-radius: 5px !important; overflow: auto; max-height: 30em;&quot;&gt;
&lt;table cellspacing=&quot;0&quot; cellpadding=&quot;0&quot; border=&quot;0&quot; width=&quot;100%&quot; style=&quot;font-size: 1em; line-height: 1.4em !important; font-weight: normal; font-style: normal; color: black;&quot;&gt;
		&lt;tbody &gt;
				&lt;tr id=&quot;syntaxplugin_code_and_gutter&quot;&gt;
						&lt;td  style=&quot; line-height: 1.4em !important; padding: 0em; vertical-align: top;&quot;&gt;
					&lt;pre style=&quot;font-size: 1em; margin: 0 10px;  margin-top: 10px;   width: auto; padding: 0;&quot;&gt;&lt;span style=&quot;color: black; font-family: &apos;Consolas&apos;, &apos;Bitstream Vera Sans Mono&apos;, &apos;Courier New&apos;, Courier, monospace !important;&quot;&gt;&amp;gt; db.adminCommand({getParameter: 1, &quot;replBatchLimitOperations&quot;: 1})&lt;/span&gt;&lt;/pre&gt;
			&lt;/td&gt;
		&lt;/tr&gt;
				&lt;tr id=&quot;syntaxplugin_code_and_gutter&quot;&gt;
						&lt;td  style=&quot; line-height: 1.4em !important; padding: 0em; vertical-align: top;&quot;&gt;
					&lt;pre style=&quot;font-size: 1em; margin: 0 10px;   width: auto; padding: 0;&quot;&gt;&lt;span style=&quot;color: black; font-family: &apos;Consolas&apos;, &apos;Bitstream Vera Sans Mono&apos;, &apos;Courier New&apos;, Courier, monospace !important;&quot;&gt;{ &quot;replBatchLimitOperations&quot; : 1000, &quot;ok&quot; : 1 }&lt;/span&gt;&lt;/pre&gt;
			&lt;/td&gt;
		&lt;/tr&gt;
				&lt;tr id=&quot;syntaxplugin_code_and_gutter&quot;&gt;
						&lt;td  style=&quot; line-height: 1.4em !important; padding: 0em; vertical-align: top;&quot;&gt;
					&lt;pre style=&quot;font-size: 1em; margin: 0 10px;   margin-bottom: 10px;  width: auto; padding: 0;&quot;&gt;&lt;span style=&quot;color: black; font-family: &apos;Consolas&apos;, &apos;Bitstream Vera Sans Mono&apos;, &apos;Courier New&apos;, Courier, monospace !important;&quot;&gt;rs0:SECONDARY&amp;gt;&lt;/span&gt;&lt;/pre&gt;
			&lt;/td&gt;
		&lt;/tr&gt;
			&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p/&gt;</comment>
                            <comment id="1946993" author="bigbourin@gmail.com" created="Fri, 13 Jul 2018 17:48:49 +0000"  >&lt;p&gt;No problem &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.mongodb.org/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/p&gt;

&lt;p&gt;Indeed that&apos;s a good lead, I&apos;ve just installed 3.6.6 (with 2G cache limit to have comparable metrics) and will let you know!&lt;/p&gt;</comment>
                            <comment id="1946984" author="bruce.lucas@10gen.com" created="Fri, 13 Jul 2018 17:38:41 +0000"  >&lt;p&gt;Hi Adrien,&lt;/p&gt;

&lt;p&gt;Sorry about that, I was misreading the data and your rs config as you say indicates you are already on pv1, so that potential issue is ruled out.&lt;/p&gt;

&lt;p&gt;The next issue we suspect is &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-34938&quot; title=&quot;Secondary slowdown or hang due to content pinned in cache by single oplog batch&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-34938&quot;&gt;&lt;del&gt;SERVER-34938&lt;/del&gt;&lt;/a&gt;. We have put a mitigation for this issue in place in 3.6.6, which is to reduce the maximum replication batch size from 50,000 to 5,000 operations. Based on the metrics it appears that you have indeed built up sufficient lag that replication batches have reached the 50,000 operation limit, so you may see a significant improvement in 3.6.6 with the lower limit. Would you be able to do the following:&lt;/p&gt;
&lt;ul&gt;
	&lt;li&gt;Try again with a secondary on 3.6.6 at the default setting, particularly being sure to catch the daily update job, and let us know your observations, as well as uploading diagnostic.data.&lt;/li&gt;
	&lt;li&gt;Then on a subsequent day try again with 3.6.6 specifying --replBatchLimitOperations=1000 to reduce the maximum batch size further, again giving us your observations and diagnostic.data.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;We want to determine whether the smaller batch size addresses the problem you&apos;re seeing, and also verify that it doesn&apos;t create any other problems.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Bruce&lt;/p&gt;</comment>
                            <comment id="1945910" author="bigbourin@gmail.com" created="Thu, 12 Jul 2018 18:48:14 +0000"  >&lt;p&gt;You&apos;re welcome.&lt;/p&gt;

&lt;p&gt;First of all, my test with more memory (3.5G) showed that it&apos;s a bit better: the secondary (3.6.5) survived my 11am burst for the first time, but it&apos;s still wayyy slower than 3.4.15:&lt;br/&gt;
 &lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/191701/191701_screenshot-3.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;About the protocol version, it seems I&apos;m already on pv1:&lt;/p&gt;
&lt;p/&gt;
&lt;div id=&quot;syntaxplugin&quot; class=&quot;syntaxplugin&quot; style=&quot;border: 1px dashed #bbb; border-radius: 5px !important; overflow: auto; max-height: 30em;&quot;&gt;
&lt;table cellspacing=&quot;0&quot; cellpadding=&quot;0&quot; border=&quot;0&quot; width=&quot;100%&quot; style=&quot;font-size: 1em; line-height: 1.4em !important; font-weight: normal; font-style: normal; color: black;&quot;&gt;
		&lt;tbody &gt;
				&lt;tr id=&quot;syntaxplugin_code_and_gutter&quot;&gt;
						&lt;td  style=&quot; line-height: 1.4em !important; padding: 0em; vertical-align: top;&quot;&gt;
					&lt;pre style=&quot;font-size: 1em; margin: 0 10px;  margin-top: 10px;   width: auto; padding: 0;&quot;&gt;&lt;span style=&quot;color: black; font-family: &apos;Consolas&apos;, &apos;Bitstream Vera Sans Mono&apos;, &apos;Courier New&apos;, Courier, monospace !important;&quot;&gt;rs0:PRIMARY&amp;gt; rs.config().protocolVersion&lt;/span&gt;&lt;/pre&gt;
			&lt;/td&gt;
		&lt;/tr&gt;
				&lt;tr id=&quot;syntaxplugin_code_and_gutter&quot;&gt;
						&lt;td  style=&quot; line-height: 1.4em !important; padding: 0em; vertical-align: top;&quot;&gt;
					&lt;pre style=&quot;font-size: 1em; margin: 0 10px;   margin-bottom: 10px;  width: auto; padding: 0;&quot;&gt;&lt;span style=&quot;color: black; font-family: &apos;Consolas&apos;, &apos;Bitstream Vera Sans Mono&apos;, &apos;Courier New&apos;, Courier, monospace !important;&quot;&gt; NumberLong(&lt;/span&gt;&lt;span style=&quot;color: #009900; font-family: &apos;Consolas&apos;, &apos;Bitstream Vera Sans Mono&apos;, &apos;Courier New&apos;, Courier, monospace !important;&quot;&gt;1&lt;/span&gt;&lt;span style=&quot;color: black; font-family: &apos;Consolas&apos;, &apos;Bitstream Vera Sans Mono&apos;, &apos;Courier New&apos;, Courier, monospace !important;&quot;&gt;)&lt;/span&gt;&lt;/pre&gt;
			&lt;/td&gt;
		&lt;/tr&gt;
			&lt;/tbody&gt;
&lt;/table&gt;
&lt;/div&gt;
&lt;p/&gt;
&lt;p&gt;Here is the secondary log if you want to check:&#160;&lt;span class=&quot;nobr&quot;&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/attachment/191703/191703_mongodb.log.gz&quot; title=&quot;mongodb.log.gz attached to SERVER-35958&quot;&gt;mongodb.log.gz&lt;sup&gt;&lt;img class=&quot;rendericon&quot; src=&quot;https://jira.mongodb.org/images/icons/link_attachment_7.gif&quot; height=&quot;7&quot; width=&quot;7&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;&lt;/sup&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;</comment>
                            <comment id="1945647" author="bruce.lucas@10gen.com" created="Thu, 12 Jul 2018 15:51:15 +0000"  >&lt;p&gt;Thanks for the additional data Adrien.&lt;/p&gt;

&lt;p&gt;I think you are correct that cache pressure is an issue here, and would not be surprised if increasing the cache size improves the behavior. There are a few factors that may be contributing to cache pressure, and we would like to eliminate those one at a time to see whether it improves the behavior.&lt;/p&gt;

&lt;p&gt;The first relates to the use of protocol version 0 (pv0) with 3.6. This is not an ideal combination, as it requires keeping additional update history in the cache on the secondary, and could be the cause of what you&apos;re seeing. The default protocol version for new 3.6 replica sets is pv1, but replica sets created under prior versions will continue to use pv0 until explicitly upgraded. You can confirm that your replica set is using pv0 by looking in the mongod log for the 3.6 node; there will be a startup warning. If you don&apos;t see that, can you please upload the mongod log file for 3.6 so we can take a look? If you do see the warning can you please &lt;a href=&quot;https://docs.mongodb.com/manual/reference/replica-set-protocol-versions/#modify-replica-set-protocol-version&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;upgrade to pv1&lt;/a&gt; and try running the secondary on 3.6 again and then upload diagnostic.data once the update job has finished? Ideally this would use the same cache size as before to get a fair comparison.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
 Bruce&lt;/p&gt;</comment>
                            <comment id="1944472" author="bigbourin@gmail.com" created="Wed, 11 Jul 2018 16:07:36 +0000"  >&lt;p&gt;Ah yes indeed, I just uploaded 2 new archives.&lt;br/&gt;
For the record, I noticed that the CPU load in my last test with 3.6.5 was lower than in my first tests, and I believe this is because I increased the WT cache size limit from 1G to 2G, so I&apos;m currently trying with a higher cache size (~4G) to see if it&apos;s any better. This could maybe help pinpoint a memory usage increase causing higher CPU load due to cache stress (eviction, etc.). I&apos;ll let you know tomorrow after the 11am burst if it survives =p&lt;/p&gt;
</comment>
                            <comment id="1944433" author="bruce.lucas@10gen.com" created="Wed, 11 Jul 2018 15:44:10 +0000"  >&lt;p&gt;Hi Adrien,&lt;/p&gt;

&lt;p&gt;The two-day comparison on different versions will be useful since as you say the load is constant. Can you please upload the most recent diagnostic.data from primary and secondary, which should include July 9th? The latest upload we have is from July 8th so it doesn&apos;t include that data.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Bruce&lt;/p&gt;</comment>
                            <comment id="1944109" author="bigbourin@gmail.com" created="Wed, 11 Jul 2018 12:32:15 +0000"  >&lt;p&gt;Thanks for having a look, I can&apos;t show you with two secondaries because I only have one (third member is an arbiter). But I can clearly show you the difference between both version as my workload on the primary is extremely stable, it&apos;s the same each day. (due to the nature of the service, it&apos;s updown.io, a monitoring service so the mongo traffic is 99% period requests)&lt;/p&gt;

&lt;p&gt;I made a new chart by overlapping two days (Jul 8 and Jul 9), showing the CPU usage difference between 3.4 and 3.6 during the night and during the 11am spike:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/191573/191573_mongo-performance.png&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;br/&gt;
&lt;em&gt;Ignore the 50% flat blue line on the right, it&apos;s a ruby process which loops when mongo stops and has the same blue color as the mongo server when I overlap both charts.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Before 2am it&apos;s different because it was re-syncing, but after 2am you can see the CPU difference during regular load, which as you said is about 2 times higher and &lt;b&gt;much&lt;/b&gt; less stable.&lt;/p&gt;

&lt;p&gt;Then at 11am we can see the CPU activity during a higher load, the workload on the primary is the same and here the CPU activity on the secondary is not only &lt;b&gt;4 times higher&lt;/b&gt; (~18% &#8594; 64%), but it &lt;b&gt;doesn&apos;t even manage to follow&lt;/b&gt; the primary, gets out of sync and stops. Which means that it would require even &lt;b&gt;more&lt;/b&gt; than 4 times the CPU to be able to follow the same load. &lt;em&gt;(the reason why the machine is saturating at ~70% CPU is because it&apos;s a VM with stolen CPU)&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You can see this in the diagnostic.data files of July 8 for 3.6.5 and July 9 for 3.4.15, the load spike starts at 11am UTC+2 every day and lasts for about 1h. And you can see the workload charts in my cloud manager account I guess? (&lt;a href=&quot;https://cloud.mongodb.com/v2/5012a0ac87d1d86fa8c22e64#host/replicaSet/5414cfabe4b0ce23e21b4b3b&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://cloud.mongodb.com/v2/5012a0ac87d1d86fa8c22e64#host/replicaSet/5414cfabe4b0ce23e21b4b3b&lt;/a&gt;), otherwise here is a chart showing the stability of the primary workload during July 8 and 9 (spike is the 11am burst):&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/191574/191574_screenshot-2.png&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;</comment>
                            <comment id="1942337" author="bruce.lucas@10gen.com" created="Mon, 9 Jul 2018 19:30:32 +0000"  >&lt;p&gt;Hi Adrien,&lt;/p&gt;

&lt;p&gt;Thanks for the data you have collected so far, and thanks for the offer to continue helping us diagnose what you are seeing.&lt;/p&gt;

&lt;p&gt;In the data that you&apos;ve uploaded, the most notable correlate with higher CPU utilization is secondary lag:&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/191414/191414_lag.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;This is seen at A-B (following initial sync) and C-D and E-F (following some amount of downtime). When the secondary is lagged it will be doing extra work to catch up, which we can see in the &quot;repl apply ops&quot; metric, and this will require more CPU.&lt;/p&gt;

&lt;p&gt;After the lag catch-up CPU utilization has been accounted for we still see a residual CPU increase of about double, going from 4-5% to 8-10%. In absolute terms this is not a large increase, but we will look into it further.&lt;/p&gt;

&lt;p&gt;However you have indicated the possibility of some other effects which are larger, e.g. related to the daily update job, but I don&apos;t think we have clean enough data to investigate that. Would you be willing to collect data with one secondary on 3.6.5 and the other on 3.4.15 for comparison, for at least 24 hours? If you need to restart a secondary to get it on 3.6.5 please try to minimize the downtime between restarts to minimize the amount of lag that the secondary has to catch up to. Also please try to restart it during a period of steady load so it has plenty of time to stabilize before any unusual load such as the daily update job. If there are any events you want to call our attention to please give us a timeline with as accurate times as you can, including dates and timezone.&lt;/p&gt;

&lt;p&gt;Once you have 24 hours of data on 3.6.5, please upload the contents of diagnostic.data from all three nodes. Also, if you can upload the CPU charts from both secondaries that would be useful; as you can see from above we collect CPU information in diagnostic.data, but it will be good to have independent confirmation of that from your charts.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Bruce&lt;/p&gt;</comment>
                            <comment id="1941436" author="bigbourin@gmail.com" created="Sun, 8 Jul 2018 22:44:21 +0000"  >&lt;p&gt;Ok, to help you pinpoint the issue I erased the data and went back to 3.4.15 to make the upgrade again one version at a time and see the impact. I uploaded the new diagnostic.data files covering all these changes to the portal. Here are the upgrades I made since last time:&lt;br/&gt;
Jul 05 @ 00:47: 3.6.1&lt;br/&gt;
Jul 05 @ 08:50: 3.6.2&lt;br/&gt;
Jul 05 @ 13:20: 3.6.3&lt;br/&gt;
Jul 05 @ 20:00: 3.6.4&lt;br/&gt;
Jul 06 @ 10:34: 3.6.5&lt;br/&gt;
Jul 06 @ 18:50: 3.4.15&lt;br/&gt;
Jul 06 @ 21:31: 3.6.5&lt;br/&gt;
Jul 09 @ 00:30: 3.4.15&lt;/p&gt;

&lt;p&gt;Here&apos;s the CPU chart:&lt;br/&gt;
&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/191352/191352_screenshot-1.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;All these upgrades showed that the regression seems to be entirely in 3.4.15 &#8594; 3.6.0, as the CPU level is basically the same across all the 3.6.x versions. It looks a bit lower than in my first tests (Jul 04); I&apos;m not sure why, as the Jul 04 test was from re-synced data already. But it&apos;s definitely way higher than in 3.4.15: the re-sync is slower and takes much more CPU, and the update burst I have every morning at 11am, which used to take the secondary CPU to 20%, is enough in 3.6.x to push the CPU to 100% and knock the secondary out of sync...&lt;/p&gt;

&lt;p&gt;Let me know if I can do anything else to help you diagnose this; in the meantime I&apos;m going back to 3.4.15, as the upgrade is not possible.&lt;/p&gt;</comment>
                            <comment id="1938893" author="bigbourin@gmail.com" created="Wed, 4 Jul 2018 19:24:54 +0000"  >&lt;p&gt;Sure, that&apos;s uploaded on the portal. For the secondary I forgot to keep the diagnostic folder when I resynced today so it only starts at ~13:00 but that&apos;s enough to see 3.4.15 and 3.6.0. &lt;/p&gt;</comment>
                            <comment id="1938891" author="ramon.fernandez" created="Wed, 4 Jul 2018 19:14:23 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=bigbourin%40gmail.com&quot; class=&quot;user-hover&quot; rel=&quot;bigbourin@gmail.com&quot;&gt;bigbourin@gmail.com&lt;/a&gt;, will you please upload the contents of the &lt;tt&gt;diagnostic.data&lt;/tt&gt; directories for the primary and the affected secondary? You can attach them to the ticket if they&apos;re not too large, or upload them via this &lt;a href=&quot;https://10gen-httpsupload.s3.amazonaws.com/upload_forms/f0ae78c6-8497-44ee-b6a5-308f6544f398.html&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;secure portal&lt;/a&gt;. Please comment on the ticket when you&apos;re done so we can investigate.&lt;/p&gt;

&lt;p&gt;Thanks,&lt;br/&gt;
Ram&#243;n.&lt;/p&gt;</comment>
                            <comment id="1938884" author="bigbourin@gmail.com" created="Wed, 4 Jul 2018 18:56:58 +0000"  >&lt;p&gt;Here is some more info. I let 3.6.5 run for a while, and this morning at 11am I have a cron which does a lot of mongo updates; this usually takes the secondary CPU load from 6% to ~15%. You can see on the following graph that with 3.6.5 it went from 20% to 80%; the secondary started lagging as it couldn&apos;t keep up, at some point lost the oplog race, and I had to resync it.&lt;/p&gt;

&lt;p&gt;To help you find the regression more easily I decided to resync it with 3.4.15, first to see if I observe the same load difference, and then to do the upgrade one version at a time. You can see that at&#160;13:30 I started the resync, which finished at 15:00, and the load was back to the more &quot;normal&quot; and stable levels I&apos;m used to. I then tried upgrading to 3.6.0 at 18:00; you can see the resync took a lot more CPU, and then the load on the server is lower than on 3.6.5 but still higher and much less stable than on 3.4.15.&lt;/p&gt;

&lt;p&gt;I&apos;ll let it stabilize a bit to get better average numbers and then continue with 3.6.1.&lt;/p&gt;

&lt;p&gt;I can provide the diagnostic.data if that is of any help.&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/191149/191149_image-2018-07-04-20-49-54-349.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;BTW I had some trouble starting 3.6.0 due to a bug with &quot;bindIpAll&quot; which makes the server start but not answer any connections (and the log says it couldn&apos;t bind the port and shut down, but it&apos;s still running...). This doesn&apos;t seem to be present in 3.6.5, so I&apos;m considering it a fixed bug ^^&lt;/p&gt;</comment>
                            <comment id="1938632" author="bigbourin@gmail.com" created="Wed, 4 Jul 2018 07:17:41 +0000"  >&lt;p&gt;Here is the image that I couldn&apos;t upload at ticket creation:&lt;br/&gt;
&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/191119/191119_mongo-3.6-upgrade.png&quot; width=&quot;100%&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;And another one showing the I/O doubled too (same bandwidth but twice the IOPS):&lt;br/&gt;
&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/attachment/191120/191120_Screenshot+from+2018-07-04+08-53-16.png&quot; style=&quot;border: 0px solid black&quot; /&gt;&lt;/span&gt;&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Depends</name>
                                            <outwardlinks description="depends on">
                                        <issuelink>
            <issuekey id="542268">SERVER-34938</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="492584">WT-3894</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                            <issuelinktype id="10012">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="574166">SERVER-36221</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="607740">SERVER-37233</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="191120" name="Screenshot from 2018-07-04 08-53-16.png" size="30114" author="bigbourin@gmail.com" created="Wed, 4 Jul 2018 07:16:28 +0000"/>
                            <attachment id="192803" name="comparison.png" size="368160" author="bruce.lucas@mongodb.com" created="Fri, 27 Jul 2018 13:29:56 +0000"/>
                            <attachment id="191419" name="eviction.png" size="171153" author="bruce.lucas@mongodb.com" created="Mon, 9 Jul 2018 19:39:21 +0000"/>
                            <attachment id="196987" name="flush.png" size="285059" author="bruce.lucas@mongodb.com" created="Mon, 24 Sep 2018 14:49:05 +0000"/>
                            <attachment id="191149" name="image-2018-07-04-20-49-54-349.png" size="56697" author="bigbourin@gmail.com" created="Wed, 4 Jul 2018 18:49:55 +0000"/>
                            <attachment id="191414" name="lag.png" size="125730" author="bruce.lucas@mongodb.com" created="Mon, 9 Jul 2018 19:29:35 +0000"/>
                            <attachment id="191119" name="mongo-3.6-upgrade.png" size="59201" author="bigbourin@gmail.com" created="Wed, 4 Jul 2018 07:15:59 +0000"/>
                            <attachment id="191573" name="mongo-performance.png" size="52336" author="bigbourin@gmail.com" created="Wed, 11 Jul 2018 12:16:03 +0000"/>
                            <attachment id="191703" name="mongodb.log.gz" size="1147625" author="bigbourin@gmail.com" created="Thu, 12 Jul 2018 18:48:09 +0000"/>
                            <attachment id="191623" name="psa.png" size="379626" author="bruce.lucas@mongodb.com" created="Wed, 11 Jul 2018 19:22:40 +0000"/>
                            <attachment id="192835" name="puzzle.png" size="426876" author="bruce.lucas@mongodb.com" created="Fri, 27 Jul 2018 16:07:24 +0000"/>
                            <attachment id="191352" name="screenshot-1.png" size="71536" author="bigbourin@gmail.com" created="Sun, 8 Jul 2018 22:39:00 +0000"/>
                            <attachment id="191574" name="screenshot-2.png" size="16154" author="bigbourin@gmail.com" created="Wed, 11 Jul 2018 12:30:12 +0000"/>
                            <attachment id="191701" name="screenshot-3.png" size="53594" author="bigbourin@gmail.com" created="Thu, 12 Jul 2018 18:39:35 +0000"/>
                            <attachment id="191945" name="screenshot-4.png" size="56275" author="bigbourin@gmail.com" created="Tue, 17 Jul 2018 16:26:41 +0000"/>
                            <attachment id="192581" name="screenshot-5.png" size="70104" author="bigbourin@gmail.com" created="Tue, 24 Jul 2018 21:26:50 +0000"/>
                            <attachment id="196570" name="screenshot-6.png" size="25799" author="bigbourin@gmail.com" created="Tue, 18 Sep 2018 21:47:55 +0000"/>
                            <attachment id="196571" name="screenshot-7.png" size="37955" author="bigbourin@gmail.com" created="Tue, 18 Sep 2018 21:49:03 +0000"/>
                            <attachment id="196572" name="screenshot-8.png" size="24291" author="bigbourin@gmail.com" created="Tue, 18 Sep 2018 21:51:13 +0000"/>
                            <attachment id="200661" name="screenshot-9.png" size="21640" author="bigbourin@gmail.com" created="Sat, 10 Nov 2018 09:22:10 +0000"/>
                            <attachment id="192595" name="server35958.png" size="258291" author="alexander.gorrod@mongodb.com" created="Wed, 25 Jul 2018 05:57:24 +0000"/>
                            <attachment id="191465" name="timestamps_pinned.png" size="124143" author="sulabh.mahajan@mongodb.com" created="Tue, 10 Jul 2018 07:28:03 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                <customfield id="customfield_10050" key="com.atlassian.jira.toolkit:comments">
                        <customfieldname># Replies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>29.0</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_18555" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname># of Sprints</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1.0</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_10055" key="com.atlassian.jira.ext.charting:firstresponsedate">
                        <customfieldname>Date of 1st Reply</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Wed, 4 Jul 2018 19:14:23 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10052" key="com.atlassian.jira.toolkit:dayslastcommented">
                        <customfieldname>Days since reply</customfieldname>
                        <customfieldvalues>
                                        4 years, 46 weeks, 2 days ago
    
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_18254" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Dependencies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue><![CDATA[<s><a href='https://jira.mongodb.org/browse/WT-3894'>WT-3894</a></s>, <s><a href='https://jira.mongodb.org/browse/SERVER-34938'>SERVER-34938</a></s>]]></customfieldvalue>


                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_15850" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_10057" key="com.atlassian.jira.toolkit:lastusercommented">
                        <customfieldname>Last comment by Customer</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>true</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10056" key="com.atlassian.jira.toolkit:lastupdaterorcommenter">
                        <customfieldname>Last commenter</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>geert.bosch@mongodb.com</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_11151" key="com.atlassian.jira.toolkit:LastCommentDate">
                        <customfieldname>Last public comment date</customfieldname>
                        <customfieldvalues>
                            4 years, 46 weeks, 2 days ago
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                    <customfield id="customfield_10032" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Operating System</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10026"><![CDATA[ALL]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                <customfield id="customfield_10051" key="com.atlassian.jira.toolkit:participants">
                        <customfieldname>Participants</customfieldname>
                        <customfieldvalues>
                                        <customfieldvalue>bigbourin@gmail.com</customfieldvalue>
            <customfieldvalue>alexander.gorrod@mongodb.com</customfieldvalue>
            <customfieldvalue>bruce.lucas@mongodb.com</customfieldvalue>
            <customfieldvalue>geert.bosch@mongodb.com</customfieldvalue>
            <customfieldvalue>ramon.fernandez@mongodb.com</customfieldvalue>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_14254" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Product Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hu1zkf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                <customfield id="customfield_12550" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>2|htsq9b:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10558" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_23361" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Requested By</customfieldname>
                        <customfieldvalues>
                                

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10557" key="com.pyxis.greenhopper.jira:gh-sprint">
                        <customfieldname>Sprint</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue id="2762">Storage NYC 2019-02-25</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10750" key="com.atlassian.jira.plugin.system.customfieldtypes:textarea">
                        <customfieldname>Steps To Reproduce</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>&lt;p&gt;Upgrade RS secondary from 3.4.15 to 3.6.5 and watch CPU usage&lt;/p&gt;</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    <customfield id="customfield_10053" key="com.atlassian.jira.ext.charting:timeinstatus">
                        <customfieldname>Time In Status</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_22870" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Triagers</customfieldname>
                        <customfieldvalues>
                                    <customfieldvalue><![CDATA[dmitry.agranat@mongodb.com]]></customfieldvalue>
        <customfieldvalue><![CDATA[bruce.lucas@mongodb.com]]></customfieldvalue>
    

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_14350" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>serverRank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hu1ltr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                    </customfields>
    </item>
</channel>
</rss>