<!-- 
RSS generated by JIRA (9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66) at Thu Feb 08 05:03:40 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>MongoDB Jira</title>
    <link>https://jira.mongodb.org</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.7.1</version>
        <build-number>970001</build-number>
        <build-date>13-04-2023</build-date>
    </build-info>


<item>
            <title>[SERVER-43632] Possible memory leak in 4.0</title>
                <link>https://jira.mongodb.org/browse/SERVER-43632</link>
                <project id="10000" key="SERVER">Core Server</project>
                    <description>&lt;p&gt;Good day.&lt;/p&gt;

&lt;p&gt;After upgrading to 4.0 we&apos;ve encountered some sort of memory leak in two cases:&lt;/p&gt;

&lt;p&gt;1) In the cluster environment described in &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-43038&quot; title=&quot;Commit point can be stale on slaveDelay nodes and cause memory pressure&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-43038&quot;&gt;&lt;del&gt;SERVER-43038&lt;/del&gt;&lt;/a&gt;. I don&apos;t have diagnostic.data from that time, but I can possibly extract some logs from our logging system.&lt;br/&gt;
 The memory leak occurred after we shut down our slaveDelay instances and lasted until we started them again.&lt;/p&gt;

&lt;p&gt;2) Recently in a replica set configuration (I attached mongodb memory graphs from two servers (each one was primary at some time), server logs, diagnostic.data, and the configuration file).&lt;/p&gt;

&lt;p&gt;The spike around 2-3pm 19.09 on db1-1 was the restoration of the &apos;drive&apos; database with mongorestore. I also did an rs.stepdown() and restarted server instance db1-1 at 4:30pm 21.09 due to memory pressure (the memory leak moved to the new primary). At 11:25pm 21.09 we disabled most of the processes that work with that replica set.&lt;/p&gt;

&lt;p&gt;After I started the workload again I cannot reproduce the memory leak. On the contrary, you can see resident memory decrease on db1-2 from 3pm 23.09 up to date.&lt;/p&gt;</description>
                <environment>Linux 4.15.0-55-generic #60~16.04.2-Ubuntu SMP Thu Jul 4 09:03:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux&lt;br/&gt;
&lt;br/&gt;
db version v4.0.11&lt;br/&gt;
git version: 417d1a712e9f040d54beca8e4943edce218e9a8c&lt;br/&gt;
OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016&lt;br/&gt;
allocator: tcmalloc&lt;br/&gt;
modules: none&lt;br/&gt;
build environment:&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;distmod: ubuntu1604&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;distarch: x86_64&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;target_arch: x86_64&lt;br/&gt;
&lt;br/&gt;
Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz&lt;br/&gt;
32GB of RAM&lt;br/&gt;
&lt;br/&gt;
&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;total        used        free      shared  buff/cache   available&lt;br/&gt;
Mem:       32899580     4143572    23439176        1328     5316832    28245712&lt;br/&gt;
Swap:       7812092         768     7811324</environment>
        <key id="939901">SERVER-43632</key>
            <summary>Possible memory leak in 4.0</summary>
                <type id="1" iconUrl="https://jira.mongodb.org/secure/viewavatar?size=xsmall&amp;avatarId=14703&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.mongodb.org/images/icons/priorities/major.svg">Major - P3</priority>
                        <status id="6" iconUrl="https://jira.mongodb.org/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="3">Duplicate</resolution>
                                        <assignee username="ben.caimano@mongodb.com">Benjamin Caimano</assignee>
                                    <reporter username="vergil@drive.net">Alexander Ivanes</reporter>
                        <labels>
                    </labels>
                <created>Wed, 25 Sep 2019 16:21:10 +0000</created>
                <updated>Mon, 15 Nov 2021 16:22:06 +0000</updated>
                            <resolved>Thu, 12 Dec 2019 21:25:27 +0000</resolved>
                                    <version>4.0.11</version>
                                                    <component>Networking</component>
                                        <votes>0</votes>
                                    <watches>12</watches>
                                                                                                                <comments>
                            <comment id="2594501" author="ben.caimano" created="Mon, 9 Dec 2019 19:15:30 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=vergil&quot; class=&quot;user-hover&quot; rel=&quot;vergil&quot;&gt;vergil&lt;/a&gt;, I was able to reproduce the issue by repeatedly restarting a secondary inside one of our test harnesses. I believe that this memory accumulation bug is fixed with &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-44567&quot; title=&quot;Reimplement CommandState destructors for v4.0&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-44567&quot;&gt;&lt;del&gt;SERVER-44567&lt;/del&gt;&lt;/a&gt; which should be released with r4.0.14. If you have any further concerns, please feel free to reopen or submit a new ticket.&lt;/p&gt;</comment>
                            <comment id="2535921" author="vergil@drive.net" created="Tue, 12 Nov 2019 09:19:43 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=ben.caimano&quot; class=&quot;user-hover&quot; rel=&quot;ben.caimano&quot;&gt;ben.caimano&lt;/a&gt;, thank you for the update. We look forward to it.&lt;/p&gt;</comment>
                            <comment id="2529914" author="ben.caimano" created="Mon, 11 Nov 2019 21:58:09 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=vergil&quot; class=&quot;user-hover&quot; rel=&quot;vergil&quot;&gt;vergil&lt;/a&gt;, we think that perhaps you&apos;ve run into a memory accumulation bug in the networking layer that we introduced in v4.0 and fixed in v4.2. While the v4.2 commit itself is not exactly something we can backport, we believe that we can chunk out the necessary pieces and reapply them to v4.0. I&apos;m tracking this effort with &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-44567&quot; title=&quot;Reimplement CommandState destructors for v4.0&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-44567&quot;&gt;&lt;del&gt;SERVER-44567&lt;/del&gt;&lt;/a&gt;.&lt;/p&gt;</comment>
                            <comment id="2487669" author="siyuan.zhou@10gen.com" created="Thu, 17 Oct 2019 21:09:47 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=matthew.russotto&quot; class=&quot;user-hover&quot; rel=&quot;matthew.russotto&quot;&gt;matthew.russotto&lt;/a&gt;, I thought when restarting heartbeats, we always cancel existing ones since we always track them. Could you please post a link to the code where it behaves unexpectedly?&lt;/p&gt;</comment>
                            <comment id="2487482" author="matthew.russotto" created="Thu, 17 Oct 2019 19:16:54 +0000"  >&lt;p&gt;It appears we&apos;re accumulating the heartbeat callback lambda until we reconnect.&lt;/p&gt;</comment>
                            <comment id="2440556" author="vergil@drive.net" created="Mon, 30 Sep 2019 15:36:57 +0000"  >&lt;p&gt;Thanks for the explanation.&lt;/p&gt;

&lt;p&gt;Regarding scenario #1: it looks like I disabled slaveDelay because of&#160;&lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-43038&quot; title=&quot;Commit point can be stale on slaveDelay nodes and cause memory pressure&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-43038&quot;&gt;&lt;del&gt;SERVER-43038&lt;/del&gt;&lt;/a&gt;&#160;and forgot about that.&lt;/p&gt;

&lt;p&gt;So it&apos;s just a 4th member of the replica set (hidden).&lt;/p&gt;</comment>
                            <comment id="2440397" author="daniel.hatcher" created="Mon, 30 Sep 2019 15:23:31 +0000"  >&lt;p&gt;Thanks!&lt;/p&gt;

&lt;p&gt;Your scenario #2 is a known issue in MongoDB. As each query adds a field, the query shape changes. Thus, we need to go through the query planning stage to determine what index to use and we store that info in our query plan cache. As you&apos;ve noticed, that plan cache can take up significant system memory if you&apos;re constantly adding larger and larger queries. It is not normally an issue except in cases like this where you had an application bug always creating larger queries. We have several tickets in our backlog to address this problem; most notably &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-40361&quot; title=&quot;Reduce memory footprint of plan cache entries&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-40361&quot;&gt;&lt;del&gt;SERVER-40361&lt;/del&gt;&lt;/a&gt; is open as a general ticket about reducing the overall memory footprint.&lt;/p&gt;

&lt;p&gt;Regarding scenario #1, can you please confirm how many nodes you expect to see in the replica set and which ones are delayed? From the data I have, there are four nodes configured but none of them have a delay set.&lt;/p&gt;</comment>
                            <comment id="2440358" author="vergil@drive.net" created="Mon, 30 Sep 2019 15:05:06 +0000"  >&lt;p&gt;Yes, sorry.&lt;/p&gt;

&lt;p&gt;Case #1: Dorado1 &#8211; machine name, driveFS-18 &#8211; mongod instance name. Log time for driveFS-18 is in UTC+0, graphs &#8211; UTC+3.&lt;/p&gt;

&lt;p&gt;Case #2: Logs and graphs are in UTC+3, all files with the name drive1-db1*.&lt;/p&gt;

&lt;p&gt;Again, sorry for the difference in timezones; for driveFS-18 I had to convert the logs from our logging system. (For drive1-db1 I just took the log file as you recommended earlier.)&lt;/p&gt;

&lt;p&gt;Update: all time marks in the previous message were in UTC+3.&lt;/p&gt;</comment>
                            <comment id="2440316" author="daniel.hatcher" created="Mon, 30 Sep 2019 14:54:39 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=vergil&quot; class=&quot;user-hover&quot; rel=&quot;vergil&quot;&gt;vergil&lt;/a&gt;, can you please clarify which file names apply to which scenario? Also, are all the times you mention in UTC? I want to make sure we&apos;re on the same page.&lt;/p&gt;</comment>
                            <comment id="2438549" author="vergil@drive.net" created="Sun, 29 Sep 2019 14:25:08 +0000"  >&lt;p&gt;I&apos;ve successfully reproduced both cases.&lt;/p&gt;

&lt;p&gt;1. After the shutdown of all slave delayed shards (around 2-3 pm 27.09) memory usage goes up. I&apos;ve attached diagnostic.data, logs, and a memory chart (both the server and the mongod instance; this behavior was on all 23 instances of mongod on this machine). The heap profiler was enabled on that instance. I brought the slave delayed instance back around 3 am 29.09.&lt;/p&gt;

&lt;p&gt;2. This one was a little bit tricky. It turns out there was a bug in our code that monotonically increased the size of the search query. We fixed it some time ago, so I asked to bring the bug back so we could try to reproduce this behavior: each consecutive query adds another&lt;/p&gt;
&lt;pre&gt;&apos;{ dir: { $ne: &quot;video&quot; } }&apos;&lt;/pre&gt;
&lt;p&gt;part and memory usage grows with each query. After some time the query can be relatively large (about 90KB).&lt;/p&gt;

&lt;p&gt;I don&apos;t know if this is intended behavior or not, so please look at the attached logs.&lt;/p&gt;

&lt;p&gt;We published the &apos;bugged&apos; version at 5:30 pm 27.09 and unpublished it at 10 am the next day.&lt;/p&gt;</comment>
                            <comment id="2435070" author="daniel.hatcher" created="Thu, 26 Sep 2019 19:12:49 +0000"  >&lt;p&gt;Thanks for the clarifications. I believe we&apos;ll have to wait for the heap profiler output to obtain any more useful information so I will put this ticket into Waiting status until then.&lt;/p&gt;</comment>
                            <comment id="2434614" author="vergil@drive.net" created="Thu, 26 Sep 2019 15:07:45 +0000"  >&lt;p&gt;Thanks. I misread and uploaded the file to your portal too, sorry &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.mongodb.org/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt;.&lt;/p&gt;</comment>
                            <comment id="2434541" author="daniel.hatcher" created="Thu, 26 Sep 2019 14:43:14 +0000"  >&lt;p&gt;Apologies, I meant to set up an upload portal for you to use. You can upload any future files relevant to this case via &lt;a href=&quot;https://10gen-httpsupload.s3.amazonaws.com/upload_forms/d5dd1433-62df-4f34-9f23-b8782d2249ec.html&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;this link&lt;/a&gt;. Only MongoDB engineers will be able to access them and they will automatically be deleted after 90 days. I&apos;ve grabbed the log file link you just sent and uploaded it to our portal so feel free to remove it from your S3.&lt;/p&gt;</comment>
                            <comment id="2434337" author="vergil@drive.net" created="Thu, 26 Sep 2019 13:40:37 +0000"  >&lt;p&gt;Quick update on the first case: the memory leak happened on all shards in the sharded cluster, primary and secondary.&lt;/p&gt;

&lt;p&gt;I&apos;ve attached a memory graph from one of the shards and shared &lt;a href=&quot;https://drive-public-eu.s3.eu-central-1.amazonaws.com/mongodb/driveFS-2.log&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;the link&lt;/a&gt; to the log file (notify me when you no longer need it so I can remove the file from S3 storage). Note that time in the log is in UTC+0, but the graph shows UTC+3.&lt;/p&gt;

&lt;p&gt;We shut down the servers around 6:30 am 18.08 (9:30 am on the graph).&lt;/p&gt;

&lt;p&gt;&lt;span class=&quot;image-wrap&quot; style=&quot;&quot;&gt;&lt;a id=&quot;231835_thumb&quot; href=&quot;https://jira.mongodb.org/secure/attachment/231835/231835_driveFS-2-memory.png&quot; title=&quot;driveFS-2-memory.png&quot; file-preview-type=&quot;image&quot; file-preview-id=&quot;231835&quot; file-preview-title=&quot;driveFS-2-memory.png&quot;&gt;&lt;img src=&quot;https://jira.mongodb.org/secure/thumbnail/231835/_thumb_231835.png&quot; style=&quot;border: 0px solid black&quot; role=&quot;presentation&quot;/&gt;&lt;/a&gt;&lt;/span&gt;&lt;/p&gt;</comment>
                            <comment id="2433445" author="vergil@drive.net" created="Wed, 25 Sep 2019 19:40:22 +0000"  >&lt;p&gt;Daniel, thanks for the info.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;&#160;For the first case you mentioned, we haven&apos;t seen evidence of a memory leak in &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-43038&quot; title=&quot;Commit point can be stale on slaveDelay nodes and cause memory pressure&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-43038&quot;&gt;&lt;del&gt;SERVER-43038&lt;/del&gt;&lt;/a&gt;. However, we have been looking into when slaveDelay is configured and you are saying that the rise in memory happened when it was disabled.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;To clarify: in that case the memory leak appeared after the 4th server in the replica set became unreachable (at first I thought it was a leak in the logging processor in mongod, because of the significantly increased number of messages). I will check tomorrow whether it was only on the primary or on secondaries too.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;I am not sure how your logging facility will parse the information so it would be very useful if you could write to a file instead. Additionally, there will be a performance hit to enabling this feature so I recommend only leaving it on while we are troubleshooting.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Fortunately this is a test instance of the DB, so I can easily enable the heap profiling setting. I will also try to recreate the workload as closely as possible.&lt;/p&gt;</comment>
                            <comment id="2433397" author="daniel.hatcher" created="Wed, 25 Sep 2019 19:13:07 +0000"  >&lt;p&gt;For the first case you mentioned, we haven&apos;t seen evidence of a memory leak in &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-43038&quot; title=&quot;Commit point can be stale on slaveDelay nodes and cause memory pressure&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-43038&quot;&gt;&lt;del&gt;SERVER-43038&lt;/del&gt;&lt;/a&gt;. However, we have been looking into when slaveDelay is configured and you are saying that the rise in memory happened when it was disabled.&lt;/p&gt;

&lt;p&gt;For the second case, I do see the long rise in resident memory on &quot;rainbow1-16&quot;. Unfortunately, there&apos;s not enough information in the data we have now as to what caused it. &lt;/p&gt;

&lt;p&gt;I recommend that you enable the following on the nodes in the environment that you think is most likely to experience the rise in memory again:&lt;/p&gt;
&lt;pre&gt;--setParameter heapProfilingEnabled=true&lt;/pre&gt;

&lt;p&gt;I am not sure how your logging facility will parse the information so it would be very useful if you could write to a file instead. Additionally, there will be a performance hit to enabling this feature so I recommend only leaving it on while we are troubleshooting. &lt;/p&gt;

&lt;p&gt;If you do see a rise in memory on a node with this setting enabled, please provide the full &quot;diagnostic.data&quot; and &lt;tt&gt;mongod&lt;/tt&gt; logs for the relevant node.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10010">
                    <name>Duplicate</name>
                                            <outwardlinks description="duplicates">
                                        <issuelink>
            <issuekey id="997141">SERVER-44567</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                            <issuelinktype id="10012">
                    <name>Related</name>
                                            <outwardlinks description="related to">
                                        <issuelink>
            <issuekey id="759266">SERVER-41031</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="906686">SERVER-43038</issuekey>
        </issuelink>
                            </outwardlinks>
                                                        </issuelinktype>
                    </issuelinks>
                <attachments>
                            <attachment id="232146" name="Screen Shot 2019-09-30 at 12.15.18 PM.png" size="164119" author="daniel.hatcher@mongodb.com" created="Mon, 30 Sep 2019 16:16:28 +0000"/>
                            <attachment id="231752" name="diagnostic.data-drive1-db1-1.tar" size="130969600" author="vergil" created="Wed, 25 Sep 2019 16:15:20 +0000"/>
                            <attachment id="231751" name="diagnostic.data-drive1-db1-2.tar" size="116960256" author="vergil" created="Wed, 25 Sep 2019 16:13:56 +0000"/>
                            <attachment id="232063" name="dorado1-memory-update1.png" size="32769" author="vergil" created="Sun, 29 Sep 2019 14:24:53 +0000"/>
                            <attachment id="231750" name="drive1-db1-1-memory.png" size="38570" author="vergil" created="Wed, 25 Sep 2019 16:06:44 +0000"/>
                            <attachment id="231746" name="drive1-db1-1-part1.log" size="156103696" author="vergil" created="Wed, 25 Sep 2019 16:18:11 +0000"/>
                            <attachment id="231745" name="drive1-db1-1-part2.log" size="78111804" author="vergil" created="Wed, 25 Sep 2019 16:20:54 +0000"/>
                            <attachment id="231748" name="drive1-db1-2-memory.png" size="41647" author="vergil" created="Wed, 25 Sep 2019 16:06:44 +0000"/>
                            <attachment id="231749" name="drive1-db1-2.log" size="43937148" author="vergil" created="Wed, 25 Sep 2019 16:07:46 +0000"/>
                            <attachment id="232064" name="drive1-db1-memory-update1.png" size="33596" author="vergil" created="Sun, 29 Sep 2019 14:24:53 +0000"/>
                            <attachment id="232065" name="driveFS-18-memory-update1.png" size="34890" author="vergil" created="Sun, 29 Sep 2019 14:24:53 +0000"/>
                            <attachment id="231835" name="driveFS-2-memory.png" size="52766" author="vergil" created="Thu, 26 Sep 2019 13:33:07 +0000"/>
                            <attachment id="231747" name="mongodb-drive1-db1.conf" size="458" author="vergil" created="Wed, 25 Sep 2019 16:06:44 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                <customfield id="customfield_10050" key="com.atlassian.jira.toolkit:comments">
                        <customfieldname># Replies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>16.0</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_18555" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname># of Sprints</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>4.0</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_13552" key="com.go2group.jira.plugin.crm:crm_generic_field">
                        <customfieldname>Case</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue><![CDATA[[5002K00000oeaDjQAI]]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10055" key="com.atlassian.jira.ext.charting:firstresponsedate">
                        <customfieldname>Date of 1st Reply</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Wed, 25 Sep 2019 19:13:07 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10052" key="com.atlassian.jira.toolkit:dayslastcommented">
                        <customfieldname>Days since reply</customfieldname>
                        <customfieldvalues>
                                        4 years, 9 weeks, 2 days ago
    
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_18254" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Dependencies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue><![CDATA[]]></customfieldvalue>


                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_15850" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10057" key="com.atlassian.jira.toolkit:lastusercommented">
                        <customfieldname>Last comment by Customer</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>true</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10056" key="com.atlassian.jira.toolkit:lastupdaterorcommenter">
                        <customfieldname>Last commenter</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>luke.bonanomi@mongodb.com</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_11151" key="com.atlassian.jira.toolkit:LastCommentDate">
                        <customfieldname>Last public comment date</customfieldname>
                        <customfieldvalues>
                            4 years, 9 weeks, 2 days ago
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                    <customfield id="customfield_10032" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Operating System</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10026"><![CDATA[ALL]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                <customfield id="customfield_10051" key="com.atlassian.jira.toolkit:participants">
                        <customfieldname>Participants</customfieldname>
                        <customfieldvalues>
                                        <customfieldvalue>vergil@drive.net</customfieldvalue>
            <customfieldvalue>ben.caimano@mongodb.com</customfieldvalue>
            <customfieldvalue>daniel.hatcher@mongodb.com</customfieldvalue>
            <customfieldvalue>matthew.russotto@mongodb.com</customfieldvalue>
            <customfieldvalue>siyuan.zhou@mongodb.com</customfieldvalue>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_14254" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Product Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hvsfh3:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                <customfield id="customfield_12550" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>2|hvh0zr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10558" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_23361" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Requested By</customfieldname>
                        <customfieldvalues>
                                

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10557" key="com.pyxis.greenhopper.jira:gh-sprint">
                        <customfieldname>Sprint</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue id="3261">Repl 2019-10-21</customfieldvalue>
                            <customfieldvalue id="3380">Service Arch 2019-11-18</customfieldvalue>
                            <customfieldvalue id="3381">Service Arch 2019-12-02</customfieldvalue>
                            <customfieldvalue id="3382">Service Arch 2019-12-16</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_10053" key="com.atlassian.jira.ext.charting:timeinstatus">
                        <customfieldname>Time In Status</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_22870" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Triagers</customfieldname>
                        <customfieldvalues>
                                

                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_14350" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>serverRank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hvs1qf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>