<!-- 
RSS generated by JIRA (9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66) at Thu Feb 08 05:00:03 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>MongoDB Jira</title>
    <link>https://jira.mongodb.org</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.7.1</version>
        <build-number>970001</build-number>
        <build-date>13-04-2023</build-date>
    </build-info>


<item>
            <title>[SERVER-42273] Introduce a &quot;force&quot; option to `moveChunk` to allow migrating jumbo chunks</title>
                <link>https://jira.mongodb.org/browse/SERVER-42273</link>
                <project id="10000" key="SERVER">Core Server</project>
                    <description>&lt;p&gt;Currently, if a chunk is larger than &lt;a href=&quot;https://github.com/mongodb/mongo/blob/a5d4ab967af9cbba17e6aa5afadca35927bd74c1/src/mongo/s/balancer_configuration.cpp#L87&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;64MB by default&lt;/a&gt; or &lt;a href=&quot;https://github.com/mongodb/mongo/blob/a5d4ab967af9cbba17e6aa5afadca35927bd74c1/src/mongo/s/balancer_configuration.cpp#L410&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;1GB max&lt;/a&gt;, the balancer will mark it as jumbo and will refuse to move it.&lt;/p&gt;

&lt;p&gt;It is possible to manually issue a &lt;a href=&quot;https://docs.mongodb.com/manual/reference/command/moveChunk/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;&lt;tt&gt;moveChunk&lt;/tt&gt;&lt;/a&gt; command and pass the unsupported and undocumented &lt;a href=&quot;https://github.com/mongodb/mongo/blob/a5d4ab967af9cbba17e6aa5afadca35927bd74c1/src/mongo/s/commands/cluster_move_chunk_cmd.cpp#L130&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;&lt;tt&gt;maxChunkSizeBytes&lt;/tt&gt;&lt;/a&gt; parameter, which overrides the check for max chunk size. Even so, given sufficient write load on the chunk being migrated, the memory usage on the donor shard could &lt;a href=&quot;https://github.com/mongodb/mongo/blob/a5d4ab967af9cbba17e6aa5afadca35927bd74c1/src/mongo/db/s/migration_chunk_cloner_source_legacy.cpp#L366&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;exceed 500MB&lt;/a&gt;, in which case the migration will still fail.&lt;/p&gt;

&lt;p&gt;This ticket proposes adding a new &lt;tt&gt;forceJumbo&lt;/tt&gt; option to the &lt;tt&gt;moveChunk&lt;/tt&gt; command in order to allow large chunks to be migrated, at the possible expense of blocking writes to the owning collection on the shard in question. The option will introduce the following deviations from the way migration currently operates:&lt;/p&gt;
&lt;ol&gt;
	&lt;li&gt;It will skip the step that &lt;a href=&quot;https://github.com/mongodb/mongo/blob/a5d4ab967af9cbba17e6aa5afadca35927bd74c1/src/mongo/db/s/migration_chunk_cloner_source_legacy.cpp#L801&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;sorts the cloned chunk&apos;s document ids&lt;/a&gt; and will instead return documents in shard key order (this means it will never return a &apos;jumbo chunk&apos; error)&lt;/li&gt;
	&lt;li&gt;If the memory usage &lt;a href=&quot;https://github.com/mongodb/mongo/blob/a5d4ab967af9cbba17e6aa5afadca35927bd74c1/src/mongo/db/s/migration_chunk_cloner_source_legacy.cpp#L366&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;exceeds 500MB&lt;/a&gt;, instead of failing the migration it will &lt;a href=&quot;https://github.com/mongodb/mongo/blob/a5d4ab967af9cbba17e6aa5afadca35927bd74c1/src/mongo/db/s/migration_chunk_cloner_source_legacy.cpp#L296&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;enter the critical section&lt;/a&gt; (this means that writes to the collection being migrated may block for a longer period of time)&lt;/li&gt;
&lt;/ol&gt;
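&lt;p&gt;As a sketch of the proposal (the parameter name and exact syntax shown here are illustrative, not final), a forced migration might be issued from the mongo shell as:&lt;/p&gt;
&lt;pre&gt;// run against mongos; &apos;forceJumbo&apos; is the proposed opt-in flag,
// and the namespace, shard key value, and shard name are placeholders
db.adminCommand({
    moveChunk: &quot;test.coll&quot;,
    find: { shardKey: MinKey },
    to: &quot;shard0001&quot;,
    forceJumbo: true
})&lt;/pre&gt;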
</description>
                <environment></environment>
        <key id="860720">SERVER-42273</key>
            <summary>Introduce a &quot;force&quot; option to `moveChunk` to allow migrating jumbo chunks</summary>
                <type id="4" iconUrl="https://jira.mongodb.org/secure/viewavatar?size=xsmall&amp;avatarId=14710&amp;avatarType=issuetype">Improvement</type>
                                            <priority id="3" iconUrl="https://jira.mongodb.org/images/icons/priorities/major.svg">Major - P3</priority>
                        <status id="6" iconUrl="https://jira.mongodb.org/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="13201">Fixed</resolution>
                                        <assignee username="janna.golden@mongodb.com">Janna Golden</assignee>
                                    <reporter username="ratika.gandhi@mongodb.com">Ratika Gandhi</reporter>
                        <labels>
                    </labels>
                <created>Thu, 18 Jul 2019 16:04:21 +0000</created>
                <updated>Sun, 29 Oct 2023 22:18:59 +0000</updated>
                            <resolved>Tue, 5 Nov 2019 18:20:15 +0000</resolved>
                                                    <fixVersion>4.3.1</fixVersion>
                                                        <votes>1</votes>
                                    <watches>16</watches>
                                                                                                                <comments>
                            <comment id="2518503" author="janna.golden" created="Tue, 5 Nov 2019 18:28:10 +0000"  >&lt;p&gt;The following behavior changes were made as a part of this ticket:&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Changes to moveChunk command:&lt;/b&gt;&lt;br/&gt;
A new optional boolean parameter &apos;forceJumbo&apos; has been added; it defaults to false. If set to true and the chunk would otherwise have been deemed too large to move, the donor shard will enter the critical section early and writes will be blocked during the cloning phase. This is important to note, as it can cause a long period of time during which ops on this collection are blocked.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Changes to balancer configuration settings:&lt;/b&gt;&lt;br/&gt;
A new field &apos;attemptToBalanceJumboChunks&apos; in the &apos;balancer&apos; document in the config.settings collection. This is a boolean field that defaults to false. The document will now look something like:&lt;/p&gt;
&lt;pre&gt;{&quot;_id&quot;: &quot;balancer&quot;, &quot;mode&quot;: &quot;full&quot;, &quot;stopped&quot;: false, &quot;attemptToBalanceJumboChunks&quot;: false}&lt;/pre&gt;

&lt;p&gt;If &apos;attemptToBalanceJumboChunks&apos; is set to true, the balancer will schedule migrations that attempt to move large chunks as long as the chunk is not marked &apos;jumbo&apos; in config.chunks. A chunk is marked &apos;jumbo&apos; only after an attempt to split or move a large chunk has failed because of its size or the size of the transfer mods queue. The balancer should not continually try to schedule the migration of a chunk that has previously failed for either of these reasons, to avoid the risk of forever scheduling the same migration. A user can run &apos;clearJumboFlag&apos; so that the balancer will schedule this migration in the future, or they can choose to use the moveChunk command to manually move the chunk.&lt;/p&gt;
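&lt;p&gt;For example (an illustrative mongo shell snippet, assuming a connection to mongos), the balancer setting can be enabled with an update to config.settings:&lt;/p&gt;
&lt;pre&gt;use config
// opt in to balancer-driven migration of large chunks
db.settings.update(
    { _id: &quot;balancer&quot; },
    { $set: { attemptToBalanceJumboChunks: true } },
    { upsert: true }
)&lt;/pre&gt;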

&lt;p&gt;Unlike the new behavior of the moveChunk command above, for balancer-scheduled migrations the donor shard will not enter the critical section early, and if the transfer mods queue (the queue of writes that modify any documents being migrated) surpasses 500MB of memory the migration will fail. This avoids unintended &quot;down time&quot; in case a user is unaware that moving a large chunk can cause a long period during which ops on the collection are blocked.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Changes to shard removal:&lt;/b&gt;&lt;br/&gt;
If a shard is in draining mode, meaning it is being removed, the balancer will also attempt to schedule migrations of any large chunks currently belonging to this shard. The balancer will behave the same as if &apos;attemptToBalanceJumboChunks&apos; is set to true (described above).&lt;/p&gt;</comment>
                            <comment id="2518308" author="xgen-internal-githook" created="Tue, 5 Nov 2019 17:02:50 +0000"  >&lt;p&gt;Author:&lt;/p&gt;
{&apos;username&apos;: &apos;jannaerin&apos;, &apos;email&apos;: &apos;janna.golden@mongodb.com&apos;, &apos;name&apos;: &apos;Janna Golden&apos;}
&lt;p&gt;Message: &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-42273&quot; title=&quot;Introduce a &amp;quot;force&amp;quot; option to `moveChunk` to allow migrating jumbo chunks&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-42273&quot;&gt;&lt;del&gt;SERVER-42273&lt;/del&gt;&lt;/a&gt; Introduce &apos;force&apos; option to &apos;moveChunk&apos; to allow migrating jumbo chunks&lt;br/&gt;
Branch: master&lt;br/&gt;
&lt;a href=&quot;https://github.com/mongodb/mongo/commit/c150b588cb4400e0324becd916de2a699988af99&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/mongodb/mongo/commit/c150b588cb4400e0324becd916de2a699988af99&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="2336397" author="alyson.cabral" created="Mon, 22 Jul 2019 13:57:47 +0000"  >&lt;p&gt;Yes, I agree with everything you said. But for my clarity, this is less about how big the chunk is and more about the write throughput on the chunk, correct?&lt;/p&gt;</comment>
                            <comment id="2336320" author="kaloian.manassiev" created="Mon, 22 Jul 2019 13:33:24 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=alyson.cabral&quot; class=&quot;user-hover&quot; rel=&quot;alyson.cabral&quot;&gt;alyson.cabral&lt;/a&gt;, correct. To be more specific here are the trade-offs:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;Entering the critical section &lt;b&gt;too early&lt;/b&gt; means that too many write operations will get blocked for a possibly unbounded amount of time (e.g., a 1TB jumbo chunk could take a day to migrate).&lt;/li&gt;
	&lt;li&gt;Entering the critical section &lt;b&gt;too late&lt;/b&gt; means that the write modifications which accrue in memory could exceed the amount of available memory on the server and cause an OOM crash.&lt;/li&gt;
&lt;/ul&gt;


&lt;blockquote&gt;&lt;p&gt;I&apos;d like us to attempt to automatically move the chunk during shard removal and only require the manual move chunk if you need to enter the critical section early.&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;To make sure I understand what you are suggesting - &lt;tt&gt;moveChunk&lt;/tt&gt; as part of shard removal should ignore the &quot;jumbo&quot; flag and not skip jumbo chunks, but if as part of migration it is discovered that the in-memory usage of the change log to the chunks has exceeded 500MB, still fail the migration, which would require manual intervention? This effectively requires a third state of that option, which is something like &quot;forceJumbo But If Chunk Is Not Too Big&quot;.&lt;/p&gt;</comment>
                            <comment id="2336197" author="alyson.cabral" created="Mon, 22 Jul 2019 11:49:53 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=kaloian.manassiev&quot; class=&quot;user-hover&quot; rel=&quot;kaloian.manassiev&quot;&gt;kaloian.manassiev&lt;/a&gt; this is most impactful when you enter the critical section early because you&apos;re queueing too many writes to that chunk, right? Stopping all writes to the collection.&lt;/p&gt;

&lt;p&gt;I&apos;d like us to attempt to automatically move the chunk during shard removal and only require the manual move chunk if you need to enter the critical section early.&lt;/p&gt;</comment>
                            <comment id="2336127" author="kaloian.manassiev" created="Mon, 22 Jul 2019 09:49:51 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=josef.ahmad&quot; class=&quot;user-hover&quot; rel=&quot;josef.ahmad&quot;&gt;josef.ahmad&lt;/a&gt;/&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=alyson.cabral&quot; class=&quot;user-hover&quot; rel=&quot;alyson.cabral&quot;&gt;alyson.cabral&lt;/a&gt;/&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=cailin.nelson&quot; class=&quot;user-hover&quot; rel=&quot;cailin.nelson&quot;&gt;cailin.nelson&lt;/a&gt;, for this proposal to be used, it still requires the &lt;tt&gt;moveChunk&lt;/tt&gt; command to be manually issued with the &lt;tt&gt;forceJumbo&lt;/tt&gt;&#160;parameter, which means that shard removal scenarios will still not work only with the balancer (because it will not send that option by default).&lt;/p&gt;

&lt;p&gt;In order to make remove shard work fully in the presence of jumbo chunks, we can do one of the following:&lt;/p&gt;
&lt;ol&gt;
	&lt;li&gt;(Atlas-only change): Make Atlas manually move any leftover jumbo chunks by passing this parameter&lt;/li&gt;
	&lt;li&gt;(Server + possibly Atlas change): Make the &apos;forceJumbo&apos; parameter configurable under &lt;tt&gt;config.settings&lt;/tt&gt; so that the balancer can pick it up&lt;/li&gt;
	&lt;li&gt;(Server-only change): Make the balancer send &lt;tt&gt;forceJumbo&lt;/tt&gt; for any chunks that reside on a shard that is being removed&lt;/li&gt;
&lt;/ol&gt;


&lt;p&gt;I don&apos;t particularly like options (2) and (3), because they give customers the opportunity to unknowingly expose themselves to long stalls. Do you think implementing option (1) makes sense, possibly with some checkbox to warn/opt-in users to this behaviour, with the warning that it may cause stalls?&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10011">
                    <name>Depends</name>
                                                                <inwardlinks description="is depended on by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10320">
                    <name>Documented</name>
                                                                <inwardlinks description="is documented by">
                                        <issuelink>
            <issuekey id="990269">DOCS-13200</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10012">
                    <name>Related</name>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="992854">SERVER-44476</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                <customfield id="customfield_10050" key="com.atlassian.jira.toolkit:comments">
                        <customfieldname># Replies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>6.0</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_18555" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname># of Sprints</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>5.0</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10011" key="com.atlassian.jira.plugin.system.customfieldtypes:radiobuttons">
                        <customfieldname>Backwards Compatibility</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10038"><![CDATA[Fully Compatible]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                    <customfield id="customfield_13552" key="com.go2group.jira.plugin.crm:crm_generic_field">
                        <customfieldname>Case</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue><![CDATA[[5002K00000pDvNvQAK]]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10055" key="com.atlassian.jira.ext.charting:firstresponsedate">
                        <customfieldname>Date of 1st Reply</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Mon, 22 Jul 2019 09:49:51 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10052" key="com.atlassian.jira.toolkit:dayslastcommented">
                        <customfieldname>Days since reply</customfieldname>
                        <customfieldvalues>
                                        4 years, 14 weeks, 1 day ago
    
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_18254" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Dependencies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue><![CDATA[]]></customfieldvalue>


                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_15850" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_17052" key="com.atlassian.jira.plugin.system.customfieldtypes:textarea">
                        <customfieldname>Downstream Changes Summary</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>This ticket changes the behavior of chunk migration such that it is now possible to &#8220;force&#8221; a jumbo chunk to be migrated. There are changes to both the &#8216;moveChunk&#8217; command as well as balancer configuration settings.&lt;br/&gt;
&lt;br/&gt;
Changes to moveChunk command:&lt;br/&gt;
A new optional boolean parameter &amp;#39;forceJumbo&amp;#39; has been added; it defaults to false. If set to true and the chunk would otherwise have been deemed too large to move, the donor shard will enter the critical section early and writes will be blocked during the cloning phase. The migration will *not* fail even if the transfer mods queue (queue of writes that modify any documents being migrated) surpasses 500MB, as it normally would. This is important to note, as a very large queue can cause a long period of time during which ops on this collection are blocked.&lt;br/&gt;
&lt;br/&gt;
Changes to balancer configuration settings:&lt;br/&gt;
A new field &amp;#39;attemptToBalanceJumboChunks&amp;#39; in the &amp;#39;balancer&amp;#39; document in the config.settings collection. This is a boolean field that defaults to false. This document will now look something like {&amp;quot;_id&amp;quot;: &amp;quot;balancer&amp;quot;, &amp;quot;mode&amp;quot;: &amp;quot;full&amp;quot;, &amp;quot;stopped&amp;quot;: false, &amp;quot;attemptToBalanceJumboChunks&amp;quot;: false}. &lt;br/&gt;
&lt;br/&gt;
If &amp;#39;attemptToBalanceJumboChunks&amp;#39; is set to true, the balancer will schedule migrations that attempt to move large chunks as long as the chunk is *not* marked &amp;#39;jumbo&amp;#39; in config.chunks. A chunk is marked &amp;#39;jumbo&amp;#39; only after an attempt to split or move a large chunk has failed because of its size or the size of the transfer mods queue. The balancer should not continually try to schedule the migration of a chunk that has previously failed for either of these reasons, to avoid the risk of forever scheduling the same migration. A user can run &amp;#39;clearJumboFlag&amp;#39; so that the balancer will schedule this migration in the future, or they can choose to use the moveChunk command to manually move the chunk.&lt;br/&gt;
&lt;br/&gt;
Unlike the new behavior of the moveChunk command above, for balancer-scheduled migrations the donor shard will *not* enter the critical section early, and if the transfer mods queue surpasses 500MB of memory the migration *will* fail. This avoids unintended &amp;quot;down time&amp;quot; in case a user is unaware that moving a large chunk can cause a long period during which ops on the collection are blocked.&lt;br/&gt;
&lt;br/&gt;
Changes to shard removal:&lt;br/&gt;
If a shard is in draining mode, meaning it is being removed, the balancer will also attempt to schedule migrations of any large chunks currently belonging to this shard. The balancer will behave the same as if &amp;#39;attemptToBalanceJumboChunks&amp;#39; is set to true (described above).&lt;br/&gt;
&lt;br/&gt;
</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_17050" key="com.atlassian.jira.plugin.system.customfieldtypes:radiobuttons">
                        <customfieldname>Downstream Team Attention</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="16942"><![CDATA[Needed]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10857" key="com.pyxis.greenhopper.jira:gh-epic-link">
                        <customfieldname>Epic Link</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>PM-1406</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                <customfield id="customfield_10057" key="com.atlassian.jira.toolkit:lastusercommented">
                        <customfieldname>Last comment by Customer</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>true</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10056" key="com.atlassian.jira.toolkit:lastupdaterorcommenter">
                        <customfieldname>Last commenter</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>luke.bonanomi@mongodb.com</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_11151" key="com.atlassian.jira.toolkit:LastCommentDate">
                        <customfieldname>Last public comment date</customfieldname>
                        <customfieldvalues>
                            4 years, 14 weeks, 1 day ago
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    <customfield id="customfield_10051" key="com.atlassian.jira.toolkit:participants">
                        <customfieldname>Participants</customfieldname>
                        <customfieldvalues>
                                        <customfieldvalue>alyson.cabral@mongodb.com</customfieldvalue>
            <customfieldvalue>xgen-internal-githook</customfieldvalue>
            <customfieldvalue>janna.golden@mongodb.com</customfieldvalue>
            <customfieldvalue>kaloian.manassiev@mongodb.com</customfieldvalue>
            <customfieldvalue>ratika.gandhi@mongodb.com</customfieldvalue>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_14254" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Product Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hveypb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                <customfield id="customfield_12550" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>2|hvcxqv:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10558" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_23361" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Requested By</customfieldname>
                        <customfieldvalues>
                                

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10557" key="com.pyxis.greenhopper.jira:gh-sprint">
                        <customfieldname>Sprint</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue id="3199">Sharding 2019-09-23</customfieldvalue>
    <customfieldvalue id="3305">Sharding 2019-10-07</customfieldvalue>
    <customfieldvalue id="3306">Sharding 2019-10-21</customfieldvalue>
    <customfieldvalue id="3307">Sharding 2019-11-04</customfieldvalue>
    <customfieldvalue id="3308">Sharding 2019-11-18</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    <customfield id="customfield_17051" key="com.atlassian.jira.plugin.system.customfieldtypes:multicheckboxes">
                        <customfieldname>Teams Impacted</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="16943"><![CDATA[Cloud]]></customfieldvalue>
    <customfieldvalue key="16944"><![CDATA[Docs]]></customfieldvalue>
    <customfieldvalue key="16945"><![CDATA[Drivers]]></customfieldvalue>
    <customfieldvalue key="16946"><![CDATA[Triage and Release]]></customfieldvalue>
    
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10053" key="com.atlassian.jira.ext.charting:timeinstatus">
                        <customfieldname>Time In Status</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_22870" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Triagers</customfieldname>
                        <customfieldvalues>
                                

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_14350" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>serverRank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hvekyn:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                    </customfields>
    </item>
</channel>
</rss>