<!-- 
RSS generated by JIRA (9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66) at Thu Feb 08 04:48:43 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>MongoDB Jira</title>
    <link>https://jira.mongodb.org</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.7.1</version>
        <build-number>970001</build-number>
        <build-date>13-04-2023</build-date>
    </build-info>


<item>
            <title>[SERVER-38356] Forbid dropping oplog in standalone mode on storage engines that support replSetResizeOplog</title>
                <link>https://jira.mongodb.org/browse/SERVER-38356</link>
                <project id="10000" key="SERVER">Core Server</project>
                    <description>&lt;p&gt;This ticket banned dropping the oplog in standalone mode entirely on storage engines that support the &lt;tt&gt;replSetResizeOplog&lt;/tt&gt; command.&lt;/p&gt;

&lt;h3&gt;&lt;a name=&quot;OriginalDescription&quot;&gt;&lt;/a&gt;Original Description&lt;/h3&gt;

&lt;p&gt;Currently the oplog cannot be dropped while running in replset mode, but can be dropped as standalone. Until recently the procedure to resize the oplog included dropping the oplog while in standalone, however, doing this procedure on an uncleanly shutdown 4.0 mongod causes committed writes to be lost (because they only existed in the oplog, and the resize preserves only the final oplog entry, see &lt;a href=&quot;https://jira.mongodb.org/browse/DOCS-12230&quot; title=&quot;Manual oplog resize in 4.0 after unclean shutdown can lose committed writes&quot; class=&quot;issue-link&quot; data-issue-key=&quot;DOCS-12230&quot;&gt;&lt;del&gt;DOCS-12230&lt;/del&gt;&lt;/a&gt; and &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-38174&quot; title=&quot;Starting replica set member standalone can lose committed writes starting in MongoDB 4.0&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-38174&quot;&gt;&lt;del&gt;SERVER-38174&lt;/del&gt;&lt;/a&gt; for more details). It would be much better if attempting this procedure in 4.0 did not result in oplog entries being lost, eg. if dropping the oplog failed.&lt;/p&gt;

&lt;p&gt;Completely forbidding oplog drop (even when standalone) would interfere with the use case of restoring a filesystem snapshot as a test standalone. A better alternative would be to forbid dropping the oplog only if local.system.replset contains documents. This way, users who are sure they want to drop the oplog can do so by first removing the documents from local.system.replset (which can&apos;t be dropped, but can have its contents removed) and then restarting the standalone. Whereas users who are just trying to perform a manual oplog resize will be stopped before any data loss.&lt;/p&gt;

&lt;p&gt;If we choose not to do this, then at the very least we should improve the &quot;standalone-but-replset-config-exists&quot; startup warning to specifically warn against manually resizing the oplog.&lt;/p&gt;</description>
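The guard proposed above (refuse the drop while local.system.replset still contains documents) can be sketched as follows. This is an illustrative Python sketch with hypothetical names; the actual server implements any such check in C++.

```python
# Sketch of the proposed guard (hypothetical names; the real check would
# live in the C++ server's drop-collection path).

def can_drop_oplog(is_standalone: bool, replset_config_docs: int) -> bool:
    """Return True if dropping local.oplog.rs should be allowed."""
    if not is_standalone:
        # Never droppable while running with --replSet.
        return False
    # Standalone: allow the drop only once the replica-set config
    # documents have been explicitly removed from local.system.replset.
    return replset_config_docs == 0

# A standalone restarted from a replica-set data directory still has the
# config document, so the drop is refused until the user removes it.
assert can_drop_oplog(is_standalone=True, replset_config_docs=1) is False
assert can_drop_oplog(is_standalone=True, replset_config_docs=0) is True
```

This captures the intent in the description: users restoring a snapshot as a test standalone can still drop the oplog after clearing local.system.replset, while users mid-way through a manual resize are stopped before data loss.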
                <environment></environment>
        <key id="641799">SERVER-38356</key>
            <summary>Forbid dropping oplog in standalone mode on storage engines that support replSetResizeOplog</summary>
                <type id="4" iconUrl="https://jira.mongodb.org/secure/viewavatar?size=xsmall&amp;avatarId=14710&amp;avatarType=issuetype">Improvement</type>
                                            <priority id="3" iconUrl="https://jira.mongodb.org/images/icons/priorities/major.svg">Major - P3</priority>
                        <status id="6" iconUrl="https://jira.mongodb.org/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="13201">Fixed</resolution>
                                        <assignee username="vishnu.kaushik@mongodb.com">Vishnu Kaushik</assignee>
                                    <reporter username="kevin.pulo@mongodb.com">Kevin Pulo</reporter>
                        <labels>
                    </labels>
                <created>Mon, 3 Dec 2018 03:36:52 +0000</created>
                <updated>Sun, 29 Oct 2023 22:26:10 +0000</updated>
                            <resolved>Mon, 8 Jul 2019 20:19:25 +0000</resolved>
                                    <version>4.0.4</version>
                                    <fixVersion>4.2.1</fixVersion>
                    <fixVersion>4.3.1</fixVersion>
                                    <component>Replication</component>
                                        <votes>0</votes>
                                    <watches>17</watches>
                                                                                                                <comments>
                            <comment id="3041048" author="xgen-internal-githook" created="Wed, 15 Apr 2020 17:42:49 +0000"  >&lt;p&gt;Author:&lt;/p&gt;
{&apos;name&apos;: &apos;Tess Avitabile&apos;, &apos;email&apos;: &apos;tess.avitabile@mongodb.com&apos;, &apos;username&apos;: &apos;tessavitabile&apos;}
&lt;p&gt;Message: Revert &quot;&lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-38356&quot; title=&quot;Forbid dropping oplog in standalone mode on storage engines that support replSetResizeOplog&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-38356&quot;&gt;&lt;del&gt;SERVER-38356&lt;/del&gt;&lt;/a&gt; added functionality to forbid dropping the oplog, modified tests to get around Evergreen issue&quot;&lt;/p&gt;

&lt;p&gt;This reverts commit 58e4edb8237288f45f55cd8a59ea96a955489353.&lt;br/&gt;
Branch: v4.0&lt;br/&gt;
&lt;a href=&quot;https://github.com/mongodb/mongo/commit/3715b6221884b30b15f183f813675e27f30123eb&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/mongodb/mongo/commit/3715b6221884b30b15f183f813675e27f30123eb&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="2405770" author="xgen-internal-githook" created="Tue, 3 Sep 2019 17:54:37 +0000"  >&lt;p&gt;Author:&lt;/p&gt;
{&apos;name&apos;: &apos;Suganthi Mani&apos;, &apos;username&apos;: &apos;smani87&apos;, &apos;email&apos;: &apos;suganthi.mani@mongodb.com&apos;}
&lt;p&gt;Message: &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-38356&quot; title=&quot;Forbid dropping oplog in standalone mode on storage engines that support replSetResizeOplog&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-38356&quot;&gt;&lt;del&gt;SERVER-38356&lt;/del&gt;&lt;/a&gt; Fix copydb_illegal_collections.js to not create&lt;br/&gt;
local.oplog.rs collection.&lt;br/&gt;
Branch: v4.0&lt;br/&gt;
&lt;a href=&quot;https://github.com/mongodb/mongo/commit/d4ccbcfad2b7b47593054c3319f80b9ca922e066&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/mongodb/mongo/commit/d4ccbcfad2b7b47593054c3319f80b9ca922e066&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="2402554" author="xgen-internal-githook" created="Fri, 30 Aug 2019 21:39:20 +0000"  >&lt;p&gt;Author:&lt;/p&gt;
{&apos;name&apos;: &apos;Suganthi Mani&apos;, &apos;username&apos;: &apos;smani87&apos;, &apos;email&apos;: &apos;suganthi.mani@mongodb.com&apos;}
&lt;p&gt;Message: &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-38356&quot; title=&quot;Forbid dropping oplog in standalone mode on storage engines that support replSetResizeOplog&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-38356&quot;&gt;&lt;del&gt;SERVER-38356&lt;/del&gt;&lt;/a&gt; added functionality to forbid dropping the oplog, modified tests to get around Evergreen issue&lt;/p&gt;

&lt;p&gt;(cherry picked from commit a3244d8ac0ae530e2394248e72aadb27241adba3)&lt;br/&gt;
Branch: v4.0&lt;br/&gt;
&lt;a href=&quot;https://github.com/mongodb/mongo/commit/58e4edb8237288f45f55cd8a59ea96a955489353&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/mongodb/mongo/commit/58e4edb8237288f45f55cd8a59ea96a955489353&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="2397924" author="xgen-internal-githook" created="Wed, 28 Aug 2019 15:41:59 +0000"  >&lt;p&gt;Author:&lt;/p&gt;
{&apos;name&apos;: &apos;Suganthi Mani&apos;, &apos;username&apos;: &apos;smani87&apos;, &apos;email&apos;: &apos;suganthi.mani@mongodb.com&apos;}
&lt;p&gt;Message: &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-38356&quot; title=&quot;Forbid dropping oplog in standalone mode on storage engines that support replSetResizeOplog&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-38356&quot;&gt;&lt;del&gt;SERVER-38356&lt;/del&gt;&lt;/a&gt; added functionality to forbid dropping the oplog, modified tests to get around Evergreen issue&lt;/p&gt;

&lt;p&gt;(cherry picked from commit a3244d8ac0ae530e2394248e72aadb27241adba3)&lt;br/&gt;
Branch: v4.2&lt;br/&gt;
&lt;a href=&quot;https://github.com/mongodb/mongo/commit/86584a342319393bd0cf68624f8738b94c721201&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/mongodb/mongo/commit/86584a342319393bd0cf68624f8738b94c721201&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="2315760" author="xgen-internal-githook" created="Mon, 8 Jul 2019 19:07:13 +0000"  >&lt;p&gt;Author:&lt;/p&gt;
{&apos;name&apos;: &apos;Vishnu Kaushik&apos;, &apos;username&apos;: &apos;kauboy26&apos;, &apos;email&apos;: &apos;vishnu.kaushik@mongodb.com&apos;}
&lt;p&gt;Message: &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-38356&quot; title=&quot;Forbid dropping oplog in standalone mode on storage engines that support replSetResizeOplog&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-38356&quot;&gt;&lt;del&gt;SERVER-38356&lt;/del&gt;&lt;/a&gt; added functionality to forbid dropping the oplog, modified tests to get around Evergreen issue&lt;br/&gt;
Branch: master&lt;br/&gt;
&lt;a href=&quot;https://github.com/mongodb/mongo/commit/a3244d8ac0ae530e2394248e72aadb27241adba3&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://github.com/mongodb/mongo/commit/a3244d8ac0ae530e2394248e72aadb27241adba3&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="2289866" author="judah.schvimer" created="Wed, 19 Jun 2019 14:57:19 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=suganthi.mani&quot; class=&quot;user-hover&quot; rel=&quot;suganthi.mani&quot;&gt;suganthi.mani&lt;/a&gt;, thanks for the detailed write up. I agree with it all.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt; Do we need to document this behavior?&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;I think we should file a docs ticket and let the docs team decide.&lt;/p&gt;</comment>
                            <comment id="2289041" author="suganthi.mani" created="Tue, 18 Jun 2019 20:50:21 +0000"  >&lt;p&gt;Below is a chart showing oplog drop supportability for standalone nodes if we implement the approach mentioned &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-38356?focusedCommentId=2108512&amp;amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-2108512&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;div class=&apos;table-wrap&apos;&gt;
&lt;table class=&apos;confluenceTable&apos;&gt;&lt;tbody&gt;
&lt;tr&gt;
&lt;th class=&apos;confluenceTh&apos;&gt;Version&lt;/th&gt;
&lt;th class=&apos;confluenceTh&apos;&gt;Mmapv1&lt;/th&gt;
&lt;th class=&apos;confluenceTh&apos;&gt;*WT + *EMRC false&lt;/th&gt;
&lt;th class=&apos;confluenceTh&apos;&gt;*WT + *EMRC true&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;4.0&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;Yes&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;Yes&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;4.2&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;Not Applicable&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;No&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;No&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;


&lt;p&gt;&#160;*EMRC - enableMajorityReadConcern&lt;br/&gt;
 *WT - WiredTiger.&lt;/p&gt;

&lt;p&gt;As mentioned in this&#160;&lt;a href=&quot;https://jira.mongodb.org/browse/DOCS-12230&quot; title=&quot;Manual oplog resize in 4.0 after unclean shutdown can lose committed writes&quot; class=&quot;issue-link&quot; data-issue-key=&quot;DOCS-12230&quot;&gt;&lt;del&gt;DOCS-12230&lt;/del&gt;&lt;/a&gt;, the problem is that if we allow the oplog to be dropped in order to manually resize it, entries can be missed while replaying the oplog during startup recovery, leading to data inconsistencies between nodes. Consider the case below:&lt;br/&gt;
 1) Let&apos;s say we have a 2-node replica set (Primary &amp;amp; Secondary).&lt;br/&gt;
 2) The secondary node gets killed in the middle of applying an oplog batch (i.e., an unclean shutdown). Let&apos;s assume the ops were written to the oplog but not yet applied, and that the oplog contains the entries below, all for the foo collection.&lt;/p&gt;
&lt;div class=&apos;table-wrap&apos;&gt;
&lt;table class=&apos;confluenceTable&apos;&gt;&lt;tbody&gt;
&lt;tr&gt;
&lt;th class=&apos;confluenceTh&apos;&gt;old.1 &lt;br/&gt;
 (storageRecoveryTs - EMRC true/&lt;br/&gt;
 AppliedThroughTs -EMRC false)&lt;/th&gt;
&lt;th class=&apos;confluenceTh&apos;&gt;old.2&lt;br/&gt;
 (unapplied)&lt;/th&gt;
&lt;th class=&apos;confluenceTh&apos;&gt;old.3&lt;br/&gt;
 (unapplied)&lt;/th&gt;
&lt;th class=&apos;confluenceTh&apos;&gt;old.4&lt;br/&gt;
 (unapplied)&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;{ts:1, op:&quot;i&quot;, o:{_id:1}}&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;{ts:2, op:&quot;i&quot;, o:{_id:2}}&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;{ts:3, op:&quot;i&quot;, o:{_id:3}}&lt;/td&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;{ts:4, op:&quot;i&quot;, o:{_id:4}}&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;


&lt;p&gt;3) The secondary node gets restarted as a standalone.&lt;br/&gt;
 4) As a result of the manual oplog resize, the oplog now contains only the entry below.&lt;/p&gt;
&lt;div class=&apos;table-wrap&apos;&gt;
&lt;table class=&apos;confluenceTable&apos;&gt;&lt;tbody&gt;
&lt;tr&gt;
&lt;th class=&apos;confluenceTh&apos;&gt;new.1&lt;br/&gt;
 (unapplied)&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td class=&apos;confluenceTd&apos;&gt;{ts:4, op:&quot;i&quot;, o:{_id:4}}&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;/div&gt;


&lt;p&gt;5) Restart the secondary node again with --replSet. This means for&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;&lt;b&gt;4.0 with WiredTiger Storage engine&lt;/b&gt;
	&lt;ul&gt;
		&lt;li&gt;with &lt;b&gt;EMRC=True&lt;/b&gt; (stable checkpoint), we would be &lt;b&gt;replaying oplog entries&lt;/b&gt; greater than &lt;b&gt;storage recoveryTimestamp (/stable checkpointTimestamp) to top of the oplog&lt;/b&gt;.&lt;/li&gt;
		&lt;li&gt;with &lt;b&gt;EMRC=False&lt;/b&gt; (unstable check point), we would be &lt;b&gt;replaying oplog&lt;/b&gt; &lt;b&gt;entries&lt;/b&gt; from greater than &lt;b&gt;AppliedThroughTimestamp to top of the oplog.&lt;/b&gt;&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
&lt;/ul&gt;


&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;&lt;b&gt;4.2 with WiredTiger Storage engine&lt;/b&gt;
	&lt;ul&gt;
		&lt;li&gt;Regardless of EMRC value, we would be replaying oplog entries greater than storage recoveryTimestamp (/stable checkpointTimestamp) to top of the oplog.&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;&#160; &#160; &#160; &#160;This means we would miss applying the oplog entries in slots old.2 &amp;amp; old.3 above during startup recovery. This would lead to data inconsistencies between this node and the other nodes in the replica set.&lt;/p&gt;
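The data-loss mechanics described above can be simulated in a few lines. This is an illustrative sketch only; the timestamps and entries mirror the tables in this comment, and the replay rule is the simplified "apply everything after the recovery timestamp" behavior discussed here.

```python
# Illustrative simulation: startup recovery replays oplog entries whose ts
# is greater than the storage recovery timestamp (old.1 in the tables above).

def replay_after_restart(oplog, recovery_ts):
    """Return the timestamps that startup recovery would apply."""
    return [entry["ts"] for entry in oplog if entry["ts"] > recovery_ts]

recovery_ts = 1  # old.1: stable checkpoint ts (EMRC true) / appliedThrough ts
before_resize = [{"ts": t, "op": "i", "o": {"_id": t}} for t in (1, 2, 3, 4)]
after_resize = [before_resize[-1]]  # manual resize kept only the last entry

# With the full oplog, entries 2-4 are replayed and nothing is lost.
assert replay_after_restart(before_resize, recovery_ts) == [2, 3, 4]

# After the resize only entry 4 survives: entries 2 and 3 would be skipped
# (in practice the server fasserts instead, since old.1 itself is missing).
assert replay_after_restart(after_resize, recovery_ts) == [4]
```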

&lt;p&gt;I was trying to reproduce this problem. I was expecting startup recovery (replaying entries from oplog) would be successful and I would see data inconsistency (as per &lt;a href=&quot;https://jira.mongodb.org/browse/DOCS-12230&quot; title=&quot;Manual oplog resize in 4.0 after unclean shutdown can lose committed writes&quot; class=&quot;issue-link&quot; data-issue-key=&quot;DOCS-12230&quot;&gt;&lt;del&gt;DOCS-12230&lt;/del&gt;&lt;/a&gt;). Instead, the server crashed with a &lt;a href=&quot;https://github.com/mongodb/mongo/blob/8f4b0b3817fbf48cc0025632802aec37d21946da/src/mongo/db/repl/replication_recovery.cpp#L134-L138&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;fatal assertion&lt;/a&gt; while &lt;a href=&quot;https://github.com/mongodb/mongo/blob/8f4b0b3817fbf48cc0025632802aec37d21946da/src/mongo/db/repl/replication_recovery.cpp#L370&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;trying to replay oplog entries&lt;/a&gt; during startup recovery, as the old.1 entry was missing. And it&apos;s good that we are not silently losing data.&#160; &lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=kevin.pulo&quot; class=&quot;user-hover&quot; rel=&quot;kevin.pulo&quot;&gt;kevin.pulo&lt;/a&gt;,&#160;let me know if I am missing something.&lt;/p&gt;

&lt;p&gt;&lt;b&gt;Thoughts:&lt;/b&gt;&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;Either way, whether it leads to data inconsistency or a server crash, we should fix the problem. So I would suggest &lt;b&gt;banning oplog drops in standalone mode from 4.0 onwards for the WiredTiger storage engine, regardless of the enableMajorityReadConcern value&lt;/b&gt;.
	&lt;ul&gt;
		&lt;li&gt;To implement it, on 4.0 &amp;amp; 4.2, we can just check&#160;&lt;a href=&quot;https://github.com/mongodb/mongo/blob/8f4b0b3817fbf48cc0025632802aec37d21946da/src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp#L1714&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;supportsRecoveryTimestamp()&lt;/a&gt;, which returns true for the WT storage engine regardless of the EMRC value, and false for mmapv1.&lt;/li&gt;
		&lt;li&gt;I am also going to file a storage ticket to expose a storage interface method that reports whether the&#160;replSetResizeOplog cmd is supported by that storage engine.&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
	&lt;li&gt;Since the replSetResizeOplog command is not available on mmapv1, the only way to resize the oplog there is by dropping it. This means that, for the &lt;b&gt;mmapv1 storage engine&lt;/b&gt;, it is still possible to see the above &lt;b&gt;server crash after an unclean shutdown&lt;/b&gt;, as it also replays oplog entries from the appliedThrough timestamp to the top of the oplog during startup recovery. And we are OK with that. &lt;font color=&quot;#de350b&quot;&gt;Do we need to document this behavior?&lt;/font&gt;&lt;/li&gt;
	&lt;li&gt;One more concern with the approach mentioned &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-38356?focusedCommentId=2108512&amp;amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-2108512&quot; class=&quot;external-link&quot; rel=&quot;nofollow&quot;&gt;here&lt;/a&gt; is that, for 4.0, if we have a scenario for the wiredTiger storage engine where 1) we start the node with --replSet &amp;amp; EMRC = true, 2) restart the node as standalone &amp;amp; EMRC = false, then supportsRecoverToStableTimestamp() returns &lt;a href=&quot;https://github.com/mongodb/mongo/blob/8f4b0b3817fbf48cc0025632802aec37d21946da/src/mongo/db/storage/wiredtiger/wiredtiger_kv_engine.cpp#L1708-L1710&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;false&lt;/a&gt;. This means we would be able to drop the oplog. 3) Restart the node again with --replSet &amp;amp; EMRC = true. So, on 4.0, it&apos;s better to ban the oplog drop entirely for the WT storage engine.&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Let me know if anyone has concerns about banning the oplog drop entirely for the WiredTiger storage engine (which supports the&#160;replSetResizeOplog cmd).&lt;/p&gt;</comment>
                            <comment id="2273310" author="tess.avitabile" created="Wed, 5 Jun 2019 22:53:44 +0000"  >&lt;blockquote&gt;&lt;p&gt;Is that intentional that on 4.0 for standalone nodes with enableMajorityReadConcern=false (supportsRecoverToStableTimestamp() is false) should not perform startup recovery by applying oplog entries from the recovery timestamp?&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;Good point. This behavior may not be correct if the user has just toggled enableMajorityReadConcern. On 4.0, when enableMajorityReadConcern=false, the server takes unstable checkpoints, so it should not perform startup recovery by applying oplog entries. In this case, it is correct that standalone nodes with enableMajorityReadConcern=false do not perform startup recovery by applying oplog entries. However, if the user was running with enableMajorityReadConcern=true, then restarted in standalone mode with enableMajorityReadConcern=false and recoverFromOplogAsStandalone, then it will start up from a stable checkpoint, in which case it should perform recovery by applying oplog entries. We should probably make the decision of whether to apply oplog entries when enableMajorityReadConcern=false and recoverFromOplogAsStandalone=true based on the type of checkpoint we start up from, so it sounds like this may be a bug.&lt;/p&gt;
&lt;blockquote&gt;&lt;p&gt;As far as I can tell supportsRecoverToStableTimestamp() and supportsRecoveryTimestamp() are essentially the same on 4.2 and 4.0. William Schultz or Daniel Gottlieb do you know what Tess had in mind?&lt;/p&gt;&lt;/blockquote&gt;
&lt;p&gt;We have these two predicates to distinguish between the ability to perform rollback using RTT (which we never do when enableMajorityReadConcern=false) and the ability to start up from a stable checkpoint (which we essentially always do on 4.2 when enableMajorityReadConcern=false, and we do on 4.0 when enableMajorityReadConcern=false only if the server had been shut down with enableMajorityReadConcern=true).&lt;/p&gt;</comment>
                            <comment id="2273170" author="judah.schvimer" created="Wed, 5 Jun 2019 21:12:26 +0000"  >&lt;p&gt;The concern here is that if on clean restart the node has not applied all of its oplog entries, then we do not want to allow dropping the oplog. All storage engines that allow a clean restart to not have applied all oplog entries also support the &lt;tt&gt;replSetResizeOplog&lt;/tt&gt; command, so they do not need to allow dropping the oplog. As far as I can tell &lt;tt&gt;supportsRecoverToStableTimestamp()&lt;/tt&gt; and &lt;tt&gt;supportsRecoveryTimestamp()&lt;/tt&gt; are essentially the same on 4.2 and 4.0. &lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=william.schultz&quot; class=&quot;user-hover&quot; rel=&quot;william.schultz&quot;&gt;william.schultz&lt;/a&gt; or &lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=daniel.gottlieb&quot; class=&quot;user-hover&quot; rel=&quot;daniel.gottlieb&quot;&gt;daniel.gottlieb&lt;/a&gt; do you know what Tess had in mind?&lt;/p&gt;</comment>
                            <comment id="2272933" author="suganthi.mani" created="Wed, 5 Jun 2019 19:21:39 +0000"  >&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=tess.avitabile&quot; class=&quot;user-hover&quot; rel=&quot;tess.avitabile&quot;&gt;tess.avitabile&lt;/a&gt;/&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=judah.schvimer&quot; class=&quot;user-hover&quot; rel=&quot;judah.schvimer&quot;&gt;judah.schvimer&lt;/a&gt;&#160;Just wanted to clarify on the solution for 4.0, why can&apos;t we have the same check (supportsRecoveryTimestamp() is true) as 4.2 on 4.0?&lt;/p&gt;

&lt;p&gt;And, another thing I noticed is that if a node is standalone and the server parameter&#160;recoverFromOplogAsStandalone is set to true, we perform startup recovery by applying oplog entries from the recovery timestamp, provided:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;On 4.2 if&#160;&lt;a href=&quot;https://github.com/mongodb/mongo/blob/423e78f7908f0fe7e01c1f843d983f7dfd6bef3f/src/mongo/db/repl/replication_coordinator_impl.cpp#L759&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;supportsRecoveryTimestamp()&#160; returns true&lt;/a&gt; ( This was introduced by&#160;&lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-39377&quot; title=&quot;Make efficient hot backup work with enableMajorityReadConcern=false&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-39377&quot;&gt;&lt;del&gt;SERVER-39377&lt;/del&gt;&lt;/a&gt; to support hot backups on 4.2)&lt;/li&gt;
	&lt;li&gt;On 4.0&#160;&#160;if &lt;a href=&quot;https://github.com/mongodb/mongo/blob/v4.0/src/mongo/db/repl/replication_coordinator_impl.cpp#L782&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;supportsRecoverToStableTimestamp() is true&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;Is that intentional that on 4.0 for standalone nodes with enableMajorityReadConcern=false (supportsRecoverToStableTimestamp() is false) should not perform startup recovery by applying oplog entries from the recovery timestamp?&lt;/p&gt;</comment>
                            <comment id="2108512" author="tess.avitabile" created="Tue, 8 Jan 2019 14:13:56 +0000"  >&lt;p&gt;Sounds good. We can forbid dropping local.oplog.rs on 4.0 if supportsRecoverToStableTimestamp() is true and on 4.2 if supportsRecoveryTimestamp() is true (on 4.2 with enableMajorityReadConcern=false, supportsRecoverToStableTimestamp() is false, but we still perform startup recovery by applying oplog entries from the recovery timestamp). I&apos;ll put this into the quick wins for next quarter.&lt;/p&gt;</comment>
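The rule settled on in the comment above amounts to a per-version predicate. A minimal sketch, with a hypothetical helper name (the server implements these predicates on the storage engine interface in C++):

```python
# Sketch of the per-version decision above (hypothetical function; not the
# server's actual API).

def forbid_oplog_drop(version, supports_recover_to_stable, supports_recovery_ts):
    """Should dropping local.oplog.rs be forbidden on this version?"""
    if version == "4.0":
        return supports_recover_to_stable
    if version == "4.2":
        # On 4.2 with enableMajorityReadConcern=false, recover-to-stable is
        # false, but startup recovery still applies entries from the
        # recovery timestamp, so this predicate is the right one.
        return supports_recovery_ts
    return False

# 4.2 + EMRC=false: drop is forbidden even though recover-to-stable is false.
assert forbid_oplog_drop("4.2", False, True) is True
# 4.0 mmapv1: neither predicate holds, so the drop stays allowed (it is the
# only way to resize the oplog there).
assert forbid_oplog_drop("4.0", False, False) is False
```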
                            <comment id="2108213" author="kevin.pulo@10gen.com" created="Tue, 8 Jan 2019 04:32:00 +0000"  >&lt;p&gt;The main problem with completely forbidding dropping the oplog is that it wouldn&apos;t be backportable to 4.0, because it&apos;s still the only way to resize the oplog in MMAPv1.  But this whole issue only exists for storage engines that support recovery to timestamp.  So how about we prevent dropping local.oplog.rs if &lt;tt&gt;supportsRecoverToStableTimestamp()&lt;/tt&gt; is true?&lt;/p&gt;</comment>
                            <comment id="2098732" author="asya" created="Fri, 21 Dec 2018 18:41:40 +0000"  >&lt;p&gt;Why not forbid dropping the oplog entirely?&lt;/p&gt;

&lt;p&gt;I don&apos;t see a need for force:true because if you know what you are doing you can drop it anyway.&lt;/p&gt;

&lt;p&gt;If you are converting the replica backup to a standalone you should just drop the local database which avoids any sort of inconsistency issue.&lt;/p&gt;</comment>
                            <comment id="2096778" author="kevin.pulo@10gen.com" created="Thu, 20 Dec 2018 06:34:30 +0000"  >&lt;p&gt;I&apos;m surprised by the aversion to adding &lt;tt&gt;force: true&lt;/tt&gt;.  Although &lt;tt&gt;drop&lt;/tt&gt; is a DDL command, the situation we&apos;re talking about &amp;#8212; dropping the oplog (already a special internal system collection) while in a special state (standalone after unclean shutdown) &amp;#8212; is maintenance, not a regular operation.  This is compounded by the strong potential for unexpected data loss in this situation.  There are several other maintenance commands (including within repl) which use &lt;tt&gt;force: true&lt;/tt&gt; (and have for a long time) when we want safe behavior by default, but still need to permit risky operations in rare maintenance situations:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;replSetReconfig&lt;/li&gt;
	&lt;li&gt;replSetStepDown&lt;/li&gt;
	&lt;li&gt;compact&lt;/li&gt;
	&lt;li&gt;shutdown&lt;/li&gt;
	&lt;li&gt;splitVector&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;For a startup warning to have a chance of being noticed, it would need to be a separate new warning from the existing ones, and would need to specifically call out that dropping the oplog while in this state (standalone after unclean shutdown) is likely to result in data loss, and that the supported method of resizing the oplog has changed, with a link to the relevant docs.  As previously mentioned, in addition to not being noticed, there are other failure modes for this approach, eg. a pre-existing mongo shell will not re-check startup warnings when reconnecting (I&apos;ve just filed &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-38718&quot; title=&quot;mongo shell does not re-check for startup warnings on reconnect&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-38718&quot;&gt;&lt;del&gt;SERVER-38718&lt;/del&gt;&lt;/a&gt; for this).&lt;/p&gt;</comment>
                            <comment id="2093192" author="greg.mckeon" created="Mon, 17 Dec 2018 18:44:06 +0000"  >&lt;p&gt;We&apos;re worried about adding a &quot;force&quot; parameter for only a single command - this would be inconsistent with our other DDL ops.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=arnie.listhaus&quot; class=&quot;user-hover&quot; rel=&quot;arnie.listhaus&quot;&gt;arnie.listhaus&lt;/a&gt; also suggested doing replication recovery at startup by default when in standalone mode.  We don&apos;t want to do this because it interferes with maintenance that is performed in standalone mode, such as truncating the oplog for point-in-time backups and diagnosing the cache pressure of replication recovery.&lt;/p&gt;

&lt;p&gt;Adding startup warning letting users know that they no longer need to drop the oplog to resize it is our preferred option - do you think this would be noticed enough by users to be effective, &lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=kevin.pulo&quot; class=&quot;user-hover&quot; rel=&quot;kevin.pulo&quot;&gt;kevin.pulo&lt;/a&gt; &lt;a href=&quot;https://jira.mongodb.org/secure/ViewProfile.jspa?name=arnie.listhaus&quot; class=&quot;user-hover&quot; rel=&quot;arnie.listhaus&quot;&gt;arnie.listhaus&lt;/a&gt;?&lt;/p&gt;</comment>
                            <comment id="2088041" author="kevin.pulo@10gen.com" created="Wed, 12 Dec 2018 05:25:18 +0000"  >&lt;p&gt;Ok, that&apos;s fair enough.&lt;/p&gt;

&lt;p&gt;How about instead requiring a &lt;tt&gt;force: true&lt;/tt&gt; parameter to the &lt;tt&gt;drop&lt;/tt&gt; command when in this state?  The error message could educate the admin about this issue, refer them to the docs and the replSetResizeOplog command, etc.  And if they &lt;em&gt;&lt;b&gt;really&lt;/b&gt;&lt;/em&gt; want to drop the oplog, they can re-run the drop command with &lt;tt&gt;force: true&lt;/tt&gt;.&lt;/p&gt;

&lt;p&gt;This should prevent any accidents before they actually happen, while also still allowing arbitrary maintenance in the rare cases it might be necessary, and without being a huge development burden.&lt;/p&gt;</comment>
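The force: true proposal above could behave roughly as sketched below. This is a hypothetical illustration only; the drop command's real argument handling lives in the C++ server, and the names here are invented for the sketch.

```python
# Rough sketch of the proposed behavior: refuse the risky drop by default,
# but let an operator who really means it pass force=True.

class OplogDropError(Exception):
    """Raised when dropping the oplog is refused in a risky state."""

def drop_collection(name, in_risky_standalone_state, force=False):
    if name == "local.oplog.rs" and in_risky_standalone_state and not force:
        raise OplogDropError(
            "dropping the oplog here risks losing committed writes; use "
            "replSetResizeOplog to resize it, or re-run with force: true")
    return "dropped " + name

# Without force, the drop is refused with an educational error...
try:
    drop_collection("local.oplog.rs", in_risky_standalone_state=True)
    refused = False
except OplogDropError:
    refused = True
assert refused

# ...while rare maintenance operations remain possible with force: true.
assert drop_collection("local.oplog.rs", True, force=True) == "dropped local.oplog.rs"
```

This mirrors the precedent the comment cites: safe behavior by default, with an explicit escape hatch as in replSetReconfig, replSetStepDown, compact, and shutdown.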
                            <comment id="2085913" author="greg.mckeon" created="Mon, 10 Dec 2018 18:41:00 +0000"  >&lt;p&gt;We want to enable users to do arbitrary maintenance in standalone mode, so we don&apos;t want to ban dropping the oplog.  We don&apos;t think adding a startup warning would be helpful, because it doesn&apos;t occur at the same time the user performs the drop.  If you feel strongly about the warning, let us know.&lt;/p&gt;</comment>
                    </comments>
                <issuelinks>
                            <issuelinktype id="10420">
                    <name>Backports</name>
                                            <outwardlinks description="backported by">
                                                        </outwardlinks>
                                                        </issuelinktype>
                            <issuelinktype id="10011">
                    <name>Depends</name>
                                                                <inwardlinks description="is depended on by">
                                                        </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10320">
                    <name>Documented</name>
                                                                <inwardlinks description="is documented by">
                                        <issuelink>
            <issuekey id="838475">DOCS-12863</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                            <issuelinktype id="10520">
                    <name>Problem/Incident</name>
                                            <outwardlinks description="causes">
                                                        </outwardlinks>
                                                        </issuelinktype>
                            <issuelinktype id="10012">
                    <name>Related</name>
                                            <outwardlinks description="related to">
                                        <issuelink>
            <issuekey id="634991">SERVER-38174</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="845467">SERVER-42129</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="845730">SERVER-42131</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="990418">SERVER-44440</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="641798">DOCS-12230</issuekey>
        </issuelink>
                            </outwardlinks>
                                                                <inwardlinks description="is related to">
                                        <issuelink>
            <issuekey id="805156">SERVER-41792</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="839402">TOOLS-2332</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="807006">SERVER-41818</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="1315885">SERVER-47558</issuekey>
        </issuelink>
            <issuelink>
            <issuekey id="1316232">SERVER-47567</issuekey>
        </issuelink>
                            </inwardlinks>
                                    </issuelinktype>
                    </issuelinks>
                <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                <customfield id="customfield_10050" key="com.atlassian.jira.toolkit:comments">
                        <customfieldname># Replies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>17.0</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_18555" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname># of Sprints</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>4.0</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                <customfield id="customfield_12450" key="com.atlassian.jira.plugin.system.customfieldtypes:multicheckboxes">
                        <customfieldname>Backport Requested</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="16775"><![CDATA[v4.2]]></customfieldvalue>
    <customfieldvalue key="15640"><![CDATA[v4.0]]></customfieldvalue>
    
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10011" key="com.atlassian.jira.plugin.system.customfieldtypes:radiobuttons">
                        <customfieldname>Backwards Compatibility</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10038"><![CDATA[Fully Compatible]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10055" key="com.atlassian.jira.ext.charting:firstresponsedate">
                        <customfieldname>Date of 1st Reply</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Mon, 10 Dec 2018 18:41:00 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10052" key="com.atlassian.jira.toolkit:dayslastcommented">
                        <customfieldname>Days since reply</customfieldname>
                        <customfieldvalues>
                                        3 years, 43 weeks ago
    
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_18254" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Dependencies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue><![CDATA[]]></customfieldvalue>


                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_15850" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_17052" key="com.atlassian.jira.plugin.system.customfieldtypes:textarea">
                        <customfieldname>Downstream Changes Summary</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Two things need to be documented:&lt;br/&gt;
&lt;br/&gt;
1)&lt;br/&gt;
The changes made in this ticket prevent the oplog from being dropped on a standalone node when the WiredTiger storage engine is being used (or any other storage engine that supports the replSetResizeOplog command; currently only the WiredTiger storage engine supports that command). Note that dropping the oplog is already forbidden for nodes running as part of a replica set.&lt;br/&gt;
In the past, dropping the oplog was a step in the procedure to manually resize the oplog. However, dropping the oplog has harmful side effects, so we are directing users to the replSetResizeOplog command instead.&lt;br/&gt;
(For further information please see Suganthi&amp;#39;s comment on ticket &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-38356&quot; title=&quot;Forbid dropping oplog in standalone mode on storage engines that support replSetResizeOplog&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-38356&quot;&gt;&lt;strike&gt;SERVER-38356&lt;/strike&gt;&lt;/a&gt;)&lt;br/&gt;
&lt;br/&gt;
&lt;br/&gt;
2)&lt;br/&gt;
Dropping the oplog can lead to data inconsistencies, as unapplied oplog entries can be lost. In her attempt to recreate this issue and see inconsistencies, Suganthi encountered a server crash instead due to an fassert: after an unclean shutdown on the MMAPv1 storage engine, on startup recovery the server tries to replay entries from the AppliedThroughTimestamp to the top of the oplog. It checks if the first timestamp it found matches the oplog application start point, and if not, crashes (&lt;a href=&quot;https://github.com/mongodb/mongo/blob/8f4b0b3817fbf48cc0025632802aec37d21946da/src/mongo/db/repl/replication_recovery.cpp#L134-L138&quot;&gt;https://github.com/mongodb/mongo/blob/8f4b0b3817fbf48cc0025632802aec37d21946da/src/mongo/db/repl/replication_recovery.cpp#L134-L138&lt;/a&gt;).&lt;br/&gt;
More information can be found on Suganthi&amp;#39;s comment on ticket &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-38356&quot; title=&quot;Forbid dropping oplog in standalone mode on storage engines that support replSetResizeOplog&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-38356&quot;&gt;&lt;strike&gt;SERVER-38356&lt;/strike&gt;&lt;/a&gt;.</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_17050" key="com.atlassian.jira.plugin.system.customfieldtypes:radiobuttons">
                        <customfieldname>Downstream Team Attention</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="16942"><![CDATA[Needed]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10857" key="com.pyxis.greenhopper.jira:gh-epic-link">
                        <customfieldname>Epic Link</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>PM-1335</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                <customfield id="customfield_10057" key="com.atlassian.jira.toolkit:lastusercommented">
                        <customfieldname>Last comment by Customer</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>true</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10056" key="com.atlassian.jira.toolkit:lastupdaterorcommenter">
                        <customfieldname>Last commenter</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>luke.bonanomi@mongodb.com</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_11151" key="com.atlassian.jira.toolkit:LastCommentDate">
                        <customfieldname>Last public comment date</customfieldname>
                        <customfieldvalues>
                            3 years, 43 weeks ago
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_16465" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Linked BF Score</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>47.0</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10051" key="com.atlassian.jira.toolkit:participants">
                        <customfieldname>Participants</customfieldname>
                        <customfieldvalues>
                                        <customfieldvalue>asya.kamsky@mongodb.com</customfieldvalue>
            <customfieldvalue>xgen-internal-githook</customfieldvalue>
            <customfieldvalue>greg.mckeon@mongodb.com</customfieldvalue>
            <customfieldvalue>judah.schvimer@mongodb.com</customfieldvalue>
            <customfieldvalue>kevin.pulo@mongodb.com</customfieldvalue>
            <customfieldvalue>suganthi.mani@mongodb.com</customfieldvalue>
            <customfieldvalue>tess.avitabile@mongodb.com</customfieldvalue>
            <customfieldvalue>vishnu.kaushik@mongodb.com</customfieldvalue>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_14254" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Product Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hue8hr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                <customfield id="customfield_12550" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>2|hr8idj:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10558" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>9223372036854775807</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_23361" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Requested By</customfieldname>
                        <customfieldvalues>
                                

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_10557" key="com.pyxis.greenhopper.jira:gh-sprint">
                        <customfieldname>Sprint</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue id="2999">Repl 2019-06-03</customfieldvalue>
    <customfieldvalue id="3000">Repl 2019-06-17</customfieldvalue>
    <customfieldvalue id="3001">Repl 2019-07-01</customfieldvalue>
    <customfieldvalue id="3026">Repl 2019-07-15</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                    <customfield id="customfield_17051" key="com.atlassian.jira.plugin.system.customfieldtypes:multicheckboxes">
                        <customfieldname>Teams Impacted</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="16944"><![CDATA[Docs]]></customfieldvalue>
    <customfieldvalue key="16946"><![CDATA[Triage and Release]]></customfieldvalue>
    
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10053" key="com.atlassian.jira.ext.charting:timeinstatus">
                        <customfieldname>Time In Status</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_22870" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Triagers</customfieldname>
                        <customfieldvalues>
                                

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_14350" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>serverRank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hudur3:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                    </customfields>
    </item>
</channel>
</rss>