<!-- 
RSS generated by JIRA (9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66) at Thu Feb 08 03:03:09 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary, append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92">
<channel>
    <title>MongoDB Jira</title>
    <link>https://jira.mongodb.org</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.7.1</version>
        <build-number>970001</build-number>
        <build-date>13-04-2023</build-date>
    </build-info>


<item>
            <title>[SERVER-3468] Very high cpu usage</title>
                <link>https://jira.mongodb.org/browse/SERVER-3468</link>
                <project id="10000" key="SERVER">Core Server</project>
                    <description>&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;After some time we start seeing mongostat ticks with very high (sometimes &amp;gt;100) lock %&lt;/li&gt;
	&lt;li&gt;Throughput falls, cpu skyrockets and io stays mostly idle&lt;/li&gt;
	&lt;li&gt;Machine becomes very unresponsive&lt;/li&gt;
	&lt;li&gt;The database never totally stops processing requests but is extremely slow even to call serverStatus()&lt;/li&gt;
	&lt;li&gt;Calling db.runCommand({closeAllDatabases:1}); resolves the problem (see the example after this list)&lt;/li&gt;
&lt;/ul&gt;
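
&lt;p&gt;For reference, one way to run the workaround from a shell (this assumes the mongo shell connects to the admin database on the default localhost port; adjust host and auth options as needed):&lt;/p&gt;

&lt;p&gt;# issue the workaround via the mongo shell; closeAllDatabases must run against admin&lt;br/&gt;
mongo admin --eval &quot;printjson(db.runCommand({closeAllDatabases:1}))&quot;&lt;/p&gt;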


&lt;p&gt;We have not had much luck isolating the cause of this behaviour, but it does appear to be triggered by heavy reads; we have tested totally saturating the server with writes without this happening&lt;/p&gt;

&lt;p&gt;The timeline that most recently triggered this problem is:&lt;/p&gt;
&lt;ul class=&quot;alternate&quot; type=&quot;square&quot;&gt;
	&lt;li&gt;Two slaves doing their initial sync&lt;/li&gt;
	&lt;li&gt;Moderate read/write load (&amp;lt;1 MB/s)&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;We have tried running with numactl --interleave=all without any change (running with it on in this example; the invocation is shown below)&lt;/p&gt;
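
&lt;p&gt;For reference, the combined invocation looks like the following (the mongod flags are the ones from the environment section; the replica set name is ours and is omitted here):&lt;/p&gt;

&lt;p&gt;# interleave mongod's memory across both NUMA nodes at startup&lt;br/&gt;
numactl --interleave=all mongod --journal --replSet &amp;lt;set name&amp;gt;&lt;/p&gt;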

&lt;p&gt;#################################################################&lt;br/&gt;
During &quot;incident&quot;:&lt;/p&gt;

&lt;p&gt;Mongostat:&lt;br/&gt;
insert  query update delete getmore command flushes mapped  vsize    res faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn      set repl       time&lt;br/&gt;
     0      0      0      0       0       6       0   191g   382g  11.6g      0       24          0       2|1     1|2   632b     3k    28 test    M   15:47:53&lt;br/&gt;
    18      4      2      0       0      10       0   191g   382g  11.6g      1        0          0       2|3     1|3     4k     4k    28 test    M   15:47:57&lt;br/&gt;
     1      8      2      0       3       8       1   191g   382g  11.6g      1      231          0       5|0     4|1     1k    30k    28 test    M   15:48:01&lt;br/&gt;
    23      1      2      0       2       8       0   191g   382g  11.6g     10     99.5          0       1|0     1|1     3k     4k    28 test    M   15:48:05&lt;br/&gt;
     1      0      1      0       0       5       0   191g   382g  11.6g      0        0          0       2|3     1|3     1k     3k    28 test    M   15:48:08&lt;br/&gt;
     0      1      0      0       0       9       0   191g   382g  11.6g      0      241          0       3|1     2|2     1k     4k    28 test    M   15:48:14&lt;br/&gt;
     1      1      0      0       0       7       0   191g   382g  11.6g      0        0          0       3|2     2|2   931b     3k    28 test    M   15:48:17&lt;br/&gt;
     1      3      2      0       2       6       0   191g   382g  11.6g      2      264          0       1|0     1|1     1k    28k    28 test    M   15:48:21&lt;br/&gt;
    26      1      1      0      11       5       0   191g   382g  11.6g     65     75.6          0       0|0     1|0     4k     2k    28 test    M   15:48:24&lt;br/&gt;
     0      9      3      0       7      19       0   191g   382g  11.7g     40     22.2          0       0|0     1|0     3k     9k    28 test    M   15:48:39&lt;br/&gt;
     0      0      0      0       0       8       0   191g   382g  11.7g      0        0          0       3|3     2|3   759b     3k    28 test    M   15:48:44&lt;br/&gt;
     3      2      1      0       0       5       0   191g   382g  11.7g     27      174          0       4|0     4|0     2k     2k    28 test    M   15:48:47&lt;br/&gt;
     0      1      1      0       6       4       0   191g   382g  11.7g    546     59.7          0       0|0     1|0   963b   391k    28 test    M   15:48:48&lt;br/&gt;
     0      0      0      0       7       3       0   191g   382g  11.8g    425        0          0       0|0     2|0   304b     4m    28 test    M   15:48:49&lt;br/&gt;
     0      1      1      0       4       4       0   191g   382g  11.8g    250        0          0       0|0     1|0   489b     3k    28 test    M   15:48:50&lt;br/&gt;
     3      0      1      0       0       3       0   191g   382g  11.8g    372        0          0       0|0     1|0     1k     1k    28 test    M   15:48:51&lt;br/&gt;
     0      1      2      0       1       7       0   191g   382g  11.8g    121        0          0       0|1     1|1   821b     3k    28 test    M   15:48:56&lt;br/&gt;
     6      5      2      0       2      15       1   191g   382g  11.8g     78        0          0       0|0     1|0     2k    30k    28 test    M   15:49:06&lt;br/&gt;
    11      4      1      0       2       6       0   191g   382g  11.8g     42      3.7          0       0|1     1|1     2k    28k    28 test    M   15:49:07&lt;/p&gt;


&lt;p&gt;iostat&lt;/p&gt;

&lt;p&gt;avg-cpu:  %user   %nice %system %iowait  %steal   %idle&lt;br/&gt;
           0.00    0.00    7.66    0.00    0.00   92.34&lt;/p&gt;

&lt;p&gt;Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn&lt;br/&gt;
sda               5.50       102.00         0.00        204          0&lt;br/&gt;
sdb               0.00         0.00         0.00          0          0&lt;br/&gt;
sdc               0.00         0.00         0.00          0          0&lt;br/&gt;
sdd               4.00         0.00      2048.00          0       4096&lt;br/&gt;
sde               0.00         0.00         0.00          0          0&lt;br/&gt;
sdf               1.00         0.00         8.00          0         16&lt;br/&gt;
dm-0              0.00         0.00         0.00          0          0&lt;/p&gt;


&lt;p&gt;vmstat&lt;/p&gt;

&lt;p&gt;procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----&lt;br/&gt;
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa&lt;br/&gt;
 1  0     88  77448  21336 23876708    0    0   768   189  120   87  0  6 94  1&lt;br/&gt;
 3  0     88  76748  21352 23876740    0    0   162  1090  672  833  0 14 86  0&lt;br/&gt;
 1  0     88  76696  21352 23876628    0    0     0  2164  634  738  0  5 95  0&lt;br/&gt;
 1  0     88  76952  21360 23876860    0    0     0  2084  666  834  0  9 91  0&lt;br/&gt;
 1  0     88  77200  21360 23876860    0    0     0  1034  591  725  0 10 90  0&lt;br/&gt;
 1  0     88  77448  21372 23876852    0    0     0  2062  669  826  0 10 90  0&lt;br/&gt;
 1  0     88  77572  21384 23876848    0    0    64  2164  718  920  0  5 95  0&lt;br/&gt;
 1  0     88  77572  21384 23876852    0    0     0  1028  550  655  0  7 93  0&lt;br/&gt;
 1  0     88  78044  21392 23876988    0    0     0  2056  568  686  0  6 94  0&lt;br/&gt;
 1  0     88  77392  21392 23876908    0    0   236  1038  719  882  0 10 89  0&lt;br/&gt;
 2  0     88  77408  21392 23876904    0    0     0  2048  600  706  0  8 92  0&lt;br/&gt;
 2  0     88  77536  21392 23876908    0    0     0  2054  606  703  0  6 94  0&lt;br/&gt;
 1  0     88  77872  21408 23876912    0    0     0  1050  657  749  0 12 87  0&lt;/p&gt;

&lt;p&gt;Log file shows lots of:&lt;br/&gt;
serverStatus was very slow: { after basic: 0, middle of mem: 1990, after mem: 1990, after connections: 1990, after extra info: 2550, after counters: 2550, after repl: 2550, after asserts: 2550 }&lt;/p&gt;


&lt;p&gt;#################################################################&lt;br/&gt;
After calling closeAllDatabases:&lt;/p&gt;

&lt;p&gt;insert  query update delete getmore command flushes mapped  vsize    res faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn      set repl       time&lt;br/&gt;
     0    152      1      0       0       5       0   191g   382g   108m      0        0          0       0|0     0|0    19k   399k    21 test    M   15:51:59&lt;br/&gt;
     0    152      1      0       0       7       0   191g   382g   108m      0        0          0       0|0     0|0    19k   393k    21 test    M   15:52:00&lt;br/&gt;
     0    294      0      0       0       6       0   191g   382g   108m      0        0          0       0|0     0|0    37k   788k    21 test    M   15:52:01&lt;br/&gt;
    29    172      0      0       2       7       0   191g   382g   108m      0        0          0       0|0     0|0    30k   516k    21 test    M   15:52:02&lt;br/&gt;
    11    180      0      0       0       6       0   191g   382g   108m      1      0.5          0       0|0     0|0    23k   431k    21 test    M   15:52:03&lt;br/&gt;
     0    193      1      0       0       9       0   191g   382g   108m      0        0          0       0|0     0|0    29k   515k    21 test    M   15:52:04&lt;br/&gt;
     0    192      1      0       0       7       0   191g   382g   108m      0        0          0       0|0     0|0    29k   519k    21 test    M   15:52:05&lt;br/&gt;
     0    260      0      0       0       8       0   191g   382g   108m      0        0          0       0|0     0|0    34k   743k    21 test    M   15:52:06&lt;br/&gt;
     0    221      0      0       0       6       0   191g   382g   108m      0        0          0       0|0     0|0    31k   537k    21 test    M   15:52:07&lt;br/&gt;
     0    179      1      0       0       6       0   191g   382g   108m      0        0          0       0|0     0|0    28k   508k    21 test    M   15:52:08&lt;/p&gt;

&lt;p&gt;CPU returns to mostly idle with this load&lt;/p&gt;</description>
                <environment>Hardware:&lt;br/&gt;
2x quad-core 2.4 GHz Xeon (Hyperthreaded)&lt;br/&gt;
24 GB RAM in 2 banks (this is a NUMA system)&lt;br/&gt;
SSD storage&lt;br/&gt;
&lt;br/&gt;
We have tried the following OSes:&lt;br/&gt;
Windows 2008 R2&lt;br/&gt;
Ubuntu server 11.04 on Hyper-V&lt;br/&gt;
Ubuntu server 11.04&lt;br/&gt;
&lt;br/&gt;
mongod --journal --replSet </environment>
        <key id="20050">SERVER-3468</key>
            <summary>Very high cpu usage</summary>
                <type id="1" iconUrl="https://jira.mongodb.org/secure/viewavatar?size=xsmall&amp;avatarId=14703&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.mongodb.org/images/icons/priorities/major.svg">Major - P3</priority>
                        <status id="6" iconUrl="https://jira.mongodb.org/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="9">Done</resolution>
                                        <assignee username="mathias@mongodb.com">Mathias Stearn</assignee>
                                    <reporter username="braden">Braden Evans</reporter>
                        <labels>
                    </labels>
                <created>Fri, 22 Jul 2011 23:20:41 +0000</created>
                <updated>Tue, 12 Jul 2016 00:19:57 +0000</updated>
                            <resolved>Wed, 27 Jul 2011 21:36:01 +0000</resolved>
                                                                                        <votes>0</votes>
                                    <watches>2</watches>
                                                                                                                <comments>
                            <comment id="44955" author="eliot" created="Wed, 27 Jul 2011 21:35:57 +0000"  >&lt;p&gt;Got it.&lt;br/&gt;
See: &lt;a href=&quot;https://jira.mongodb.org/browse/SERVER-3497&quot; title=&quot;check for  /proc/sys/vm/zone_reclaim_mode at startup&quot; class=&quot;issue-link&quot; data-issue-key=&quot;SERVER-3497&quot;&gt;&lt;del&gt;SERVER-3497&lt;/del&gt;&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="44841" author="braden" created="Wed, 27 Jul 2011 16:24:37 +0000"  >&lt;p&gt;Setting zone_reclaim_mode should be unnecessary if kernel support for NUMA is fully disabled, basically we just made sure it was off. The kernel will automatically set it to 1 if it detects a NUMA machine where the cost to communicate with a non-local node is under a specific threshold.&lt;/p&gt;</comment>
                            <comment id="44830" author="eliot" created="Wed, 27 Jul 2011 16:08:13 +0000"  >&lt;p&gt;Interesting - did you test each independently? &lt;/p&gt;</comment>
                            <comment id="44826" author="braden" created="Wed, 27 Jul 2011 16:05:20 +0000"  >&lt;p&gt;We might have solved the problem, the server has been up for over 12 hours and performance is still nominal, all CPUs are enabled and so is HyperThreading.&lt;/p&gt;

&lt;p&gt;Solution: We disabled kernel NUMA support at boot time, effectively forcing memory allocation to be interleaved between nodes for all processes, and set /proc/sys/vm/zone_reclaim_mode to zero. One way to apply both settings is sketched below.&lt;/p&gt;
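
&lt;p&gt;(A sketch of the two settings; numa=off is the standard x86 kernel boot parameter for disabling NUMA awareness, though the exact bootloader mechanics will vary:)&lt;/p&gt;

&lt;p&gt;# kernel command line (e.g. appended to the GRUB kernel line): disable NUMA awareness&lt;br/&gt;
numa=off&lt;br/&gt;
&lt;br/&gt;
# at runtime, keep zone reclaim disabled&lt;br/&gt;
echo 0 &amp;gt; /proc/sys/vm/zone_reclaim_mode&lt;/p&gt;</comment>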
                            <comment id="44562" author="braden" created="Tue, 26 Jul 2011 17:34:15 +0000"  >&lt;p&gt;processor       : 0&lt;br/&gt;
vendor_id       : GenuineIntel&lt;br/&gt;
cpu family      : 6&lt;br/&gt;
model           : 44&lt;br/&gt;
model name      : Intel(R) Xeon(R) CPU           E5620  @ 2.40GHz&lt;br/&gt;
stepping        : 2&lt;br/&gt;
cpu MHz         : 2401.000&lt;br/&gt;
cache size      : 12288 KB&lt;br/&gt;
physical id     : 0&lt;br/&gt;
siblings        : 4&lt;br/&gt;
core id         : 0&lt;br/&gt;
cpu cores       : 4&lt;br/&gt;
apicid          : 0&lt;br/&gt;
initial apicid  : 0&lt;br/&gt;
fpu             : yes&lt;br/&gt;
fpu_exception   : yes&lt;br/&gt;
cpuid level     : 11&lt;br/&gt;
wp              : yes&lt;br/&gt;
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc&lt;br/&gt;
 aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida arat epb dts tpr_shadow vnmi flexpriority ept vpid&lt;br/&gt;
bogomips        : 4800.39&lt;br/&gt;
clflush size    : 64&lt;br/&gt;
cache_alignment : 64&lt;br/&gt;
address sizes   : 40 bits physical, 48 bits virtual&lt;br/&gt;
power management:&lt;/p&gt;


&lt;p&gt;+7 more identical cores&lt;/p&gt;


&lt;p&gt;We are going to pull a processor out of the machine and test again&lt;/p&gt;</comment>
                            <comment id="44557" author="eliot" created="Tue, 26 Jul 2011 17:26:04 +0000"  >&lt;p&gt;Its probably flushing the kernel mapping structures, which is why i think its a numa type issue.&lt;/p&gt;

&lt;p&gt;Can you send /proc/cpuinfo&lt;/p&gt;</comment>
                            <comment id="44554" author="braden" created="Tue, 26 Jul 2011 17:14:23 +0000"  >&lt;p&gt;I think that the best clue we have right now is that calling db.runCommand(&lt;/p&gt;
{closeAllDatabases:1}
&lt;p&gt;) immediately restores performance to normal levels. Any ideas?&lt;/p&gt;</comment>
                            <comment id="44553" author="braden" created="Tue, 26 Jul 2011 17:12:50 +0000"  >&lt;p&gt;Yes, we have tried it with on and off, no difference that I could observe.&lt;/p&gt;</comment>
                            <comment id="44552" author="eliot" created="Tue, 26 Jul 2011 17:10:51 +0000"  >&lt;p&gt;You tried interleaved with hyper threading off?&lt;/p&gt;</comment>
                            <comment id="44551" author="braden" created="Tue, 26 Jul 2011 17:07:08 +0000"  >&lt;p&gt;Hi Eliot, it is already turned off, we have 8 hardware cores, 16 logical cores with HT enabled.&lt;/p&gt;</comment>
                            <comment id="44550" author="eliot" created="Tue, 26 Jul 2011 16:59:25 +0000"  >&lt;p&gt;Can you try turning off hyper threading?&lt;/p&gt;

&lt;p&gt;I just saw a very similar situation and turning off hyper threading fixed it.&lt;/p&gt;</comment>
                            <comment id="44549" author="braden" created="Tue, 26 Jul 2011 16:54:20 +0000"  >&lt;p&gt;Summary of the situation so far:&lt;/p&gt;

&lt;p&gt;0. Mongod is running with numactl --interleave=all and hyper-threading is disabled&lt;br/&gt;
1. Machine is idle, we start a mongodump, no other operations are running in the background&lt;br/&gt;
2. Dump runs at normal speed for 10 to 15 minutes, disk and CPU usage are high but performance is excellent&lt;br/&gt;
3. After 10 to 15 minutes, most disk activity ceases, CPU usage remains high, and the mongodump process slows to a crawl - this is the &quot;degraded&quot; state&lt;br/&gt;
4. At this point, we can stop the dump; CPU activity will remain high and disk activity will remain minimal&lt;br/&gt;
5. While in this degraded state, all database operations are very slow, &lt;b&gt;even hours after mongodump was stopped&lt;/b&gt;. Even calls to serverStatus() are incredibly slow&lt;br/&gt;
6. Mongod will remain in this degraded state for several hours or until we run db.runCommand({closeAllDatabases:1});&lt;br/&gt;
7. db.runCommand({closeAllDatabases:1}) returns in 1-2 seconds, and performance is immediately restored to normal levels&lt;br/&gt;
8. If we run mongod without numactl, the database will perform flawlessly for hours (instead of 15 minutes) but will eventually degrade&lt;br/&gt;
9. In the degraded state, mongod is locking up a CPU core; iostat reveals very little disk activity, and vmstat shows that the system is not waiting for IO or in a distressed state&lt;/p&gt;</comment>
                            <comment id="44548" author="braden" created="Tue, 26 Jul 2011 16:50:48 +0000"  >&lt;p&gt;Hi Mathias, here is the output fron numactl --hardware:&lt;/p&gt;

&lt;p&gt;available: 2 nodes (0-1)&lt;br/&gt;
node 0 cpus: 0 1 2 3&lt;br/&gt;
node 0 size: 12279 MB&lt;br/&gt;
node 0 free: 64 MB&lt;br/&gt;
node 1 cpus: 4 5 6 7&lt;br/&gt;
node 1 size: 12288 MB&lt;br/&gt;
node 1 free: 12 MB&lt;br/&gt;
node distances:&lt;br/&gt;
node   0   1&lt;br/&gt;
  0:  10  21&lt;br/&gt;
  1:  21  10&lt;/p&gt;

&lt;p&gt;Unfortunately, I found mongod to be in the degraded state early this morning after performing flawlessly for hours. &lt;/p&gt;

&lt;p&gt;Right now the database is up, albeit running very slowly and locking up a core at 100%; the machine is otherwise idle. Iostat reveals very little disk activity, and vmstat reveals that the machine is not waiting for IO or in a distressed state.&lt;/p&gt;

&lt;p&gt;Interestingly, calling db.runCommand({closeAllDatabases:1}) immediately restores database performance to normal levels.&lt;/p&gt;</comment>
                            <comment id="44476" author="redbeard0531" created="Tue, 26 Jul 2011 11:28:25 +0000"  >&lt;p&gt;Could you send the output of numactl --hardware both while seeing the issue and with it solved?&lt;/p&gt;</comment>
                            <comment id="44429" author="eliot" created="Tue, 26 Jul 2011 04:50:39 +0000"  >&lt;p&gt;Mathias - we should try and figure out why numactl was hurting in this case.&lt;/p&gt;</comment>
                            <comment id="44397" author="braden" created="Mon, 25 Jul 2011 22:14:44 +0000"  >&lt;p&gt;Hi Eliot, we have identified the issue. The problems is caused by running mongod with &quot;numactl --interleave=all&quot; as per instructed by this message logged during the server startup:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;
	&lt;ul&gt;
		&lt;li&gt;WARNING: You are running on a NUMA machine.&lt;/li&gt;
		&lt;li&gt;We suggest launching mongod like this to avoid performance problems:&lt;/li&gt;
		&lt;li&gt;numactl --interleave=all mongod &lt;span class=&quot;error&quot;&gt;&amp;#91;other options&amp;#93;&lt;/span&gt;&lt;/li&gt;
	&lt;/ul&gt;
	&lt;/li&gt;
&lt;/ul&gt;


&lt;p&gt;As it turns out, this recommendation causes performance problems instead of solving them. Running mongod without numactl solves all the issues we were experiencing.&lt;/p&gt;

&lt;p&gt;Thank you for your help Eliot, we really appreciate it!&lt;/p&gt;</comment>
                            <comment id="44386" author="braden" created="Mon, 25 Jul 2011 21:24:06 +0000"  >&lt;p&gt;Hi Eliot,&lt;/p&gt;

&lt;p&gt;As per your suggestion, I just ran the 1.9 nightly mongodump with --forceTableScan and no other command line options. Same as before, we experienced performance degradation 10-15 minutes after the dump started. According to vmstat (below), the system is not under memory pressure or waiting for IO. Mongostat (also below) shows that the database is still serving requests, albeit at a much reduced pace.&lt;/p&gt;

&lt;p&gt;procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----&lt;br/&gt;
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa&lt;br/&gt;
 1  0   2684  81620  31768 23707820    0    0    48     0 1075 1284  0 13 87  0&lt;br/&gt;
 1  0   2684  81620  31768 23707908    0    0     0     0  584  790  0 14 86  0&lt;br/&gt;
 1  0   2684  81620  31768 23707888    0    0    28     0  603  845  0 12 88  0&lt;br/&gt;
 1  0   2684  81496  31768 23707916    0    0     0     0  613  878  0 12 88  0&lt;br/&gt;
 1  0   2684  81372  31768 23707916    0    0     0     0  543  776  0 12 88  0&lt;br/&gt;
 1  0   2684  81372  31768 23707916    0    0     0     0 1037 1191  0 12 87  0&lt;br/&gt;
 1  0   2684  81372  31776 23707912    0    0     0    12  675  917  0 14 81  6&lt;br/&gt;
 1  0   2684  81496  31776 23707916    0    0     0     0  649  879  0 12 88  0&lt;br/&gt;
 2  0   2684  81496  31776 23707916    0    0     0     0  569  756  0 13 87  0&lt;br/&gt;
 1  0   2684  81496  31784 23707916    0    0     0    32  579  804  0 13 87  0&lt;br/&gt;
 1  0   2684  81620  31784 23707916    0    0     0     0  520  803  0 14 86  0&lt;br/&gt;
 1  0   2684  81620  31784 23707916    0    0     0     0  535  747  0 12 88  0&lt;br/&gt;
 1  0   2684  81620  31784 23707916    0    0     0     0  572  780  0 13 87  0&lt;br/&gt;
 1  0   2684  78124  31784 23708784    0    0     0     0 1547 1775  0 13 87  0&lt;br/&gt;
 1  0   2684  78124  31792 23708784    0    0     0    16 1603 1803  0 13 87  0&lt;br/&gt;
 1  0   2684  78248  31792 23708784    0    0     0     0 1586 1767  0 13 87  0&lt;br/&gt;
 1  0   2684  78124  31792 23708784    0    0     0     0 1576 1853  0 13 87  0&lt;br/&gt;
 1  0   2684  77876  31816 23708760    0    0     0    48 1582 1773  0 13 86  1&lt;br/&gt;
 1  0   2684  77876  31816 23708784    0    0     0     0 1601 1756  0 13 87  0&lt;br/&gt;
 1  0   2684  78000  31816 23708396    0    0     0     0 1681 1836  0 12 87  0&lt;br/&gt;
 1  0   2684  78000  31816 23708324    0    0     0     4 1723 1835  0 13 87  0&lt;br/&gt;
 1  0   2684  88664  31816 23708324    0    0     0    16 1722 1927  0 13 87  0&lt;br/&gt;
 1  0   2684  88424  31816 23708300    0    0     0     0 1718 1867  0 13 87  0&lt;/p&gt;

&lt;p&gt;insert  query update delete getmore command flushes mapped  vsize    res faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn      set repl       time&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   375b     6k     7 wireclub    M   14:12:53&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   171b     3k     7 wireclub    M   14:12:54&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:12:55&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   14:12:56&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:12:57&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   14:12:58&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:12:59&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   14:13:00&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:13:01&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   14:13:02&lt;br/&gt;
insert  query update delete getmore command flushes mapped  vsize    res faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn      set repl       time&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:13:03&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   14:13:04&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   375b     6k     7 wireclub    M   14:13:05&lt;br/&gt;
     0      0      0      0       0       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   124b     3k     7 wireclub    M   14:13:06&lt;br/&gt;
     0      0      0      0       1       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   360b     4k     7 wireclub    M   14:13:07&lt;br/&gt;
     0      0      0      0       0       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   186b     4k     7 wireclub    M   14:13:08&lt;br/&gt;
     0      0      0      0       1       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   360b     4k     7 wireclub    M   14:13:09&lt;br/&gt;
     0      0      0      0       0       2       0   191g   382g  18.6g      0        0          0       0|0     1|0   124b     3k     7 wireclub    M   14:13:10&lt;br/&gt;
     0      0      0      0       1       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   360b     4k     7 wireclub    M   14:13:11&lt;br/&gt;
     0      0      0      0       0       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   186b     4k     7 wireclub    M   14:13:12&lt;br/&gt;
insert  query update delete getmore command flushes mapped  vsize    res faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn      set repl       time&lt;br/&gt;
     0      0      0      0       1       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   360b     4k     7 wireclub    M   14:13:13&lt;br/&gt;
     0      0      0      0       0       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   186b     4k     7 wireclub    M   14:13:14&lt;br/&gt;
     0      0      0      0       1       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   422b     6k     7 wireclub    M   14:13:15&lt;br/&gt;
     0      0      0      0       0       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   124b     3k     7 wireclub    M   14:13:16&lt;br/&gt;
     0      0      0      0       2       4       0   191g   382g  18.6g      0        0          0       0|0     2|0   360b     4k     7 wireclub    M   14:13:17&lt;br/&gt;
     0      0      0      0       5       3       0   191g   382g  18.6g     71        0          0       0|0     1|0   186b     4k     7 wireclub    M   14:13:18&lt;br/&gt;
     0      0      0      0       1       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   360b     4k     7 wireclub    M   14:13:19&lt;br/&gt;
     0      0      0      0       0       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   186b     4k     7 wireclub    M   14:13:20&lt;br/&gt;
     0      0      0      0       1       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   360b     4k     7 wireclub    M   14:13:21&lt;br/&gt;
     0      0      0      0       0       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   186b     4k     7 wireclub    M   14:13:22&lt;br/&gt;
insert  query update delete getmore command flushes mapped  vsize    res faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn      set repl       time&lt;br/&gt;
     0      0      0      0       1       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   360b     4k     7 wireclub    M   14:13:23&lt;br/&gt;
     0      0      0      0       0       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   186b     4k     7 wireclub    M   14:13:24&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:13:25&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   14:13:26&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:13:27&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   14:13:28&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:13:29&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   14:13:30&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:13:31&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   14:13:32&lt;br/&gt;
insert  query update delete getmore command flushes mapped  vsize    res faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn      set repl       time&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:13:33&lt;br/&gt;
     0      0      0      0       2       5       0   191g   382g  18.6g      2        0          0       0|0     2|0   425b     3k     7 wireclub    M   14:13:38&lt;br/&gt;
     0      0      0      0       1       8       0   191g   382g  18.6g      0        0          0       0|0     2|0   930b     7k     7 wireclub    M   14:13:46&lt;br/&gt;
     0      0      0      0       5       2       1   191g   382g  18.6g     86        0          0       0|0     1|0   186b     4k     7 wireclub    M   14:13:47&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:13:48&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   14:13:49&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:13:51&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  18.6g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   14:13:52&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  18.6g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   14:13:53&lt;/p&gt;</comment>
                            <comment id="44378" author="eliot" created="Mon, 25 Jul 2011 20:46:47 +0000"  >&lt;p&gt;Can you try a mongodump from the 1.9 nightly (using whatever server version you have) and use --forceTableScan?&lt;/p&gt;

&lt;p&gt;If that behaves differently, that will tell us a lot.&lt;/p&gt;

&lt;p&gt;Also, what command line options are you passing if any?&lt;/p&gt;</comment>
                            <comment id="44325" author="braden" created="Mon, 25 Jul 2011 16:50:25 +0000"  >&lt;p&gt;Hi Eliot, it seems to be transferring albeit very slowly. &lt;/p&gt;

&lt;p&gt;Here is a play-by-play of what we are observing:&lt;/p&gt;

&lt;p&gt;1. Machine is idle, we start a mongodump, no other operations are running in the background&lt;br/&gt;
2. Dump runs at normal speed for 10 to 15 minutes, disk and CPU usage are high but performance is excellent&lt;br/&gt;
3. After 10 to 15 minutes, most disk activity ceases, CPU usage remains high, and the mongodump process slows to a crawl - this is the &quot;degraded&quot; state&lt;br/&gt;
4. At this point, we can stop the dump; CPU activity will remain high and disk activity will remain minimal&lt;br/&gt;
5. While in this degraded state, all database operations are very slow, even hours after mongodump is stopped. Even calls to serverStatus() are incredibly slow&lt;br/&gt;
6. Mongod will remain in this degraded state for several hours or until we run db.runCommand({closeAllDatabases:1});&lt;br/&gt;
7. db.runCommand({closeAllDatabases:1}) returns in 1-2 seconds, and performance is immediately restored to normal levels&lt;/p&gt;

&lt;p&gt;There are two crucial questions that, if answered, could lead to a better understanding of this problem:&lt;br/&gt;
1. What would be keeping mongod busy for hours after the mongodump stopped? (no other calls were being made)&lt;br/&gt;
2. Why does closing all the databases fix it?&lt;/p&gt;

&lt;p&gt;SSDs: sda, sdb, sdc, sde, sdf&lt;br/&gt;
HDD: sdd&lt;/p&gt;

&lt;p&gt;Below: performance information captured while MongoDB was in the degraded state, 15 minutes after I started a mongodump&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;iostat 10 -x&lt;/p&gt;

&lt;p&gt;Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util&lt;br/&gt;
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdd               0.00     0.00    0.00    4.00     0.00    16.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sde               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdf               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;/p&gt;

&lt;p&gt;avg-cpu:  %user   %nice %system %iowait  %steal   %idle&lt;br/&gt;
           0.36    0.00   12.17    0.00    0.00   87.47&lt;/p&gt;

&lt;p&gt;Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util&lt;br/&gt;
sda               0.00     9.00    0.00   31.00     0.00   160.00    10.32     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sde               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdf               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;/p&gt;

&lt;p&gt;avg-cpu:  %user   %nice %system %iowait  %steal   %idle&lt;br/&gt;
           0.37    0.00   12.39    0.00    0.00   87.24&lt;/p&gt;

&lt;p&gt;Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util&lt;br/&gt;
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sde               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdf               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;/p&gt;

&lt;p&gt;avg-cpu:  %user   %nice %system %iowait  %steal   %idle&lt;br/&gt;
           0.12    0.00   12.35    0.00    0.00   87.53&lt;/p&gt;

&lt;p&gt;Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util&lt;br/&gt;
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sde               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdf               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;/p&gt;

&lt;p&gt;avg-cpu:  %user   %nice %system %iowait  %steal   %idle&lt;br/&gt;
           0.25    0.00   12.69    0.25    0.00   86.82&lt;/p&gt;

&lt;p&gt;Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util&lt;br/&gt;
sda              21.00     0.00   18.00    0.00   360.00     0.00    40.00     0.01    0.56    0.56    0.00   0.56   1.00&lt;br/&gt;
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sde               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdf               0.00     1.00    0.00    3.00     0.00    16.00    10.67     0.03   10.00    0.00   10.00   3.33   1.00&lt;br/&gt;
dm-0              0.00     0.00   24.00    0.00    96.00     0.00     8.00     0.08    3.33    3.33    0.00   0.42   1.00&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;vmstat&lt;/p&gt;

&lt;p&gt;procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----&lt;br/&gt;
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa&lt;br/&gt;
 1  0   2684  79500  34384 23689380    0    0     0     4 1615 1855  0 13 87  0&lt;br/&gt;
 1  0   2684  79500  34384 23689384    0    0     0     0 1619 1903  0 14 86  0&lt;br/&gt;
 1  0   2684  79500  34384 23689384    0    0     0     0 1624 1847  0 12 88  0&lt;br/&gt;
 1  0   2684  79500  34384 23689384    0    0     0     0 1617 1848  0 14 86  0&lt;br/&gt;
 1  0   2684  79484  34384 23689384    0    0     0     4 1639 1968  0 13 87  0&lt;br/&gt;
 1  0   2684  79484  34384 23689384    0    0     0     0 1572 1869  0 15 85  0&lt;br/&gt;
 1  0   2684  79484  34384 23689384    0    0     0     0 1578 1891  0 10 89  0&lt;br/&gt;
 1  0   2684  80576  34384 23689384    0    0     0     0 1654 2084  0 12 88  0&lt;br/&gt;
 2  0   2684  80724  34384 23689092    0    0     0     0 1616 1946  0 12 88  0&lt;br/&gt;
 1  0   2684  80848  34384 23688152    0    0     0     0 1704 2217  0 14 86  0&lt;br/&gt;
 1  0   2684  80980  34384 23688180    0    0     0     0 1666 1969  0 12 88  0&lt;br/&gt;
 1  0   2684  81132  34384 23688180    0    0     0     0 1600 1868  0 13 87  0&lt;br/&gt;
 1  0   2684  81132  34384 23688180    0    0     0     0 1593 1912  0 12 87  0&lt;br/&gt;
 1  0   2684  81256  34384 23688180    0    0     0     0 1639 1990  0 13 87  0&lt;br/&gt;
 1  0   2684  81504  34392 23688172    0    0     0    40 1651 1948  0 12 88  0&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;mongostat&lt;/p&gt;

&lt;p&gt;insert  query update delete getmore command flushes mapped  vsize    res faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn      set repl       time&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  16.7g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   13:20:27&lt;br/&gt;
     0      0      0      0       0       3       0   191g   382g  16.7g      0        0          0       0|0     1|0   251b     3k     7 wireclub    M   13:20:28&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  16.7g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   13:20:29&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  16.7g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   13:20:30&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  16.7g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   13:20:31&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  16.7g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   13:20:32&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  16.7g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   13:20:33&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  16.7g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   13:20:34&lt;br/&gt;
     0      0      0      0       1       3       0   191g   382g  16.7g      0        0          0       0|0     1|0   233b     4k     7 wireclub    M   13:20:35&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  16.7g      0        0          0       0|0     1|0   313b     4k     7 wireclub    M   13:20:36&lt;/p&gt;

&lt;hr /&gt;

&lt;p&gt;sar -B 1&lt;/p&gt;

&lt;p&gt;01:21:09 PM  pgpgin/s pgpgout/s   fault/s  majflt/s  pgfree/s pgscank/s pgscand/s pgsteal/s    %vmeff&lt;br/&gt;
01:21:10 PM      0.00      0.00    544.00      0.00   1187.00      0.00 3705835.00      0.00      0.00&lt;br/&gt;
01:21:11 PM      0.00      0.00    499.00      0.00   1063.00      0.00 3701894.00      0.00      0.00&lt;br/&gt;
01:21:12 PM      0.00      0.00    498.00      0.00   1062.00      0.00 3706644.00      0.00      0.00&lt;br/&gt;
01:21:13 PM      0.00      0.00    490.00      0.00   1018.00      0.00 3698563.00      0.00      0.00&lt;br/&gt;
01:21:14 PM      0.00      0.00    499.00      0.00   1065.00      0.00 3700551.00      0.00      0.00&lt;br/&gt;
01:21:15 PM      0.00      0.00    496.00      0.00   1025.00      0.00 3706918.00      0.00      0.00&lt;br/&gt;
01:21:16 PM      0.00      0.00    499.00      0.00   1070.00      0.00 3699924.00      0.00      0.00&lt;br/&gt;
01:21:17 PM      0.00      0.00    496.00      0.00   1026.00      0.00 3704611.00      0.00      0.00&lt;br/&gt;
01:21:18 PM      0.00      0.00    499.00      0.00   1064.00      0.00 3700359.00      0.00      0.00&lt;br/&gt;
01:21:19 PM      0.00      0.00    497.00      0.00   1032.00      0.00 3702982.00      0.00      0.00&lt;br/&gt;
01:21:20 PM      0.00      0.00    543.00      0.00   1182.00      0.00 3704660.00      0.00      0.00&lt;br/&gt;
01:21:21 PM      0.00      4.00    502.00      0.00   1071.00      0.00 3704547.00      0.00      0.00&lt;br/&gt;
01:21:22 PM      0.00     16.00    499.00      0.00   1070.00      0.00 3704263.00      0.00      0.00&lt;br/&gt;
01:21:23 PM      0.00      0.00    497.00      0.00   1027.00      0.00 3702598.00      0.00      0.00&lt;br/&gt;
01:21:24 PM      0.00      0.00    498.00      0.00   1063.00      0.00 3709844.00      0.00      0.00&lt;br/&gt;
01:21:25 PM      0.00      0.00    497.00      0.00   1032.00      0.00 3698851.00      0.00      0.00&lt;br/&gt;
01:21:26 PM      0.00      0.00    498.00      0.00   1062.00      0.00 3710471.00      0.00      0.00&lt;br/&gt;
01:21:27 PM      0.00      8.00    496.00      0.00   1026.00      0.00 3701798.00      0.00      0.00&lt;br/&gt;
01:21:28 PM      0.00      0.00    496.00      0.00   1062.00      0.00 3707988.00      0.00      0.00&lt;br/&gt;
01:21:29 PM      0.00      0.00    496.00      0.00   1026.00      0.00 3706659.00      0.00      0.00&lt;br/&gt;
01:21:30 PM      0.00      0.00    543.00      0.00   1182.00      0.00 3700647.00      0.00      0.00&lt;/p&gt;

</comment>
                            <comment id="44230" author="eliot" created="Mon, 25 Jul 2011 05:20:40 +0000"  >&lt;p&gt;That looks like its just transferring?&lt;/p&gt;

&lt;p&gt;How fast is it transferring?&lt;/p&gt;</comment>
                            <comment id="44167" author="braden" created="Sat, 23 Jul 2011 03:02:34 +0000"  >&lt;p&gt;The disks are all ssds, sdc is holding the database currently being synced.&lt;/p&gt;


&lt;p&gt;Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util&lt;br/&gt;
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sde               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdf               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;/p&gt;

&lt;p&gt;avg-cpu:  %user   %nice %system %iowait  %steal   %idle&lt;br/&gt;
           0.12    0.00   12.40    1.18    0.00   86.30&lt;/p&gt;

&lt;p&gt;Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util&lt;br/&gt;
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdc               0.00     0.00  181.00    0.00 23168.00     0.00   256.00     0.18    0.99    0.99    0.00   0.72  13.00&lt;br/&gt;
sde               0.00     0.50    0.00    1.50     0.00     8.00    10.67     0.01   10.00    0.00   10.00   3.33   0.50&lt;br/&gt;
sdf               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;/p&gt;

&lt;p&gt;avg-cpu:  %user   %nice %system %iowait  %steal   %idle&lt;br/&gt;
           0.06    0.00   22.06    0.88    0.00   77.00&lt;/p&gt;

&lt;p&gt;Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util&lt;br/&gt;
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdc               0.00     0.00  177.50    0.00 20544.00     0.00   231.48     0.18    1.04    1.04    0.00   0.68  12.00&lt;br/&gt;
sde               0.00     0.00    0.00    0.50     0.00     2.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdf               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;/p&gt;

&lt;p&gt;avg-cpu:  %user   %nice %system %iowait  %steal   %idle&lt;br/&gt;
           0.18    0.00   20.44    0.36    0.00   79.03&lt;/p&gt;

&lt;p&gt;Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util&lt;br/&gt;
sda               0.00     0.00    0.00    2.00     0.00     8.00     8.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdc               0.00     0.00  101.00    0.00  8438.00     0.00   167.09     0.07    0.64    0.64    0.00   0.50   5.00&lt;br/&gt;
sde               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdf               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;/p&gt;

&lt;p&gt;avg-cpu:  %user   %nice %system %iowait  %steal   %idle&lt;br/&gt;
           0.00    0.00   24.78    0.06    0.00   75.16&lt;/p&gt;

&lt;p&gt;Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util&lt;br/&gt;
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdc               0.00     0.00    9.00    0.00   226.00     0.00    50.22     0.07    7.22    7.22    0.00   0.56   0.50&lt;br/&gt;
sde               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
sdf               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;br/&gt;
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00&lt;/p&gt;

&lt;p&gt;avg-cpu:  %user   %nice %system %iowait  %steal   %idle&lt;br/&gt;
           0.00    0.00   26.34    0.00    0.00   73.66&lt;/p&gt;</comment>
                            <comment id="44166" author="eliot" created="Sat, 23 Jul 2011 02:50:42 +0000"  >&lt;p&gt;Can you run iostat -x 2&lt;br/&gt;
Looks like disk is probably saturated.&lt;/p&gt;</comment>
                            <comment id="44159" author="braden" created="Sat, 23 Jul 2011 00:38:06 +0000"  >&lt;p&gt;Another case of this happening, currently all that database is doing is initial sync to 1 node:&lt;/p&gt;


&lt;p&gt;Top:&lt;br/&gt;
2046 mongodb   20   0  382g  14g  14g S  100 60.8   7:18.94 mongod&lt;/p&gt;

&lt;p&gt;vmstat&lt;br/&gt;
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----&lt;br/&gt;
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa&lt;br/&gt;
 1  0      0 1339824  56904 22429384    0    0  2906  2056  482  512  2  6 89  3&lt;br/&gt;
 1  0      0 1339536  56904 22429132    0    0     0  2048  637  811  0 12 88  0&lt;br/&gt;
 1  0      0 1339544  56908 22429136    0    0     0     4  656  785  0 13 87  0&lt;br/&gt;
 1  0      0 1339552  56908 22429136    0    0     0  2048  631  760  0 13 87  0&lt;br/&gt;
 1  0      0 1339680  56908 22429136    0    0     0  2048  664  798  0 12 88  0&lt;br/&gt;
 1  0      0 1339680  56916 22429128    0    0     0    12  613  746  0 15 85  1&lt;br/&gt;
 1  0      0 1339680  56916 22429136    0    0     0  2112  655  787  0 14 86  0&lt;br/&gt;
 1  0      0 1339924  56916 22429136    0    0     0  2048  637  812  0 14 86  0&lt;br/&gt;
 1  0      0 1339924  56916 22429136    0    0     0     0  634  767  0 10 90  0&lt;br/&gt;
 1  0      0 1339924  56916 22429136    0    0     0  2048  617  748  0 15 85  0&lt;br/&gt;
 1  0      0 1340048  56916 22429136    0    0     0     0  659  821  0 14 86  0&lt;br/&gt;
 1  0      0 1340048  56924 22429132    0    0     0  2060  628  772  0 14 84  1&lt;br/&gt;
 1  0      0 1340404  56924 22428232    0    0     0  2048  729 1031  0  9 91  0&lt;br/&gt;
 1  0      0 1340512  56924 22428200    0    0     0     4  627  781  0 12 88  0&lt;br/&gt;
 1  0      0 1340512  56924 22428200    0    0     0  2048  654  781  0 13 87  0&lt;br/&gt;
 1  0      0 1340760  56924 22428200    0    0     0     0  630  746  0 13 87  0&lt;br/&gt;
 1  0      0 1340768  56924 22428020    0    0     0  2048  656  814  0 12 88  0&lt;br/&gt;
 1  0      0 1340768  56924 22428020    0    0     0  2092  653  792  0 13 87  0&lt;br/&gt;
 1  0      0 1340800  56940 22428004    0    0     0    36  676  802  0 12 87  1&lt;/p&gt;



&lt;p&gt;Mongostat:&lt;br/&gt;
insert  query update delete getmore command flushes mapped  vsize    res faults locked % idx miss %     qr|qw   ar|aw  netIn netOut  conn      set repl       time&lt;br/&gt;
     0      0      0      0       0       2       0   191g   382g  14.2g      0        0          0       0|0     0|0   124b     2k    14 wireclub    M   17:32:12&lt;br/&gt;
     0      0      0      0       0       4       1   191g   382g  14.2g      0        0          0       0|0     0|0   378b     3k    14 wireclub    M   17:32:13&lt;br/&gt;
     0      0      0      0       0       2       0   191g   382g  14.2g      0        0          0       0|0     0|0   124b     2k    14 wireclub    M   17:32:14&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  14.2g      0        0          0       0|0     0|0   378b     3k    14 wireclub    M   17:32:15&lt;br/&gt;
     0      1      1      0       0       3       0   191g   382g  14.2g      0        0          0       0|0     0|0   362b     3k    14 wireclub    M   17:32:16&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  14.2g      0        0          0       0|0     0|0   378b     3k    14 wireclub    M   17:32:17&lt;br/&gt;
     0      0      0      0       0       2       0   191g   382g  14.2g      0        0          0       0|0     0|0   124b     2k    14 wireclub    M   17:32:18&lt;br/&gt;
     0      0      0      0       0       4       0   191g   382g  14.2g      0        0          0       0|0     0|0   378b     3k    14 wireclub    M   17:32:19&lt;br/&gt;
     0      0      0      0       0       2       0   191g   382g  14.2g      0        0          0       0|0     0|0   124b     2k    14 wireclub    M   17:32:20&lt;br/&gt;
     0      1      1      0       0       5       0   191g   382g  14.2g      0        0          0       0|0     0|0   616b     3k    14 wireclub    M   17:32:21&lt;/p&gt;

&lt;p&gt;Fri Jul 22 17:33:24 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn17&amp;#93;&lt;/span&gt; serverStatus was very slow: &lt;/p&gt;
{ after basic: 0, middle of mem: 3320, after mem: 3320, after connections: 3320, after extra info: 4760, after counters: 4760, after repl: 4760, after asserts: 4760 }
&lt;p&gt;Fri Jul 22 17:33:24 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn17&amp;#93;&lt;/span&gt; query admin.$cmd ntoreturn:1 command: &lt;/p&gt;
{ serverStatus: 1 }
&lt;p&gt; reslen:1593 4803ms&lt;br/&gt;
Fri Jul 22 17:33:37 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn6&amp;#93;&lt;/span&gt; serverStatus was very slow: &lt;/p&gt;
{ after basic: 0, middle of mem: 3280, after mem: 3280, after connections: 3280, after extra info: 17790, after counters: 17790, after repl: 17790, after asserts: 17790 }
&lt;p&gt;Fri Jul 22 17:33:37 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn6&amp;#93;&lt;/span&gt; query admin.$cmd ntoreturn:1 command: &lt;/p&gt;
{ serverStatus: 1 }
&lt;p&gt; reslen:1593 17985ms&lt;br/&gt;
Fri Jul 22 17:33:37 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn10&amp;#93;&lt;/span&gt; getmore gridfs.fs.chunks cid:1249412216916989076 getMore: {}  bytes:4211712 nreturned:126 exhaust  37695ms&lt;br/&gt;
Fri Jul 22 17:33:38 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn17&amp;#93;&lt;/span&gt; serverStatus was very slow: &lt;/p&gt;
{ after basic: 0, middle of mem: 12750, after mem: 12750, after connections: 12750, after extra info: 12750, after counters: 12750, after repl: 12750, after asserts: 12750 }
&lt;p&gt;Fri Jul 22 17:33:38 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn17&amp;#93;&lt;/span&gt; query admin.$cmd ntoreturn:1 command: &lt;/p&gt;
{ serverStatus: 1 }
&lt;p&gt; reslen:1593 12889ms&lt;br/&gt;
Fri Jul 22 17:34:10 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn6&amp;#93;&lt;/span&gt; serverStatus was very slow: &lt;/p&gt;
{ after basic: 0, middle of mem: 11560, after mem: 11560, after connections: 11560, after extra info: 12220, after counters: 12220, after repl: 12220, after asserts: 12220 }
&lt;p&gt;Fri Jul 22 17:34:10 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn6&amp;#93;&lt;/span&gt; query admin.$cmd ntoreturn:1 command: &lt;/p&gt;
{ serverStatus: 1 }
&lt;p&gt; reslen:1593 12329ms&lt;br/&gt;
Fri Jul 22 17:34:10 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn17&amp;#93;&lt;/span&gt; serverStatus was very slow: &lt;/p&gt;
{ after basic: 0, middle of mem: 11600, after mem: 11600, after connections: 11600, after extra info: 12930, after counters: 12930, after repl: 12930, after asserts: 12930 }
&lt;p&gt;Fri Jul 22 17:34:10 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn17&amp;#93;&lt;/span&gt; query admin.$cmd ntoreturn:1 command: &lt;/p&gt;
{ serverStatus: 1 }
&lt;p&gt; reslen:1593 13041ms&lt;br/&gt;
Fri Jul 22 17:34:15 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn6&amp;#93;&lt;/span&gt; serverStatus was very slow: &lt;/p&gt;
{ after basic: 0, middle of mem: 2300, after mem: 2300, after connections: 2300, after extra info: 4290, after counters: 4290, after repl: 4290, after asserts: 4290 }
&lt;p&gt;Fri Jul 22 17:34:15 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn6&amp;#93;&lt;/span&gt; query admin.$cmd ntoreturn:1 command: &lt;/p&gt;
{ serverStatus: 1 }
&lt;p&gt; reslen:1593 4329ms&lt;br/&gt;
Fri Jul 22 17:34:15 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn17&amp;#93;&lt;/span&gt; serverStatus was very slow: &lt;/p&gt;
{ after basic: 0, middle of mem: 2320, after mem: 2320, after connections: 2320, after extra info: 4310, after counters: 4310, after repl: 4310, after asserts: 4310 }
&lt;p&gt;Fri Jul 22 17:34:15 &lt;span class=&quot;error&quot;&gt;&amp;#91;conn17&amp;#93;&lt;/span&gt; query admin.$cmd ntoreturn:1 command: &lt;/p&gt;
{ serverStatus: 1 }
&lt;p&gt; reslen:1593 4346ms&lt;/p&gt;
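&lt;p&gt;To quantify how slow serverStatus gets during these episodes, a throwaway mongo-shell loop like the following (a sketch, not from the original report) can log the latency of each call:&lt;/p&gt;
&lt;pre&gt;
// Time serverStatus every 5 seconds; multi-second results match the
// "serverStatus was very slow" entries in the mongod log above.
while (true) {
    var t0 = new Date();
    db.serverStatus();
    print(new Date() + " serverStatus took " + (new Date() - t0) + "ms");
    sleep(5000);
}
&lt;/pre&gt;</comment>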
                            <comment id="44158" author="braden" created="Sat, 23 Jul 2011 00:20:56 +0000"  >&lt;p&gt;We can now rule out hyperthreading being the cause as well. &lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                <customfield id="customfield_10050" key="com.atlassian.jira.toolkit:comments">
                        <customfieldname># Replies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>24.0</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                <customfield id="customfield_10055" key="com.atlassian.jira.ext.charting:firstresponsedate">
                        <customfieldname>Date of 1st Reply</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Sat, 23 Jul 2011 02:50:42 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10052" key="com.atlassian.jira.toolkit:dayslastcommented">
                        <customfieldname>Days since reply</customfieldname>
                        <customfieldvalues>
                                        12 years, 30 weeks ago
    
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_18254" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Dependencies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue><![CDATA[]]></customfieldvalue>


                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_15850" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_10057" key="com.atlassian.jira.toolkit:lastusercommented">
                        <customfieldname>Last comment by Customer</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>true</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10056" key="com.atlassian.jira.toolkit:lastupdaterorcommenter">
                        <customfieldname>Last commenter</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>ramon.fernandez@mongodb.com</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_11151" key="com.atlassian.jira.toolkit:LastCommentDate">
                        <customfieldname>Last public comment date</customfieldname>
                        <customfieldvalues>
                            12 years, 30 weeks ago
                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10000" key="com.atlassian.jira.plugin.system.customfieldtypes:radiobuttons">
                        <customfieldname>Old_Backport</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10000"><![CDATA[No]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10032" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Operating System</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10026"><![CDATA[ALL]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                <customfield id="customfield_10051" key="com.atlassian.jira.toolkit:participants">
                        <customfieldname>Participants</customfieldname>
                        <customfieldvalues>
                                        <customfieldvalue>braden</customfieldvalue>
            <customfieldvalue>eliot</customfieldvalue>
            <customfieldvalue>mathias@mongodb.com</customfieldvalue>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_14254" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Product Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hrovbb:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                <customfield id="customfield_12550" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>2|hris7z:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10558" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>23225</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_23361" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Requested By</customfieldname>
                        <customfieldvalues>
                                

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10053" key="com.atlassian.jira.ext.charting:timeinstatus">
                        <customfieldname>Time In Status</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_22870" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Triagers</customfieldname>
                        <customfieldvalues>
                                

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_14350" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>serverRank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hri5rz:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                    </customfields>
    </item>
</channel>
</rss>