<!-- 
RSS generated by JIRA (9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66) at Thu Feb 08 03:00:55 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>MongoDB Jira</title>
    <link>https://jira.mongodb.org</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.7.1</version>
        <build-number>970001</build-number>
        <build-date>13-04-2023</build-date>
    </build-info>


<item>
            <title>[SERVER-2694] Replication Sets ending up with all secondaries... and no primary</title>
                <link>https://jira.mongodb.org/browse/SERVER-2694</link>
                <project id="10000" key="SERVER">Core Server</project>
                    <description>&lt;p&gt;Firstly... as a new user... brilliant package... thanks. (And stupidly I posted this on the Ubuntu/mongo log as well... sorry... Monday morning syndrome)&lt;/p&gt;

&lt;p&gt;Now.. I have 6 instances in a replication set, spread over 2 physical machines. All works fine. If I then take down one of the machines, I end up with 3 instances, all being secondaries. This is a basic setup with default voting rights, and no arbiter.&lt;br/&gt;
The result of a rs.status() is below:&lt;/p&gt;

&lt;p&gt;mycache:SECONDARY&amp;gt; rs.status()&lt;br/&gt;
{&lt;br/&gt;
	&quot;set&quot; : &quot;mycache&quot;,&lt;br/&gt;
	&quot;date&quot; : ISODate(&quot;2011-03-04T15:49:01Z&quot;),&lt;br/&gt;
	&quot;myState&quot; : 2,&lt;br/&gt;
	&quot;members&quot; : [&lt;br/&gt;
		{&lt;br/&gt;
			&quot;_id&quot; : 0,&lt;br/&gt;
			&quot;name&quot; : &quot;n.n.n.1:27017&quot;,&lt;br/&gt;
			&quot;health&quot; : 1,&lt;br/&gt;
			&quot;state&quot; : 2,&lt;br/&gt;
			&quot;stateStr&quot; : &quot;SECONDARY&quot;,&lt;br/&gt;
			&quot;uptime&quot; : 202,&lt;br/&gt;
			&quot;optime&quot; : {&lt;br/&gt;
				&quot;t&quot; : 1299250255000,&lt;br/&gt;
				&quot;i&quot; : 1&lt;br/&gt;
			},&lt;br/&gt;
			&quot;optimeDate&quot; : ISODate(&quot;2011-03-04T14:50:55Z&quot;),&lt;br/&gt;
			&quot;lastHeartbeat&quot; : ISODate(&quot;2011-03-04T15:49:01Z&quot;)&lt;br/&gt;
		},&lt;br/&gt;
		{&lt;br/&gt;
			&quot;_id&quot; : 1,&lt;br/&gt;
			&quot;name&quot; : &quot;n.n.n.2:27018&quot;,&lt;br/&gt;
			&quot;health&quot; : 1,&lt;br/&gt;
			&quot;state&quot; : 2,&lt;br/&gt;
			&quot;stateStr&quot; : &quot;SECONDARY&quot;,&lt;br/&gt;
			&quot;optime&quot; : {&lt;br/&gt;
				&quot;t&quot; : 1299250255000,&lt;br/&gt;
				&quot;i&quot; : 1&lt;br/&gt;
			},&lt;br/&gt;
			&quot;optimeDate&quot; : ISODate(&quot;2011-03-04T14:50:55Z&quot;),&lt;br/&gt;
			&quot;self&quot; : true&lt;br/&gt;
		},&lt;br/&gt;
		{&lt;br/&gt;
			&quot;_id&quot; : 2,&lt;br/&gt;
			&quot;name&quot; : &quot;n.n.n.3:27019&quot;,&lt;br/&gt;
			&quot;health&quot; : 1,&lt;br/&gt;
			&quot;state&quot; : 2,&lt;br/&gt;
			&quot;stateStr&quot; : &quot;SECONDARY&quot;,&lt;br/&gt;
			&quot;uptime&quot; : 202,&lt;br/&gt;
			&quot;optime&quot; : {&lt;br/&gt;
				&quot;t&quot; : 1299250255000,&lt;br/&gt;
				&quot;i&quot; : 1&lt;br/&gt;
			},&lt;br/&gt;
			&quot;optimeDate&quot; : ISODate(&quot;2011-03-04T14:50:55Z&quot;),&lt;br/&gt;
			&quot;lastHeartbeat&quot; : ISODate(&quot;2011-03-04T15:49:01Z&quot;)&lt;br/&gt;
		},&lt;br/&gt;
		{&lt;br/&gt;
			&quot;_id&quot; : 3,&lt;br/&gt;
			&quot;name&quot; : &quot;n.n.1.1:27017&quot;,&lt;br/&gt;
			&quot;health&quot; : 0,&lt;br/&gt;
			&quot;state&quot; : 2,&lt;br/&gt;
			&quot;stateStr&quot; : &quot;(not reachable/healthy)&quot;,&lt;br/&gt;
			&quot;uptime&quot; : 0,&lt;br/&gt;
			&quot;optime&quot; : {&lt;br/&gt;
				&quot;t&quot; : 1299250255000,&lt;br/&gt;
				&quot;i&quot; : 1&lt;br/&gt;
			},&lt;br/&gt;
			&quot;optimeDate&quot; : ISODate(&quot;2011-03-04T14:50:55Z&quot;),&lt;br/&gt;
			&quot;lastHeartbeat&quot; : ISODate(&quot;2011-03-04T15:46:45Z&quot;),&lt;br/&gt;
			&quot;errmsg&quot; : &quot;socket exception&quot;&lt;br/&gt;
		},&lt;br/&gt;
		{&lt;br/&gt;
			&quot;_id&quot; : 4,&lt;br/&gt;
			&quot;name&quot; : &quot;n.n.1.2:27018&quot;,&lt;br/&gt;
			&quot;health&quot; : 0,&lt;br/&gt;
			&quot;state&quot; : 1,&lt;br/&gt;
			&quot;stateStr&quot; : &quot;(not reachable/healthy)&quot;,&lt;br/&gt;
			&quot;uptime&quot; : 0,&lt;br/&gt;
			&quot;optime&quot; : {&lt;br/&gt;
				&quot;t&quot; : 1299250255000,&lt;br/&gt;
				&quot;i&quot; : 1&lt;br/&gt;
			},&lt;br/&gt;
			&quot;optimeDate&quot; : ISODate(&quot;2011-03-04T14:50:55Z&quot;),&lt;br/&gt;
			&quot;lastHeartbeat&quot; : ISODate(&quot;2011-03-04T15:46:45Z&quot;),&lt;br/&gt;
			&quot;errmsg&quot; : &quot;socket exception&quot;&lt;br/&gt;
		},&lt;br/&gt;
		{&lt;br/&gt;
			&quot;_id&quot; : 5,&lt;br/&gt;
			&quot;name&quot; : &quot;n.n.1.3:27019&quot;,&lt;br/&gt;
			&quot;health&quot; : 0,&lt;br/&gt;
			&quot;state&quot; : 2,&lt;br/&gt;
			&quot;stateStr&quot; : &quot;(not reachable/healthy)&quot;,&lt;br/&gt;
			&quot;uptime&quot; : 0,&lt;br/&gt;
			&quot;optime&quot; : {&lt;br/&gt;
				&quot;t&quot; : 1299250255000,&lt;br/&gt;
				&quot;i&quot; : 1&lt;br/&gt;
			},&lt;br/&gt;
			&quot;optimeDate&quot; : ISODate(&quot;2011-03-04T14:50:55Z&quot;),&lt;br/&gt;
			&quot;lastHeartbeat&quot; : ISODate(&quot;2011-03-04T15:46:45Z&quot;),&lt;br/&gt;
			&quot;errmsg&quot; : &quot;socket exception&quot;&lt;br/&gt;
		}&lt;br/&gt;
	],&lt;br/&gt;
	&quot;ok&quot; : 1&lt;br/&gt;
}&lt;/p&gt;

&lt;p&gt;1. I tried reconfig, but that needs a primary, which I don&apos;t have.&lt;br/&gt;
2. Tried taking an instance down, freezing the other two, and bringing the third back up.... came back as a secondary.&lt;br/&gt;
3. Am going to try creating a new instance, and setting up as an arbiter, to see if that can help find a primary. However, this is not a long term solution. (see 4 below)&lt;br/&gt;
4. If I have more than one machine taking part in a replication set then, in theory, for a resilient system each machine would need its own arbiter, in case another machine got taken out. With an even number of machines, that gives us an even number of arbiters, which doesn&apos;t help if they are all in play (unless I am missing something obvious.... not for the first time ).&lt;br/&gt;
If, however, we assign bitwise voting rights to each instance in a replication set (1, 2, 4, 8, 16, ...), then any instance, or even a whole machine, can be downed and a definite primary will still be voted in. This removes the need for an arbiter, and also gives the admins a chance to prioritise the servers taking part.... but I need a primary to change the config.&lt;/p&gt;

&lt;p&gt;Thanks in advance for any help&lt;/p&gt;</description>
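The weighted-vote idea in point 4 above can be sketched as a config fragment. This is an illustrative sketch only, not the reporter's actual setup: hostnames are taken from the status output, the arbiter host is hypothetical, and note that current MongoDB versions restrict a member's `votes` to 0 or 1, so bitwise weights (1, 2, 4, 8, ...) are not supported; `priority` is the supported way to prefer an election candidate, and an odd number of voting members (or an arbiter on a third machine) is the supported way to keep a majority when one machine is lost.

```javascript
// Illustrative sketch (assumptions noted above): a 5-voter config where
// an arbiter on a hypothetical third machine breaks the 3-vs-3 tie, and
// `priority` (not weighted votes) marks the preferred primary.
const config = {
  _id: "mycache",
  version: 2,
  members: [
    { _id: 0, host: "n.n.n.1:27017", priority: 2 }, // preferred primary
    { _id: 1, host: "n.n.n.2:27018", priority: 1 },
    { _id: 2, host: "n.n.n.3:27019", priority: 1 },
    { _id: 3, host: "n.n.1.1:27017", priority: 1 },
    { _id: 4, host: "arbiter-host:27017", arbiterOnly: true }, // hypothetical third machine
  ],
};

// With 5 voting members, a strict majority is 3, so losing either
// two-member machine still leaves enough voters to elect a primary.
const majority = Math.floor(config.members.length / 2) + 1;
console.log(majority); // 3
```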
                <environment>Ubuntu 10 64 bit, 8gig memory... too much disk to worry about</environment>
        <key id="15017">SERVER-2694</key>
            <summary>Replication Sets ending up with all secondaries... and no primary</summary>
                <type id="1" iconUrl="https://jira.mongodb.org/secure/viewavatar?size=xsmall&amp;avatarId=14703&amp;avatarType=issuetype">Bug</type>
                                            <priority id="3" iconUrl="https://jira.mongodb.org/images/icons/priorities/major.svg">Major - P3</priority>
                        <status id="6" iconUrl="https://jira.mongodb.org/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="9">Done</resolution>
                                        <assignee username="-1">Unassigned</assignee>
                                    <reporter username="peter.colclough">Peter Colclough</reporter>
                        <labels>
                    </labels>
                <created>Mon, 7 Mar 2011 12:08:49 +0000</created>
                <updated>Fri, 30 Mar 2012 14:26:50 +0000</updated>
                            <resolved>Mon, 7 Mar 2011 13:09:39 +0000</resolved>
                                    <version>1.8.0-rc0</version>
                    <version>1.8.0-rc1</version>
                                                    <component>Admin</component>
                    <component>Replication</component>
                    <component>Usability</component>
                                        <votes>0</votes>
                                    <watches>2</watches>
                                                                                                                <comments>
                            <comment id="25761" author="peter.colclough" created="Fri, 11 Mar 2011 11:06:53 +0000"  >&lt;p&gt;Thanks Andrew... and others. I had already read those sections. I realise we have a &apos;catch-22&apos; here. I am off to play with some scenarios to see if we can &apos;automatically&apos; recover, while emailing the sysadmins, and not killing the system while we are recovering.&lt;/p&gt;

&lt;p&gt;Thanks for your help&lt;/p&gt;</comment>
                            <comment id="25750" author="plasma" created="Fri, 11 Mar 2011 05:49:05 +0000"  >&lt;p&gt;Try reading &lt;a href=&quot;http://www.mongodb.org/display/DOCS/Reconfiguring+a+replica+set+when+members+are+down&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://www.mongodb.org/display/DOCS/Reconfiguring+a+replica+set+when+members+are+down&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may consider running an arbiter node on a separate machine (eg a web server) so you have an odd number of servers.&lt;/p&gt;

&lt;p&gt;The arbiter, as mentioned previously, is very lightweight, is not queried, and holds no data; all it does is cast a vote as a decision maker when there are failures.&lt;/p&gt;</comment>
                            <comment id="25745" author="peter.colclough" created="Fri, 11 Mar 2011 05:05:02 +0000"  >&lt;p&gt;Hi Eliot,&lt;/p&gt;

&lt;p&gt;Thanks for the quick response. I kind of accept that, which is why I started with 3 nodes on a server..... which is a majority, unless they each vote for the next one down the line. I then doubled up the servers to test the &apos;inter-server operability&apos; (I know... of COURSE it works &lt;img class=&quot;emoticon&quot; src=&quot;https://jira.mongodb.org/images/icons/emoticons/smile.png&quot; height=&quot;16&quot; width=&quot;16&quot; align=&quot;absmiddle&quot; alt=&quot;&quot; border=&quot;0&quot;/&gt; ).&lt;/p&gt;

&lt;p&gt;And I now understand... you need a majority for the total number of servers......not a majority from &apos;working&apos; servers.....&lt;/p&gt;

&lt;p&gt;Ok... so how do I add in a new instance on the working server, to give me a majority, bearing in mind I can&apos;t change the config, as I don&apos;t have a primary...&lt;/p&gt;

&lt;p&gt;Thanks for the help&lt;/p&gt;

&lt;p&gt;Peter C &lt;/p&gt;

&lt;p&gt;&amp;#8211; Peter C &amp;#8211;&lt;/p&gt;
</comment>
                            <comment id="25507" author="kristina" created="Tue, 8 Mar 2011 15:28:06 +0000"  >&lt;p&gt;Thus, the recommended approach is to have an odd number of servers.  See the &quot;Rationale&quot; section of &lt;a href=&quot;http://www.mongodb.org/display/DOCS/Replica+Set+Design+Concepts&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://www.mongodb.org/display/DOCS/Replica+Set+Design+Concepts&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The short answer is: the system is self-monitoring, it elects a primary when it safely can.  You can&apos;t have what you&apos;re looking for (always have a primary) automatically without ending up with multiple masters and, thus, the possibility of conflicting writes.  &lt;/p&gt;

&lt;p&gt;It would be possible to allow more automatic reconfiguring of a set with no primary, but I don&apos;t think &quot;most members unexpectedly and permanently go down&quot; happens regularly for most people.&lt;/p&gt;</comment>
                            <comment id="25500" author="peter.colclough" created="Tue, 8 Mar 2011 09:06:53 +0000"  >&lt;p&gt;Ok, that worked... thanks. I still think it would be useful if we could &apos;programmatically&apos; force a server to be a primary. This would allow a system to self-monitor and, if this situation occurred, at least allow a system monitor to sort something out. It&apos;s a catch-22, because of the reasons you gave (i.e. not wanting two primaries on the same set), but we also need to allow for a primary to be deduced when one system goes down, leaving no majority. An arbiter would only work on a third machine, because if each machine has its own arbiter (in case the other goes down), then normal processing will fail, as two arbiters would negate the need for them (if you see what I mean).&lt;/p&gt;

&lt;p&gt;Conundrum....&lt;/p&gt;
</comment>
                            <comment id="25430" author="kristina" created="Mon, 7 Mar 2011 18:02:20 +0000"  >&lt;p&gt;Shut down a server that could be primary (once your set is down to 3 servers) and restart it without the --replSet option and on a different port.  Connect to it with the shell and modify the local.system.replset document to only have the 3 servers.  Increment the version number and save the document back to the local.system.replset collection. Then restart the server on the correct port with --replSet and the other servers will pick up on the config change.&lt;/p&gt;

&lt;p&gt;e.g., going from four servers to two servers:&lt;/p&gt;

&lt;p&gt;$ mongo localhost:27021/local&lt;br/&gt;
MongoDB shell version: 1.9.0-pre-&lt;br/&gt;
connecting to: localhost:27021/local&lt;br/&gt;
&amp;gt; config = db.system.replset.findOne()&lt;br/&gt;
{&lt;br/&gt;
        &quot;_id&quot; : &quot;foo&quot;,&lt;br/&gt;
        &quot;version&quot; : 4,&lt;br/&gt;
        &quot;members&quot; : [&lt;br/&gt;
                {&lt;br/&gt;
                        &quot;_id&quot; : 0,&lt;br/&gt;
                        &quot;host&quot; : &quot;ubuntu:27017&quot;&lt;br/&gt;
                },&lt;br/&gt;
                {&lt;br/&gt;
                        &quot;_id&quot; : 1,&lt;br/&gt;
                        &quot;host&quot; : &quot;ubuntu:27018&quot;&lt;br/&gt;
                },&lt;br/&gt;
                {&lt;br/&gt;
                        &quot;_id&quot; : 2,&lt;br/&gt;
                        &quot;host&quot; : &quot;ubuntu:27019&quot;&lt;br/&gt;
                },&lt;br/&gt;
                {&lt;br/&gt;
                        &quot;_id&quot; : 3,&lt;br/&gt;
                        &quot;host&quot; : &quot;ubuntu:27020&quot;&lt;br/&gt;
                }&lt;br/&gt;
        ]&lt;br/&gt;
}&lt;br/&gt;
&amp;gt; config.members.pop()&lt;br/&gt;
{ &quot;_id&quot; : 3, &quot;host&quot; : &quot;ubuntu:27020&quot; }&lt;br/&gt;
&amp;gt; config.members.pop()&lt;br/&gt;
{ &quot;_id&quot; : 2, &quot;host&quot; : &quot;ubuntu:27019&quot; }&lt;br/&gt;
&amp;gt; config.version++&lt;br/&gt;
4&lt;br/&gt;
&amp;gt; db.system.replset.remove()&lt;br/&gt;
&amp;gt; db.system.replset.save(config)&lt;br/&gt;
&amp;gt; db.system.replset.find()&lt;br/&gt;
{ &quot;_id&quot; : &quot;foo&quot;, &quot;version&quot; : 5, &quot;members&quot; : [&lt;br/&gt;
        {&lt;br/&gt;
                &quot;_id&quot; : 0,&lt;br/&gt;
                &quot;host&quot; : &quot;ubuntu:27017&quot;&lt;br/&gt;
        },&lt;br/&gt;
        {&lt;br/&gt;
                &quot;_id&quot; : 1,&lt;br/&gt;
                &quot;host&quot; : &quot;ubuntu:27018&quot;&lt;br/&gt;
        }&lt;br/&gt;
] }&lt;/p&gt;

&lt;p&gt;See also: &lt;a href=&quot;http://www.mongodb.org/display/DOCS/Reconfiguring+a+replica+set+when+members+are+down&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://www.mongodb.org/display/DOCS/Reconfiguring+a+replica+set+when+members+are+down&lt;/a&gt;&lt;/p&gt;</comment>
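The key step in the transcript above is the version bump: members only adopt a replica set config whose version is higher than the one they hold. The edit itself is plain object manipulation, which can be checked offline (a sketch; the hostnames are the example's, and actually applying it still requires the save/restart steps shown in the transcript):

```javascript
// Sketch of the config edit from the transcript: drop the last two
// members and bump the version so the remaining servers accept the
// new config as newer than their current one.
const config = {
  _id: "foo",
  version: 4,
  members: [
    { _id: 0, host: "ubuntu:27017" },
    { _id: 1, host: "ubuntu:27018" },
    { _id: 2, host: "ubuntu:27019" },
    { _id: 3, host: "ubuntu:27020" },
  ],
};

config.members.pop(); // removes { _id: 3, host: "ubuntu:27020" }
config.members.pop(); // removes { _id: 2, host: "ubuntu:27019" }
config.version++;     // members only pick up configs with a higher version

console.log(config.version);        // 5
console.log(config.members.length); // 2
```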
                            <comment id="25415" author="peter.colclough" created="Mon, 7 Mar 2011 16:57:04 +0000"  >&lt;p&gt;I see that issue now... thanks. However, I am now in a situation where I have 3 nodes &apos;healthy&apos;, all of which are secondaries, and, it appears, no way of getting one of them to be a primary. I can&apos;t add an arbiter, as I need a primary to change the config through.&lt;/p&gt;

&lt;p&gt;Is there a way I can &apos;force&apos; a primary, even if it means using the UI to do this? Btw, &apos;freezing&apos;, standing down etc. also doesn&apos;t achieve this, as I am always in a minority.&lt;/p&gt;

&lt;p&gt;This is still a necessary function, as otherwise we would be in a &apos;Mexican standoff&apos; given the current scenario. I also don&apos;t see how voting changes/arbiters can actually help a scenario where a machine or two are taken out of service (or drop unexpectedly), leaving a minority behind... the arbiter would have to be on a separate system, which always has access to all servers on that system......&lt;/p&gt;

</comment>
                            <comment id="25403" author="kristina" created="Mon, 7 Mar 2011 16:05:38 +0000"  >&lt;p&gt;You can&apos;t elect a master based on the number of healthy nodes as then you could have a master on each side of a network partition.  There is no way for a cluster of nodes to tell the difference between a network partition and nodes being down.  &lt;/p&gt;

&lt;p&gt;You need a majority of the &lt;em&gt;total&lt;/em&gt; number of nodes to elect a master.  That&apos;s why we suggest having an odd number of nodes/an arbiter/giving a node one extra vote.&lt;/p&gt;</comment>
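The rule in the comment above (a strict majority of the *total* voting members, never just the reachable ones) can be checked with a small sketch; the function name is ours for illustration, not a MongoDB API:

```javascript
// Sketch: a primary can be elected only when the reachable members form
// a strict majority of ALL configured voting members. Counting only
// reachable members would let each side of a network partition elect
// its own primary, producing conflicting writes.
function canElectPrimary(totalVotingMembers, reachableMembers) {
  return reachableMembers > Math.floor(totalVotingMembers / 2);
}

console.log(canElectPrimary(6, 3)); // false: the reporter's 3-of-6 case
console.log(canElectPrimary(3, 2)); // true: an odd set survives one loss
console.log(canElectPrimary(7, 4)); // true: majority of 7 is 4
```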
                            <comment id="25401" author="peter.colclough" created="Mon, 7 Mar 2011 15:57:35 +0000"  >&lt;p&gt;Eliot,&lt;/p&gt;

&lt;p&gt;This still remains an issue (imvho). If you use a majority of the actual servers, including those that are unreachable, you may never be able to get a usable system. For example, if we had 7 servers, split 4 on one machine and 3 on another, and we take the &apos;3&apos; off... all is fine and dandy. If we take the &apos;4&apos; out, then we have a single machine with 3 servers, but all secondaries.&lt;/p&gt;

&lt;p&gt;So the way around this is to have an arbiter. The arbiter would have to be on a third machine, so it isn&apos;t taken out if we down a server. Having an arbiter on one of the main machines would simply cause an issue if that machine were taken out. If the arbiter were on a third machine, and that was taken out, we are back to square one again... if you see what I mean.&lt;/p&gt;

&lt;p&gt;Surely the &apos;voting&apos; should take place between &apos;reachable&apos; systems that are &apos;healthy&apos;. That way you can always have a majority among the working systems.&lt;/p&gt;

&lt;p&gt;Or am I really missing the point here?&lt;/p&gt;

&lt;p&gt;Thanks in advance&lt;/p&gt;

&lt;p&gt;Peter C &lt;/p&gt;</comment>
                            <comment id="25388" author="eliot" created="Mon, 7 Mar 2011 13:09:32 +0000"  >&lt;p&gt;Looks like you have 3 nodes up and 3 nodes down.&lt;br/&gt;
3/6 nodes is not a majority, so it won&apos;t elect a primary.&lt;br/&gt;
You should try to have an odd number of nodes.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                <customfield id="customfield_10050" key="com.atlassian.jira.toolkit:comments">
                        <customfieldname># Replies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>10.0</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                <customfield id="customfield_10055" key="com.atlassian.jira.ext.charting:firstresponsedate">
                        <customfieldname>Date of 1st Reply</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>Mon, 7 Mar 2011 13:09:32 +0000</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10052" key="com.atlassian.jira.toolkit:dayslastcommented">
                        <customfieldname>Days since reply</customfieldname>
                        <customfieldvalues>
                                        12 years, 49 weeks, 5 days ago
    
                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_18254" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Dependencies</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue><![CDATA[]]></customfieldvalue>


                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_15850" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_10057" key="com.atlassian.jira.toolkit:lastusercommented">
                        <customfieldname>Last comment by Customer</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>true</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_10056" key="com.atlassian.jira.toolkit:lastupdaterorcommenter">
                        <customfieldname>Last commenter</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>ian@mongodb.com</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_11151" key="com.atlassian.jira.toolkit:LastCommentDate">
                        <customfieldname>Last public comment date</customfieldname>
                        <customfieldvalues>
                            12 years, 49 weeks, 5 days ago
                        </customfieldvalues>
                    </customfield>
                                                                                                                        <customfield id="customfield_10000" key="com.atlassian.jira.plugin.system.customfieldtypes:radiobuttons">
                        <customfieldname>Old_Backport</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10000"><![CDATA[No]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10032" key="com.atlassian.jira.plugin.system.customfieldtypes:select">
                        <customfieldname>Operating System</customfieldname>
                        <customfieldvalues>
                                <customfieldvalue key="10020"><![CDATA[Linux]]></customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                <customfield id="customfield_10051" key="com.atlassian.jira.toolkit:participants">
                        <customfieldname>Participants</customfieldname>
                        <customfieldvalues>
                                        <customfieldvalue>plasma</customfieldvalue>
            <customfieldvalue>eliot</customfieldvalue>
            <customfieldvalue>kristina</customfieldvalue>
            <customfieldvalue>peter.colclough</customfieldvalue>
    
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                        <customfield id="customfield_14254" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Product Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hrp4gf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                <customfield id="customfield_12550" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>2|hridtr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10558" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>20892</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                            <customfield id="customfield_23361" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Requested By</customfieldname>
                        <customfieldvalues>
                                

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            <customfield id="customfield_10053" key="com.atlassian.jira.ext.charting:timeinstatus">
                        <customfieldname>Time In Status</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                        <customfield id="customfield_22870" key="com.onresolve.jira.groovy.groovyrunner:scripted-field">
                        <customfieldname>Triagers</customfieldname>
                        <customfieldvalues>
                                

                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_14350" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>serverRank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>1|hrjojr:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                    </customfields>
    </item>
</channel>
</rss>