<!-- 
RSS generated by JIRA (9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66) at Thu Feb 08 08:52:42 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>MongoDB Jira</title>
    <link>https://jira.mongodb.org</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.7.1</version>
        <build-number>970001</build-number>
        <build-date>13-04-2023</build-date>
    </build-info>


<item>
            <title>[JAVA-622] Looking for performance tips loading results from a find() with the java driver.</title>
                <link>https://jira.mongodb.org/browse/JAVA-622</link>
                <project id="10006" key="JAVA">Java Driver</project>
                    <description>&lt;p&gt;I had a question about improving the performance of loading data from Mongo.&lt;br/&gt;
I&apos;m doing a query as follows:&lt;/p&gt;

&lt;p&gt;    val prefixString = &quot;^&quot; + Pattern.quote(path);&lt;br/&gt;
    val prefixPattern: Pattern = Pattern.compile(prefixString);&lt;br/&gt;
    val query: BasicDBObject = new BasicDBObject(ID_FIELD_NAME, prefixPattern);&lt;br/&gt;
    val cursor = this.collection.find(query).batchSize(10000);&lt;br/&gt;
    val arr = cursor.toArray();&lt;/p&gt;

&lt;p&gt;I&apos;m using the 2.8.0 java driver (even though the code is written in scala).&lt;/p&gt;

&lt;p&gt;When I do an &quot;explain&quot; of this query, I get the following:&lt;/p&gt;

&lt;p&gt;{ &quot;cursor&quot; : &quot;BtreeCursor _id_ multi&quot; , &quot;nscanned&quot; : 5020 , &quot;nscannedObjects&quot; : 5020 , &quot;n&quot; : 5020 , &quot;millis&quot; : 23 , &quot;nYields&quot; : 0 , &quot;nChunkSkips&quot; : 0 , &quot;isMultiKey&quot; : false , &quot;indexOnly&quot; : false , &quot;indexBounds&quot; : { &quot;_id&quot; : [ [ &quot;&quot; , { }] , [ { &quot;$regex&quot; : &quot;^\\Q\\E&quot; , &quot;$options&quot; : &quot;&quot;} , { &quot;$regex&quot; : &quot;^\\Q\\E&quot; , &quot;$options&quot; : &quot;&quot;} ]]} , &quot;allPlans&quot; : [ { &quot;cursor&quot; : &quot;BtreeCursor _id_ multi&quot; , &quot;indexBounds&quot; : { &quot;_id&quot; : [ [ &quot;&quot; , { }] , [ { &quot;$regex&quot; : &quot;^\\Q\\E&quot; , &quot;$options&quot; : &quot;&quot;} , { &quot;$regex&quot; : &quot;^\\Q\\E&quot; , &quot;$options&quot; : &quot;&quot;} ]]}}] , &quot;oldPlan&quot; : { &quot;cursor&quot; : &quot;BtreeCursor _id_ multi&quot; , &quot;indexBounds&quot; : { &quot;_id&quot; : [ [ &quot;&quot; , { }] , [ { &quot;$regex&quot; : &quot;^\\Q\\E&quot; , &quot;$options&quot; : &quot;&quot;} , { &quot;$regex&quot; : &quot;^\\Q\\E&quot; , &quot;$options&quot; : &quot;&quot;} ]]}}}&lt;/p&gt;

&lt;p&gt;The &quot;explain&quot; says it took 23 milliseconds, but the actual time it takes to do the toArray is closer to 600 ms. This surprises me as I&apos;m doing this testing on localhost, so I would expect the data transfer to go quickly. What can I do to speed this operation up? I want to load all query results into memory as quickly as possible. I took a look in Wireshark and the total data is only 180k, so I&apos;d be surprised if the data transfer were the only issue.&lt;/p&gt;

&lt;p&gt;Thanks!&lt;/p&gt;</description>
                <environment>2.8.0 java driver. Mac OS X (development) &amp; CentOS (production)</environment>
        <key id="47277">JAVA-622</key>
            <summary>Looking for performance tips loading results from a find() with the java driver.</summary>
                <type id="3" iconUrl="https://jira.mongodb.org/secure/viewavatar?size=xsmall&amp;avatarId=14718&amp;avatarType=issuetype">Task</type>
                                            <priority id="3" iconUrl="https://jira.mongodb.org/images/icons/priorities/major.svg">Major - P3</priority>
                        <status id="6" iconUrl="https://jira.mongodb.org/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="9">Done</resolution>
                                        <assignee username="-1">Unassigned</assignee>
                                    <reporter username="startupandrew">Andrew Lee</reporter>
                        <labels>
                    </labels>
                <created>Thu, 16 Aug 2012 03:41:50 +0000</created>
                <updated>Wed, 11 Sep 2019 19:13:05 +0000</updated>
                            <resolved>Mon, 20 Aug 2012 19:32:57 +0000</resolved>
                <votes>0</votes>
                <watches>0</watches>
                <comments>
                            <comment id="155076" author="startupandrew" created="Mon, 20 Aug 2012 20:58:03 +0000"  >&lt;p&gt;I submitted a question and it just disappeared. I can&apos;t find it anywhere in&lt;br/&gt;
the history / haven&apos;t gotten any replies. I probably just did something&lt;br/&gt;
wrong.&lt;/p&gt;




&lt;p&gt;&amp;#8211; &lt;br/&gt;
Andrew Lee&lt;br/&gt;
Founder, Firebase&lt;br/&gt;
&lt;a href=&quot;http://twitter.com/startupandrew&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://twitter.com/startupandrew&lt;/a&gt;&lt;/p&gt;</comment>
                            <comment id="155067" author="jeff.yemin" created="Mon, 20 Aug 2012 20:48:23 +0000"  >&lt;p&gt;What sort of trouble did you have?&lt;/p&gt;</comment>
                            <comment id="155058" author="startupandrew" created="Mon, 20 Aug 2012 20:34:53 +0000"  >&lt;p&gt;Ok. I originally tried to post to the list but had some trouble so I was told to come here. I&apos;m stopping by the 10gen PA offices today to try to get this all figured out instead. Thanks for your help so far.&lt;/p&gt;</comment>
                            <comment id="155027" author="jeff.yemin" created="Mon, 20 Aug 2012 19:19:38 +0000"  >&lt;p&gt;It&apos;s pretty clear that this is not an issue with the Java driver, so I&apos;m going to close this.  I suggest that you post your question to the mongodb-user Google group, which is a better forum for resolving issues like this.&lt;/p&gt;

&lt;p&gt;You&apos;ll probably be asked to provide mongostat/iostat numbers during your query runs.  That will help engineers to determine the best solution.  It may be that you will have to shard your cluster to achieve the scalability that you&apos;re looking for, but it&apos;s not clear yet.&lt;/p&gt;

</comment>
                            <comment id="155019" author="startupandrew" created="Mon, 20 Aug 2012 18:56:34 +0000"  >&lt;p&gt;One thing I should note is that our _id index is massive (see stats above). I see some stuff online saying that the index needs to fit in memory for performance reasons. Is this likely the source of our problem? If so, is there any way to shrink our index? It&apos;s almost as big as our data. We want to use Mongo to store large amounts of data (terabytes), so keeping it all in memory isn&apos;t really practical.&lt;/p&gt;</comment>
                            <comment id="155007" author="startupandrew" created="Mon, 20 Aug 2012 18:26:30 +0000"  >&lt;p&gt;The 14 second number was for the &quot;warm production server&quot; example I sent. Here&apos;s a more detailed example of one that takes around 26 seconds:&lt;/p&gt;

&lt;p&gt;I&apos;m running this from the command line:&lt;/p&gt;

&lt;p&gt;time mongodump --host 10.181.97.165 -d firebase_data -c roll20 -q &quot;{\&quot;_id\&quot;:/^campaign-12688-XkUuoQBmBoRdqIH3TC1mBw/}&quot; -o datadump2 -u firebase -p &quot;******&quot;&lt;br/&gt;
connected to: 10.181.97.165&lt;br/&gt;
DATABASE: firebase_data	 to 	datadump2/firebase_data&lt;br/&gt;
	firebase_data.roll20 to datadump2/firebase_data/roll20.bson&lt;br/&gt;
		200/96142887	0%&lt;br/&gt;
		 41660 objects&lt;/p&gt;

&lt;p&gt;real	0m26.772s&lt;br/&gt;
user	0m0.004s&lt;br/&gt;
sys	0m0.060s&lt;/p&gt;

&lt;p&gt;If I delete the output and run the command again I get:&lt;/p&gt;

&lt;p&gt;time mongodump --host 10.181.97.165 -d firebase_data -c roll20 -q &quot;{\&quot;_id\&quot;:/^campaign-12688-XkUuoQBmBoRdqIH3TC1mBw/}&quot; -o datadump2 -u firebase -p &quot;*******&quot;&lt;br/&gt;
connected to: 10.181.97.165&lt;br/&gt;
DATABASE: firebase_data	 to 	datadump2/firebase_data&lt;br/&gt;
	firebase_data.roll20 to datadump2/firebase_data/roll20.bson&lt;br/&gt;
		 41660 objects&lt;/p&gt;

&lt;p&gt;real	0m0.391s&lt;br/&gt;
user	0m0.000s&lt;br/&gt;
sys	0m0.060s&lt;/p&gt;

&lt;p&gt;So it appears that the second time I run the command everything is way faster.  This implies to me that the network is not the issue and this is a problem on the mongo side - probably with Mongo loading stuff off disk.&lt;/p&gt;

&lt;p&gt;I&apos;ve emailed the BSON data to you so you can take a look. &lt;/p&gt;

&lt;p&gt;What can be done to speed this up? This &quot;prefix&quot; query we&apos;re doing is actually the only query we ever do on our data, so I want to do whatever I can to optimize for this use case. We really need the 391ms number and not the 26s number. &lt;/p&gt;

&lt;p&gt;Thanks!&lt;/p&gt;</comment>
                            <comment id="154485" author="jeff.yemin" created="Fri, 17 Aug 2012 19:06:05 +0000"  >&lt;p&gt;You can do a BSON dump with the bsondump tool: &lt;a href=&quot;http://docs.mongodb.org/manual/reference/bsondump/&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;http://docs.mongodb.org/manual/reference/bsondump/&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The original description in this issue said 600ms.  That&apos;s the number I was working with.  I agree that 14000ms is a different story.  Under what circumstances did that occur, and how are they different from the original run that took 600ms?  The only difference that you mentioned is that the latter was with a warmed up server.&lt;/p&gt;</comment>
                            <comment id="154471" author="startupandrew" created="Fri, 17 Aug 2012 18:18:11 +0000"  >&lt;p&gt;What&apos;s an easy way to get a BSON dump of the data (command line or otherwise)? If I can do it from the Java client, then I could simply time how long it takes to dump BSON vs. how long it takes to load all of the data, and figure out if that&apos;s the issue.&lt;/p&gt;

&lt;p&gt;That said I would be very surprised if this is the issue. I can&apos;t picture how decoding a few megs of BSON could take 14 seconds.&lt;/p&gt;</comment>
                            <comment id="154311" author="jeff.yemin" created="Fri, 17 Aug 2012 13:30:34 +0000"  >&lt;p&gt;I suspect the bottleneck is the speed of UTF-8 string decoding of the BSON, but hard to say for sure without having your data and reproducing locally. Is one CPU pegged during the period of the test?&lt;/p&gt;

&lt;p&gt;If possible, please attach a BSON dump of the ~14K documents that match the query, so we can investigate further. &lt;/p&gt;</comment>
                            <comment id="154126" author="startupandrew" created="Thu, 16 Aug 2012 20:43:50 +0000"  >&lt;p&gt;The documents are all very simple. They simply store a &quot;d&quot; element at the root which contains a number, boolean, or string. They can also contain an optional &quot;p&quot; element. There is no nesting of data. We&apos;re sort of using Mongo as a key-value store. &lt;/p&gt;

&lt;p&gt;Here are some sample documents:&lt;/p&gt;
{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/campaign/initiativepage/&quot;, &quot;d&quot; : false }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/campaign/lastmodified/&quot;, &quot;d&quot; : NumberLong(0) }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/campaign/playerpageid/&quot;, &quot;d&quot; : &quot;7F73B956-71C3-4C5B-8C72-576EE4F15EA4&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/campaign/snapping_increment/&quot;, &quot;d&quot; : NumberLong(1) }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/avatar/&quot;, &quot;d&quot; : &quot;http://files.d20.io/images/1433/med.png?1335737429&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/0E7F0D1B-48B9-4B89-B848-76ECC494D3E1/avatar/&quot;, &quot;d&quot; : &quot;http://files.d20.io/images/1464/med.png?1335737697&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/0E7F0D1B-48B9-4B89-B848-76ECC494D3E1/id/&quot;, &quot;d&quot; : &quot;0E7F0D1B-48B9-4B89-B848-76ECC494D3E1&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/0E7F0D1B-48B9-4B89-B848-76ECC494D3E1/name/&quot;, &quot;d&quot; : &quot;Six of Hearts&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/0E7F0D1B-48B9-4B89-B848-76ECC494D3E1/placement/&quot;, &quot;d&quot; : NumberLong(99) }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/110C9DF1-360D-40CC-ADAB-1D99F7FC302A/avatar/&quot;, &quot;d&quot; : &quot;http://files.d20.io/images/1466/med.png?1335737714&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/110C9DF1-360D-40CC-ADAB-1D99F7FC302A/id/&quot;, &quot;d&quot; : &quot;110C9DF1-360D-40CC-ADAB-1D99F7FC302A&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/110C9DF1-360D-40CC-ADAB-1D99F7FC302A/name/&quot;, &quot;d&quot; : &quot;Eight of Hearts&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/110C9DF1-360D-40CC-ADAB-1D99F7FC302A/placement/&quot;, &quot;d&quot; : NumberLong(99) }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/1252C24A-32B6-499E-A212-355ACAF3C732/avatar/&quot;, &quot;d&quot; : &quot;http://files.d20.io/images/1465/med.png?1335737705&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/1252C24A-32B6-499E-A212-355ACAF3C732/id/&quot;, &quot;d&quot; : &quot;1252C24A-32B6-499E-A212-355ACAF3C732&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/1252C24A-32B6-499E-A212-355ACAF3C732/name/&quot;, &quot;d&quot; : &quot;Seven of Hearts&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/1252C24A-32B6-499E-A212-355ACAF3C732/placement/&quot;, &quot;d&quot; : NumberLong(99) }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/13401808-636C-41AA-B7F0-A34D7ED987BF/avatar/&quot;, &quot;d&quot; : &quot;http://files.d20.io/images/1472/med.png?1335737762&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/13401808-636C-41AA-B7F0-A34D7ED987BF/id/&quot;, &quot;d&quot; : &quot;13401808-636C-41AA-B7F0-A34D7ED987BF&quot; }

{ &quot;_id&quot; : &quot;campaign-100-Dj7RkJck1aBczmif5NCXjA/decks/A778E120-672D-49D0-BAF8-8646DA3D3FAC/cards/13401808-636C-41AA-B7F0-A34D7ED987BF/name/&quot;, &quot;d&quot; : &quot;Ace of Hearts&quot; }




&lt;p&gt;We are basically storing big trees of data in Mongo by storing each node in the tree as a mongo document where the ID is a fully-qualified path to the node. In this way we have very large IDs and indexes but much smaller amounts of data.&lt;/p&gt;</comment>
                            <comment id="154125" author="startupandrew" created="Thu, 16 Aug 2012 20:41:18 +0000"  >&lt;p&gt;Here are the stats for the database:&lt;br/&gt;
db.stats()&lt;br/&gt;
{&lt;br/&gt;
	&quot;db&quot; : &quot;firebase_data&quot;,&lt;br/&gt;
	&quot;collections&quot; : 567,&lt;br/&gt;
	&quot;objects&quot; : 105508415,&lt;br/&gt;
	&quot;avgObjSize&quot; : 174.40280224093974,&lt;br/&gt;
	&quot;dataSize&quot; : 18400963236,&lt;br/&gt;
	&quot;storageSize&quot; : 31936827328,&lt;br/&gt;
	&quot;numExtents&quot; : 1268,&lt;br/&gt;
	&quot;indexes&quot; : 565,&lt;br/&gt;
	&quot;indexSize&quot; : 24340295392,&lt;br/&gt;
	&quot;fileSize&quot; : 66473426944,&lt;br/&gt;
	&quot;nsSizeMB&quot; : 256,&lt;br/&gt;
	&quot;ok&quot; : 1&lt;br/&gt;
}&lt;/p&gt;

&lt;p&gt;And for the particular collection I&apos;m querying.&lt;/p&gt;

&lt;p&gt;&amp;gt; db.roll20.stats()&lt;br/&gt;
{&lt;br/&gt;
	&quot;ns&quot; : &quot;firebase_data.roll20&quot;,&lt;br/&gt;
	&quot;count&quot; : 92469485,&lt;br/&gt;
	&quot;size&quot; : 15554540464,&lt;br/&gt;
	&quot;avgObjSize&quot; : 168.21268620669835,&lt;br/&gt;
	&quot;storageSize&quot; : 18356551632,&lt;br/&gt;
	&quot;numExtents&quot; : 39,&lt;br/&gt;
	&quot;nindexes&quot; : 1,&lt;br/&gt;
	&quot;lastExtentSize&quot; : 2146426864,&lt;br/&gt;
	&quot;paddingFactor&quot; : 1,&lt;br/&gt;
	&quot;flags&quot; : 1,&lt;br/&gt;
	&quot;totalIndexSize&quot; : 22545344528,&lt;br/&gt;
	&quot;indexSizes&quot; : {&lt;br/&gt;
		&quot;_id_&quot; : 22545344528&lt;br/&gt;
	},&lt;br/&gt;
	&quot;ok&quot; : 1&lt;br/&gt;
}&lt;/p&gt;</comment>
                            <comment id="154103" author="scotthernandez" created="Thu, 16 Aug 2012 19:32:54 +0000"  >&lt;p&gt;Can you provide an example document, or the data in a bson mongodump file? Do you store large arrays of values in your documents, or deeply nested fields?&lt;/p&gt;

&lt;p&gt;Can you provide the stats for your collection?&lt;/p&gt;</comment>
                            <comment id="154089" author="startupandrew" created="Thu, 16 Aug 2012 19:07:14 +0000"  >&lt;p&gt;Is there a way for me to instrument the query so it tells me the total size of the data transferred over the wire for the query?&lt;/p&gt;</comment>
                            <comment id="154088" author="startupandrew" created="Thu, 16 Aug 2012 19:06:42 +0000"  >&lt;p&gt;I verified that I can transfer data between machines at ~27 MB / second, so I don&apos;t think bandwidth is the issue here.&lt;/p&gt;</comment>
                            <comment id="154080" author="startupandrew" created="Thu, 16 Aug 2012 18:47:36 +0000"  >&lt;p&gt;This is a &quot;cold&quot; run, you&apos;re right. I&apos;m still seeing poor performance for warm boxes as well though. Here&apos;s data from a &quot;warm&quot; query on our production servers:&lt;br/&gt;
Total Documents Returned: 14303&lt;br/&gt;
Total time: 14763ms&lt;br/&gt;
Output from Explain:&lt;br/&gt;
{ &quot;cursor&quot; : &quot;BtreeCursor _id_ multi&quot; , &quot;nscanned&quot; : 14304 , &quot;nscannedObjects&quot; : 14303 , &quot;n&quot; : 14303 , &quot;millis&quot; : 30 , &quot;nYields&quot; : 0 , &quot;nChunkSkips&quot; : 0 , &quot;isMultiKey&quot; : false , &quot;indexOnly&quot; : false , &quot;indexBounds&quot; : { &quot;_id&quot; : [ [ &quot;campaign-16055-g0d9F0cjEGRPVrVtC9xNCg/&quot; , &quot;campaign-16055-g0d9F0cjEGRPVrVtC9xNCg0&quot;] , [ { &quot;$regex&quot; : &quot;^\\Qcampaign-16055-g0d9F0cjEGRPVrVtC9xNCg/\\E&quot;} , { &quot;$regex&quot; : &quot;^\\Qcampaign-16055-g0d9F0cjEGRPVrVtC9xNCg/\\E&quot;} ]]} , &quot;allPlans&quot; : [ { &quot;cursor&quot; : &quot;BtreeCursor _id_ multi&quot; , &quot;indexBounds&quot; : { &quot;_id&quot; : [ [ &quot;campaign-16055-g0d9F0cjEGRPVrVtC9xNCg/&quot; , &quot;campaign-16055-g0d9F0cjEGRPVrVtC9xNCg0&quot;] , [ { &quot;$regex&quot; : &quot;^\\Qcampaign-16055-g0d9F0cjEGRPVrVtC9xNCg/\\E&quot;} , { &quot;$regex&quot; : &quot;^\\Qcampaign-16055-g0d9F0cjEGRPVrVtC9xNCg/\\E&quot;} ]]}}] , &quot;oldPlan&quot; : { &quot;cursor&quot; : &quot;BtreeCursor _id_ multi&quot; , &quot;indexBounds&quot; : { &quot;_id&quot; : [ [ &quot;campaign-16055-g0d9F0cjEGRPVrVtC9xNCg/&quot; , &quot;campaign-16055-g0d9F0cjEGRPVrVtC9xNCg0&quot;] , [ { &quot;$regex&quot; : &quot;^\\Qcampaign-16055-g0d9F0cjEGRPVrVtC9xNCg/\\E&quot;} , { &quot;$regex&quot; : &quot;^\\Qcampaign-16055-g0d9F0cjEGRPVrVtC9xNCg/\\E&quot;} ]]}}}&lt;/p&gt;

&lt;p&gt;I&apos;m estimating the total size of data transferred at between 3-4 megabytes. In production mongo is on a separate physical box, but they&apos;re in the same data center and have fast network connections so I would expect 3-4M to be transferred quickly (or is there tuning I need to do to make this happen?). &lt;/p&gt;

&lt;p&gt;I would be more likely to chalk this up to a network issue if the write performance were equally slow, but I seem to be able to write data to mongo far faster than I can read it in this way.&lt;/p&gt;</comment>
                            <comment id="153896" author="jeff.yemin" created="Thu, 16 Aug 2012 10:33:53 +0000"  >&lt;p&gt;Did you warm up your app with a bunch of queries before starting your timing?  It&apos;s possible that if this is the first query you&apos;re doing, the time includes JIT compilation, opening the socket, etc.  &lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                    <customfield id="customfield_15850" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                    <customfield id="customfield_12550" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>2|hrhayf:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10558" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>14575</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                </customfields>
    </item>
</channel>
</rss>