<!-- 
RSS generated by JIRA (9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66) at Wed Feb 07 21:38:33 UTC 2024

It is possible to restrict the fields that are returned in this document by specifying the 'field' parameter in your request.
For example, to request only the issue key and summary append 'field=key&field=summary' to the URL of your request.
-->
<rss version="0.92" >
<channel>
    <title>MongoDB Jira</title>
    <link>https://jira.mongodb.org</link>
    <description>This file is an XML representation of an issue</description>
    <language>en-us</language>
    <build-info>
        <version>9.7.1</version>
        <build-number>970001</build-number>
        <build-date>13-04-2023</build-date>
    </build-info>


<item>
            <title>[CSHARP-1061] Mongo InsertBatch and IO read performance degradation</title>
                <link>https://jira.mongodb.org/browse/CSHARP-1061</link>
                <project id="10041" key="CSHARP">C# Driver</project>
                    <description>&lt;p&gt;So here is the use case: I have a locally saved large log file (&amp;gt;500 MB) which I read in predefined chunks in C#, and every time I get a chunk I insert it into a localhost Mongo instance, appending each chunk to the same collection. I use the &quot;InsertBatch&quot; method from MongoCollection.cs.&lt;br/&gt;
Previously I used SQLite for the same mechanism, and every chunk of approximately 10 MB had the following numbers on my machine:&lt;br/&gt;
1. Chunk read time = ~0.5 s&lt;br/&gt;
2. Chunk write to SQLite = ~3 s&lt;/p&gt;

&lt;p&gt;Now with Mongo I have gained better write performance, but my read performance has degraded drastically. Typical chunk numbers are now:&lt;br/&gt;
1. Chunk read time = ~3 s; for a few chunks (although the size is the same) it spikes beyond 10-15 s.&lt;br/&gt;
2. Write time = ~2 s&lt;/p&gt;

&lt;p&gt;So if I consider total time, performance has degraded with MongoDB because of the log reading time. My first question is how these can be related; my second is how to get rid of it.&lt;/p&gt;</description>
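<!--
The use case above (read a fixed-size chunk from a local log file, then InsertBatch it into MongoDB) can be sketched as follows. This is an illustrative Python stand-in, not the reporter's C# code; `insert_batch` is a hypothetical placeholder for the driver call, and the chunk size in the report was ~10 MB.

```python
import io
import time

def process_log(stream, chunk_size, insert_batch):
    """Read a log stream in fixed-size chunks and hand each chunk to a
    batch-insert callback, timing the read and write phases separately
    so neither pollutes the other's numbers."""
    read_s = write_s = 0.0
    while True:
        t0 = time.perf_counter()
        chunk = stream.read(chunk_size)   # the file-read phase
        t1 = time.perf_counter()
        if not chunk:
            break
        insert_batch(chunk)               # stands in for MongoCollection.InsertBatch
        read_s += t1 - t0
        write_s += time.perf_counter() - t1
    return read_s, write_s

# Tiny demonstration with an in-memory stream and a list as the "database".
batches = []
read_s, write_s = process_log(io.BytesIO(b"x" * 100), 30, batches.append)
```

Timing the two phases with separate counters matters later in this thread: InsertBatch is blocking, so a single timer around the whole loop body counts insert time as read time.
-->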
                <environment>Windows 64 bit. 8GB RAM. i5 2540M 2,6 GHz</environment>
        <key id="157372">CSHARP-1061</key>
            <summary>Mongo InsertBatch and IO read performance degradation</summary>
                <type id="3" iconUrl="https://jira.mongodb.org/secure/viewavatar?size=xsmall&amp;avatarId=14718&amp;avatarType=issuetype">Task</type>
                                            <priority id="3" iconUrl="https://jira.mongodb.org/images/icons/priorities/major.svg">Major - P3</priority>
                        <status id="6" iconUrl="https://jira.mongodb.org/images/icons/statuses/closed.png" description="The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.">Closed</status>
                    <statusCategory id="3" key="done" colorName="success"/>
                                    <resolution id="9">Done</resolution>
                                        <assignee username="-1">Unassigned</assignee>
                                    <reporter username="nilanjan">Nilanjan Dutta</reporter>
                        <labels>
                            <label>Performance</label>
                            <label>question</label>
                    </labels>
                <created>Wed, 10 Sep 2014 12:01:23 +0000</created>
                <updated>Fri, 5 Apr 2019 13:57:51 +0000</updated>
                            <resolved>Thu, 11 Sep 2014 13:18:45 +0000</resolved>
                                    <version>1.9</version>
                                                    <component>Performance</component>
                                        <votes>1</votes>
                                    <watches>3</watches>
                                                                                                                <comments>
                            <comment id="716633" author="nilanjan" created="Thu, 11 Sep 2014 13:21:25 +0000"  >&lt;p&gt;Thanks Craig, that&apos;d help. Yes I&apos;m running 2.6.4 server in my machine. Thanks for your input, hoping to get a resolution soon from the other forum.&lt;/p&gt;</comment>
                            <comment id="716628" author="craiggwilson" created="Thu, 11 Sep 2014 13:18:45 +0000"  >&lt;p&gt;Hi Nilanjan,&lt;/p&gt;

&lt;p&gt;As this is not a driver problem, having this discussion here isn&apos;t going to get the right people looking at it to help solve your issue. So, I&apos;m going to close this ticket.  However, I have a few suggestions for you. &lt;/p&gt;

&lt;p&gt;1. I&apos;d repost your question here: &lt;a href=&quot;https://groups.google.com/forum/#!forum/mongodb-user&quot; class=&quot;external-link&quot; target=&quot;_blank&quot; rel=&quot;nofollow noopener&quot;&gt;https://groups.google.com/forum/#!forum/mongodb-user&lt;/a&gt;. Include the information we discovered during our conversations, including that this is transient data, the mongo versions you are using, the fact that you are having IO contention between reading on disk and mongod, how big these files and your data is, etc...&lt;/p&gt;

&lt;p&gt;2. Make sure you are running server 2.6.4. There have been significant improvements related to IO and background flushes on Windows.&lt;/p&gt;

&lt;p&gt;Sorry I couldn&apos;t be more helpful. I&apos;ll look for your post on the groups site and contribute information that may be helpful for a more general audience to diagnose your problem and provide a solution.&lt;/p&gt;</comment>
                            <comment id="716612" author="nilanjan" created="Thu, 11 Sep 2014 13:00:32 +0000"  >&lt;p&gt;sc.exe create MongoDB binPath= &quot;\&quot;D:\MongoDB \bin\mongod.exe\&quot; --service --config=\&quot;D:\MongoDB\mongod.conf\&quot;&quot; DisplayName= &quot;MongoDB&quot; start= &quot;auto&quot;&lt;/p&gt;

&lt;p&gt;this is how I created the service&lt;/p&gt;</comment>
                            <comment id="716610" author="nilanjan" created="Thu, 11 Sep 2014 12:59:00 +0000"  >&lt;p&gt;Yes, it is temporarily stored in the DB; once the log window is closed from the UI, we plan to drop the table. I created a Windows service for Mongo, and it reads from the following config file settings:&lt;/p&gt;

&lt;p&gt;dbpath = D:\CAT\MongoDB\data\db&lt;br/&gt;
logpath = D:\CAT\MongoDB\log\mongo.log &lt;br/&gt;
logappend = false &lt;/p&gt;

&lt;p&gt;auth = true &lt;br/&gt;
quiet = true&lt;br/&gt;
nojournal = true&lt;/p&gt;

&lt;p&gt;Still I could observe the spikes coming up while read. Could you please guide if something is missing?&lt;/p&gt;</comment>
                            <comment id="716607" author="craiggwilson" created="Thu, 11 Sep 2014 12:47:31 +0000"  >&lt;p&gt;Ok. One more question: is this just transient data that can be reloaded, or does it need to persist for a long time? I&apos;m assuming it&apos;s just transient, so no need to care about failover or redundancy.&lt;/p&gt;

&lt;p&gt;If it is just transient data (essentially throw away data), then we can do some things. The first thing to try would be to turn off journaling. You can do this by starting mongod with --nojournal.&lt;/p&gt;

&lt;p&gt;I&apos;d also still like to see a very verbose log file during one of these runs.&lt;/p&gt;</comment>
                            <comment id="716587" author="nilanjan" created="Thu, 11 Sep 2014 12:25:12 +0000"  >&lt;p&gt;Yes, this is how it is going to be in production. Mongo is going to be packaged on every client&apos;s laptop along with the software, and the client will also have the log files dumped on their laptop to analyze.&lt;/p&gt;</comment>
                            <comment id="716584" author="craiggwilson" created="Wed, 10 Sep 2014 12:22:34 +0000"  >&lt;p&gt;Hi Nilanjan, I thought this might be what was happening, but you&apos;ve confirmed it. You are simply competing for IO resources with the actual Mongo database. I can&apos;t tell you off the top of my head what this read is, as it could be many things: the 60-second flush to disk, the 100 ms journal write, or simply allocating some space because it sees you are writing a lot of data and wants to pre-allocate some more. Perhaps you could provide a very verbose log file (-vvvvv) from the server; it would give some insight (some of these can be disabled or tweaked).&lt;/p&gt;

&lt;p&gt;But rather than trying to disable things, let&apos;s first see if it matters.  Is this how you are going to run in production - reading large files off your local disk and putting them in a database also running off the local disk?&lt;/p&gt;</comment>
                            <comment id="716518" author="nilanjan" created="Thu, 11 Sep 2014 09:58:25 +0000"  >&lt;p&gt;Please take a look from timestamp 2:30; there is evidence of overlapping read and write processes. Wondering if that adds up to any additional read time.&lt;/p&gt;</comment>
                            <comment id="716487" author="nilanjan" created="Thu, 11 Sep 2014 08:13:50 +0000"  >&lt;p&gt;This morning (IST) I used ProcMon to map what happens when the file is read. I observed that, in between the file reads, for the calls which take longer there is a read operation by mongod.exe on the DB file. This DB read time adds to the total file read time. Could you please explain why this read is required, and whether we can disable it somehow? I think this may well be the root of the problem. I&apos;m attaching a snapshot from ProcMon for your reference too.&lt;/p&gt;</comment>
                            <comment id="715602" author="nilanjan" created="Wed, 10 Sep 2014 13:40:30 +0000"  >&lt;p&gt;ANTS performance profiling result. Shows higher log reading time towards the later chunks.&lt;/p&gt;</comment>
                            <comment id="715597" author="craiggwilson" created="Wed, 10 Sep 2014 13:32:58 +0000"  >&lt;p&gt;Certainly interested. Feel free to upload it to the ticket.&lt;/p&gt;</comment>
                            <comment id="715591" author="nilanjan" created="Wed, 10 Sep 2014 13:30:31 +0000"  >&lt;p&gt;I see. I was speculating about an IO lock. I did a performance profiling of the flow with the ANTS performance profiler, and found that &quot;Waiting for I/O operation&quot; is what makes reading the log file take more time than expected compared to SQLite.&lt;/p&gt;

&lt;p&gt;Total write (InsertBatch) of the 500 MB log file in Mongo = 95 s,&lt;br/&gt;
whereas in SQLite it is about 130 s.&lt;/p&gt;

&lt;p&gt;Total read time in similar fashion for the log file in the Mongo environment = &amp;gt;100 s,&lt;br/&gt;
whereas in the SQLite environment = ~45 s.&lt;/p&gt;

&lt;p&gt;Would you be interested in the profiling result I have saved in the Mongo environment?&lt;/p&gt;</comment>
                            <comment id="715573" author="craiggwilson" created="Wed, 10 Sep 2014 13:18:33 +0000"  >&lt;p&gt;No. There is no reason that inserting documents would cause the reading of a file to be slow.  That being said, the driver doesn&apos;t do asynchronous socket IO. We also don&apos;t talk to the file system, so there should be no overlap. (we are moving to async socket IO in our 2.0 release).&lt;/p&gt;

&lt;p&gt;Also, the InsertBatch call is blocking. So, if it takes a while to insert a lot of documents and your timer is running for reading the files, then it is going to include the time during writing to mongo.&lt;/p&gt;
</comment>
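<!--
The point above, that InsertBatch is blocking, so a timer that spans both the read and the insert attributes insert time to "reading", can be demonstrated with a small sketch. Python here is a stand-in for the C# code, and time.sleep stands in for a slow blocking insert.

```python
import time

def timed_loop(chunk_makers, insert):
    """Return (naive_s, read_s, insert_s). The naive timer starts before
    the read and stops after the insert, so the blocking insert is
    silently counted as 'read' time."""
    naive_s = read_s = insert_s = 0.0
    for make_chunk in chunk_makers:
        t0 = time.perf_counter()
        data = make_chunk()        # the file-read phase (fast)
        t1 = time.perf_counter()
        insert(data)               # blocking insert (InsertBatch analogue, slow)
        t2 = time.perf_counter()
        naive_s += t2 - t0         # what one surrounding timer would report
        read_s += t1 - t0
        insert_s += t2 - t1
    return naive_s, read_s, insert_s

# Three instant "reads" plus three 20 ms "inserts": the naive timer
# reports essentially all of the insert time as read time.
naive_s, read_s, insert_s = timed_loop(
    [lambda: b"chunk"] * 3, lambda data: time.sleep(0.02))
```

-->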
                            <comment id="715557" author="nilanjan" created="Wed, 10 Sep 2014 13:10:07 +0000"  >&lt;p&gt;I would like to know if C# driver issues any locks until it completes inserting async. Is there any event which we can listen to, to find out if the async insertion is complete?&lt;/p&gt;</comment>
                            <comment id="715553" author="nilanjan" created="Wed, 10 Sep 2014 13:08:17 +0000"  >&lt;p&gt;I am bringing in the reading part of the file because, with MongoDB, when I do InsertBatch the subsequent call to read the file takes more time. It looks like InsertBatch happens asynchronously, hogging the IO and limiting the resources for reading the file on the same hard drive.&lt;br/&gt;
We can accept this if it doesn&apos;t take too much time, but sometimes it spikes up to 30-40 seconds when it would normally take only 2 seconds to read. I tried adding a lock to the reading, but it didn&apos;t help.&lt;/p&gt;</comment>
                            <comment id="715539" author="craiggwilson" created="Wed, 10 Sep 2014 12:57:04 +0000"  >&lt;p&gt;Thanks. I&apos;m confused about why you are referring to this code in the context of the C# Driver. None of these classes (FileStream, BufferedStream, StreamReader) have anything to do with the C# driver. So, what part of the &quot;Chunk Read Time&quot; could be slowed down by the C# driver?&lt;/p&gt;</comment>
                            <comment id="715536" author="nilanjan" created="Wed, 10 Sep 2014 12:54:10 +0000"  >&lt;p&gt;LogEvent class is like a data structure with properties compliant with the log file, e.g.:&lt;/p&gt;

&lt;pre&gt;
[BsonDefaultValue(0), BsonIgnoreIfNull, BsonIgnoreIfDefault]
public long Index
{
    get { return this.relatedEventIndex; }
    private set { this.relatedEventIndex = value; }
}

[BsonIgnoreIfNull, BsonDefaultValue(0), BsonIgnoreIfDefault]
public int SecondTimeFraction
{
    get { return this.secondTimeFraction; }
    private set { this.secondTimeFraction = value; }
}

[BsonIgnoreIfNull]
public DateTime SecondTimeStamp
{
    get { return this.secondTimeStamp.AddMilliseconds((double) this.secondTimeFraction); }
    private set { this.secondTimeStamp = value; }
}
&lt;/pre&gt;

&lt;p&gt;And though we have complex code to read the file, here is the main snippet, which should be helpful. Also note that this code is the same in the SQLite environment:&lt;/p&gt;

&lt;pre&gt;
FileStream reader = null;
reader.Seek(this.fileIndex -= bufferSize, SeekOrigin.End);
BufferedStream bs = new BufferedStream(reader);
StreamReader sr = new StreamReader(bs);
sr.ReadBlock(bytes, 0, bufferSize + incompleteLogBytes);
&lt;/pre&gt;</comment>
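<!--
The C# fragment above walks backwards through the file in bufferSize steps relative to SeekOrigin.End (note that, as posted, it is only an excerpt: `reader` is initialized to null before Seek is called). A runnable Python equivalent of the same backwards fixed-size chunk read might look like this; the function name and sizes are illustrative.

```python
import io
import os

def read_chunks_backwards(stream, size):
    """Yield fixed-size chunks starting from the end of the stream,
    mirroring reader.Seek(this.fileIndex -= bufferSize, SeekOrigin.End)."""
    stream.seek(0, os.SEEK_END)
    remaining = stream.tell()
    offset = 0
    while remaining > 0:
        step = min(size, remaining)
        offset -= step                      # move the window back one chunk
        stream.seek(offset, os.SEEK_END)
        yield stream.read(step)
        remaining -= step

# The last chunk of the stream comes out first.
chunks = list(read_chunks_backwards(io.BytesIO(b"abcdefgh"), 3))
```

-->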
                            <comment id="715532" author="craiggwilson" created="Wed, 10 Sep 2014 12:47:57 +0000"  >&lt;p&gt;Could you provide the code for the reading, as well as what your LogEvent class looks like?&lt;/p&gt;</comment>
                            <comment id="715531" author="nilanjan" created="Wed, 10 Sep 2014 12:45:46 +0000"  >&lt;p&gt;Yes Craig, we have taken that risk into consideration, since we need faster insert performance. By &quot;chunk read&quot; I meant file I/O, not a DB read; I read it through C#&apos;s StreamReader.&lt;/p&gt;</comment>
                            <comment id="715526" author="craiggwilson" created="Wed, 10 Sep 2014 12:40:52 +0000"  >&lt;p&gt;I asked about the query because you said &quot;Chunk Read Time&quot;... Perhaps you could clarify what you mean.&lt;/p&gt;

&lt;p&gt;Also, by using WriteConcern.Unacknowledged, you are stating that you don&apos;t care if the chunks were successfully uploaded.&lt;/p&gt;</comment>
                            <comment id="715525" author="nilanjan" created="Wed, 10 Sep 2014 12:37:01 +0000"  >&lt;p&gt;Thanks Craig for your response. I fail to understand the &quot;query&quot; part, though. However, I&apos;m posting the code snippet through which I call InsertBatch. I used the C# driver to give me these wrappers:&lt;/p&gt;

&lt;pre&gt;
MongoDatabase db;
MongoServer mongoServer;

var mongo = new MongoClient();
mongoServer = mongo.GetServer();

db = mongoServer.GetDatabase(&quot;logDB&quot;);
db.Drop();
db = mongoServer.GetDatabase(&quot;logDB&quot;);

MongoCollection&amp;lt;LogEvent&amp;gt; logCollection = db.GetCollection&amp;lt;LogEvent&amp;gt;(&quot;LogEvent&quot; + repositoryId);

MongoInsertOptions options = new MongoInsertOptions();
options.WriteConcern = WriteConcern.Unacknowledged;
options.CheckElementNames = false;

using (mongoServer.RequestStart(db))
{
    var result = logCollection.InsertBatch(logEvents, options);
}
&lt;/pre&gt;

&lt;p&gt;Here &quot;LogEvents&quot; is a custom type which represents the logs read from the log file.&lt;/p&gt;</comment>
                            <comment id="715514" author="craiggwilson" created="Wed, 10 Sep 2014 12:18:53 +0000"  >&lt;p&gt;Hi Nilanjan,&lt;/p&gt;

&lt;p&gt;Sorry you are having issues.  Could you run an explain on the query from the shell and put the results here?  That would look something like this:&lt;/p&gt;

&lt;p&gt;{{db.chunks.find({x: 1}).explain();}}&lt;/p&gt;

&lt;p&gt;where &quot;chunks&quot; is your collection name; replace the query with whatever it is you are running.&lt;/p&gt;</comment>
                    </comments>
                    <attachments>
                            <attachment id="51946" name="Capture.PNG" size="266817" author="nilanjan" created="Thu, 11 Sep 2014 08:14:14 +0000"/>
                            <attachment id="51949" name="Logfile.PML" size="181260754" author="nilanjan" created="Thu, 11 Sep 2014 09:58:25 +0000"/>
                            <attachment id="51872" name="ReadProfile-latest.app8results" size="86033581" author="nilanjan" created="Wed, 10 Sep 2014 13:40:30 +0000"/>
                    </attachments>
                <subtasks>
                    </subtasks>
                <customfields>
                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_15850" key="com.atlassian.jira.plugins.jira-development-integration-plugin:devsummary">
                        <customfieldname>Development</customfieldname>
                        <customfieldvalues>
                            
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    <customfield id="customfield_12550" key="com.pyxis.greenhopper.jira:gh-lexo-rank">
                        <customfieldname>Rank</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>2|hs23mn:</customfieldvalue>

                        </customfieldvalues>
                    </customfield>
                                                                <customfield id="customfield_10558" key="com.pyxis.greenhopper.jira:gh-global-rank">
                        <customfieldname>Rank (Obsolete)</customfieldname>
                        <customfieldvalues>
                            <customfieldvalue>136918</customfieldvalue>
                        </customfieldvalues>
                    </customfield>
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            </customfields>
    </item>
</channel>
</rss>