Type: Task
Resolution: Done
Priority: Major - P3
Affects Version/s: 1.9
Component/s: Performance
Environment: Windows 64-bit, 8 GB RAM, Intel Core i5-2540M @ 2.6 GHz
So here is the use case: I have a locally saved large log file (>500 MB) which I read in predefined chunks in C#, and every time I get a chunk I insert it into a localhost MongoDB instance, appending each new chunk to the same collection. I use the "InsertBatch" method from MongoCollection.cs.
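For concreteness, here is a minimal sketch of that read-and-insert loop using the legacy 1.x C# driver (the same one that exposes MongoCollection.InsertBatch). The file path and the "logs"/"entries" database and collection names are placeholders, not taken from my actual code:

using System.Collections.Generic;
using System.IO;
using MongoDB.Bson;
using MongoDB.Driver;

class ChunkLoader
{
    const int ChunkSize = 10 * 1024 * 1024; // ~10 MB per chunk, as described above

    static void Main()
    {
        // Placeholder connection string, database, and collection names.
        var collection = new MongoClient("mongodb://localhost")
            .GetServer()
            .GetDatabase("logs")
            .GetCollection("entries");

        using (var reader = new StreamReader(@"C:\logs\big.log")) // placeholder path
        {
            var batch = new List<BsonDocument>();
            long bytesInChunk = 0;
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                batch.Add(new BsonDocument { { "line", line } });
                bytesInChunk += line.Length; // rough byte count (chars; ignores encoding)
                if (bytesInChunk >= ChunkSize)
                {
                    collection.InsertBatch(batch); // append this chunk to the same collection
                    batch.Clear();
                    bytesInChunk = 0;
                }
            }
            if (batch.Count > 0)
                collection.InsertBatch(batch); // flush the final partial chunk
        }
    }
}

ChunkSize is the only knob here; each InsertBatch call sends one whole chunk to the local mongod.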
Previously I used SQLite for the same mechanism, and each chunk of roughly 10 MB took the following times on my machine:
1. Chunk read time: ~0.5 s
2. Chunk write to SQLite: ~3 s
Now with MongoDB the write performance has improved, but the read performance has degraded drastically. Typical per-chunk numbers are now (a standalone sketch for timing just the reads follows this list):
1. Chunk read time: ~3 s (for a few chunks, although the size is the same, it spikes to beyond 10-15 s)
2. Write time: ~2 s
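To check whether the read spikes reproduce with MongoDB out of the loop entirely, here is a minimal standalone sketch that times only the raw file reads; again the path is a placeholder:

using System;
using System.Diagnostics;
using System.IO;

class ReadTiming
{
    const int ChunkSize = 10 * 1024 * 1024; // same ~10 MB chunk size as above

    static void Main()
    {
        var buffer = new byte[ChunkSize];
        using (var fs = File.OpenRead(@"C:\logs\big.log")) // placeholder path
        {
            int chunkIndex = 0;
            int bytesRead;
            do
            {
                var watch = Stopwatch.StartNew();
                // Read may return fewer bytes than requested; fine for a rough timing sketch.
                bytesRead = fs.Read(buffer, 0, buffer.Length);
                watch.Stop();
                if (bytesRead > 0)
                    Console.WriteLine("chunk {0}: {1} bytes read in {2} ms",
                        chunkIndex++, bytesRead, watch.ElapsedMilliseconds);
            } while (bytesRead > 0);
        }
    }
}

If the raw reads stay flat at ~0.5 s here but spike in the combined loop, that would suggest the slowdown comes from interaction with the running mongod rather than from the file I/O itself.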
So if I consider total time, performance has degraded with MongoDB because of the log reading time. My first question is how the MongoDB inserts can be related to the file read slowdown at all; my second is how to get rid of it.