Details
- Bug
- Resolution: Done
- Major - P3
- None
- 2.6.0
- None
- Win 8.1 x64, i7, 16GB of RAM, 10GB available for Mongo.
- Windows
Description
I have a db and 2 processes that aggregate the data. They do the following:
1. find the oldest record in collection B, if any
2. select some records from collection A for aggregation
3. aggregate (outside Mongo) and insert the result into collection B
The issue was not present in 2.4; it is 2.6 only. Both collections have proper indices ({s: 1, p: 1, d: 1} and {d: 1, s: 1}), and I query by s and p plus sort by d. No sharding.
When 1 worker is doing its job, everything is fine.
When 2 workers are doing the job, the indices seem to stop working: the find() operation becomes very slow (at least 50 times slower), details below. The disk is barely used the whole time.
When I kill one worker and only one remains, everything works as usual, i.e. fast.
I'm running Windows; data + indices are slightly bigger than RAM.
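For reference, one pass of the worker loop described above can be sketched as below. The field names (s, p, d) and the query shape match the log entries; the helper names, the collection handles, and the pymongo-style calls are illustrative assumptions, not my actual code.

```python
# Sketch of one worker pass, assuming pymongo-style collection objects.
# find_spec() builds the exact query shape seen in the log:
# { s: ..., p: ... } sorted by d ascending, returning one document.

def find_spec(s, p):
    """Filter, sort, and limit for the slow query from the log."""
    return {"filter": {"s": s, "p": p}, "sort": [("d", 1)], "limit": 1}


def worker_pass(coll_a, coll_b, s, p):
    # 1. find the oldest record in collection B, if any
    oldest = coll_b.find_one({"s": s, "p": p}, sort=[("d", 1)])

    # 2. select some records from collection A for aggregation
    spec = find_spec(s, p)
    rows = list(
        coll_a.find(spec["filter"]).sort(spec["sort"]).limit(spec["limit"])
    )

    # 3. aggregate outside Mongo and insert the result into collection B
    if rows:
        coll_b.insert_one({"s": s, "p": p, "d": rows[0]["d"]})
```

With one worker, each such find returns its single document quickly; with two workers running this concurrently, the same find shows nscanned in the millions in the log.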
Console log while 2 workers are running, and then one is killed:
2014-04-14T18:49:12.406+0200 [conn106] query kwitdb.daily.products query: { $query: { s: 3259, p: 551 }, $orderby: { d: 1 } } planSummary: IXSCAN { s: 1.0, d: 1.0 } ntoreturn:1 ntoskip:0 nscanned:4441837 nscannedObjects:4441836 keyUpdates:0 numYields:34642 locks(micros) r:12437794 nreturned:1 reslen:158 6518ms
2014-04-14T18:49:16.774+0200 [conn106] query kwitdb.daily.products query: { $query: { s: 3259, p: 552 }, $orderby: { d: 1 } } planSummary: IXSCAN { s: 1.0, d: 1.0 } ntoreturn:1 ntoskip:0 nscanned:3001134 nscannedObjects:3001133 keyUpdates:0 numYields:23445 locks(micros) r:8551123 nreturned:1 reslen:158 4319ms
2014-04-14T18:49:21.931+0200 [conn106] query kwitdb.daily.products query: { $query: { s: 3259, p: 556 }, $orderby: { d: 1 } } planSummary: IXSCAN { s: 1.0, d: 1.0 } ntoreturn:1 ntoskip:0 nscanned:3536380 nscannedObjects:3536379 keyUpdates:0 numYields:27541 locks(micros) r:10049331 nreturned:1 reslen:158 5081ms
2014-04-14T18:49:26.713+0200 [clientcursormon] mem (MB) res:5723 virt:86414
2014-04-14T18:49:26.713+0200 [clientcursormon] mapped (incl journal view):85946
2014-04-14T18:49:26.713+0200 [clientcursormon] connections:6
2014-04-14T18:49:30.227+0200 [conn106] query kwitdb.daily.products query: { $query: { s: 3259, p: 582 }, $orderby: { d: 1 } } planSummary: IXSCAN { s: 1.0, d: 1.0 } ntoreturn:1 ntoskip:0 nscanned:4441850 nscannedObjects:4441849 keyUpdates:0 numYields:34688 locks(micros) r:12389182 nreturned:1 reslen:158 6361ms
2014-04-14T18:49:35.156+0200 [conn106] query kwitdb.daily.products query: { $query: { s: 3259, p: 583 }, $orderby: { d: 1 } } planSummary: IXSCAN { s: 1.0, d: 1.0 } ntoreturn:1 ntoskip:0 nscanned:3486873 nscannedObjects:3486872 keyUpdates:0 numYields:27241 locks(micros) r:9676540 nreturned:1 reslen:158 4884ms
2014-04-14T18:49:41.679+0200 [conn106] query kwitdb.daily.products query: { $query: { s: 3259, p: 603 }, $orderby: { d: 1 } } planSummary: IXSCAN { s: 1.0, d: 1.0 } ntoreturn:1 ntoskip:0 nscanned:3721757 nscannedObjects:3721756 keyUpdates:0 numYields:29036 locks(micros) r:10548485 nreturned:1 reslen:165 5340ms
2014-04-14T18:49:47.215+0200 [conn106] query kwitdb.daily.products query: { $query: { s: 3259, p: 607 }, $orderby: { d: 1 } } planSummary: IXSCAN { s: 1.0, d: 1.0 } ntoreturn:1 ntoskip:0 nscanned:3721759 nscannedObjects:3721758 keyUpdates:0 numYields:28665 locks(micros) r:10396267 nreturned:1 reslen:158 5254ms
2014-04-14T18:49:47.216+0200 [conn106] end connection 192.168.1.19:61191 (5 connections now open)
2014-04-14T18:50:10.551+0200 [conn99] insert kwitdb.weekly.products ninserted:1 keyUpdates:0 numYields:0 locks(micros) w:19 196ms
2014-04-14T18:51:10.727+0200 [conn99] insert kwitdb.weekly.products ninserted:1 keyUpdates:0 numYields:0 locks(micros) w:31 352ms
Could you please explain this? Is it fixable, or is it just the global write lock?