[SERVER-15984] Monitoring and performance improvement Created: 06/Nov/14 Updated: 03/Dec/14 Resolved: 03/Dec/14 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Diagnostics, Index Maintenance, Performance |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Question | Priority: | Major - P3 |
| Reporter: | Ritesh Gupta | Assignee: | Ramon Fernandez Marina |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Participants: |
| Description |
|
Dear team, we are using MongoDB as the backend for Cube to log streaming events, and we are running into performance issues when querying the database. When several queries are fired at once, they appear to execute almost serially and CPU usage climbs very high. Any pointers would be greatly appreciated. Regards, |
| Comments |
| Comment by Ramon Fernandez Marina [ 03/Dec/14 ] | |||
|
Please see the documentation for $query, which shows how it interacts with explain(). In particular, the documentation cautions against mixing query forms: if you use the $query format, you should not append cursor methods such as explain() to find(); to modify the query, use the meta-query operators (such as $explain) instead.
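For illustration only (the collection and field names below are placeholders, not taken from this ticket), the two query forms look roughly like this:

    // "events", "status" and "ts" are placeholder names used only for this example.
    // Wrapped form: the filter and the sort travel as one document under
    // $query/orderby; appending explain() to this is the mixed usage the docs warn about.
    db.events.find({ $query: { status: "open" }, orderby: { ts: 1 } }).explain()

    // Plain form: a filter plus cursor methods; this is the usual way to check
    // which index a query uses.
    db.events.find({ status: "open" }).sort({ ts: 1 }).explain()

The second form is the one to use when comparing index selection.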
I also see that you're running a 2.4 release. Logging was greatly improved in the 2.6 series, and slow-query log lines now carry additional information, including a summary of the plan that was used.
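As a rough illustration only (this line is invented for this comment; the namespace, connection id and timings are made up, not copied from a real server), a 2.6-style slow-query entry looks something like this:

    2014-12-03T12:00:00.000+0000 [conn42] query test.foo query: { x: 1 } planSummary: IXSCAN { x: 1 } ntoreturn:0 ntoskip:0 nscanned:10 nscannedObjects:10 keyUpdates:0 numYields:0 locks(micros) r:152000 nreturned:10 reslen:530 152ms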
A line like the one above shows an index scan of the { x: 1 } index, for example. You may want to try 2.6.5 in a test environment if you find this useful.

Finally, note that the SERVER project is for reporting bugs or feature suggestions for the MongoDB server. A support question like this one, which needs more back-and-forth discussion, is best posted on the mongodb-user group or on Stack Overflow with the mongodb tag, where it will reach a larger audience.
| Comment by Ritesh Gupta [ 07/Nov/14 ] | |||
|
Hi Ramon, thanks for your inputs. We turned on the profiler with a 100 ms slow-query threshold and made an odd observation. We are using Cube, so a Node.js front end is making the calls to MongoDB. Here is what I find in the logs:

    Fri Nov 7 13:38:12.152 [conn217] query cube_development.pl_info_metrics query: { $query: { i: false, _id.e: "median(pl_info(pageloadTime).eq(domain,'ritesh.com').eq(geography,'AG'))", _id.l: 86400000, _id.t: { $gte: new Date(1414713600000), $lt: new Date(1415404800000) }}, orderby: { _id.t: 1 }} ntoreturn:1000 ntoskip:0 nscanned:558326 scanAndOrder:1 keyUpdates:0 numYields: 4 locks(micros) r:2659501 nreturned:0 reslen:20 1372ms

As you can see, the query is wrapped in a $query key. When I run an explain with the $query wrapper I see a BasicCursor used, and when I run the same query with the wrapper removed I see a BTreeCursor used, so index usage is clearly being affected by this. Is this a bug in the Node.js driver? Are you aware of any such issues with Node.js and MongoDB? Here are the explain() calls I ran, first with the $query wrapper and then without it:

    > db.pl_info_events.find({ $query: { t: { $gte: new Date(1412726400000), $lt: new Date(1413331200000) }, "d.domain": "ritesh.com", "d.geography": "AW", "d.pageloadTime": { $exists: true }}, orderby: { t: 1 } }).explain()

    > db.pl_info_events.find({ t: { $gte: new Date(1412726400000), $lt: new Date(1413331200000) }, "d.domain": "ritesh.com", "d.geography": "AW", "d.pageloadTime": { $exists: true }, orderby: { t: 1 } }).explain()
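Purely as an illustration (this rewrite is mine, not something taken from the ticket), the logged metrics query above could be issued without the top-level $query/orderby wrapper by using find() plus sort(), which is the form the documentation recommends when appending cursor methods like explain():

    // Same predicate as the logged query, expressed as a plain filter;
    // sort() replaces the top-level orderby key.
    db.pl_info_metrics.find({
        i: false,
        "_id.e": "median(pl_info(pageloadTime).eq(domain,'ritesh.com').eq(geography,'AG'))",
        "_id.l": 86400000,
        "_id.t": { $gte: new Date(1414713600000), $lt: new Date(1415404800000) }
    }).sort({ "_id.t": 1 }).explain()

Comparing the explain() output of this form with the wrapped form should make it clear whether an index on { "_id.t": 1 } (assuming such an index exists) is actually being selected.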
| Comment by Ramon Fernandez Marina [ 06/Nov/14 ] | |||
|
ritesh.gupta@techvedika.com, by default all queries taking longer than 100ms appear in the logs, so you can see what mongod reports as "slow". You may want to experiment with slowOpThresholdMS values to get a better picture, or enable the profiler. |
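For reference, a minimal sketch of both options from the mongo shell (100 here is simply the default 100 ms threshold; adjust it to suit your workload):

    // Profile operations slower than 100 ms: they are written to the
    // system.profile collection of the current database and also logged.
    db.setProfilingLevel(1, 100)

    // Inspect the most recent profiled operations.
    db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()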