[SERVER-10738] Why is my mongodb so slow? Created: 11/Sep/13  Updated: 10/Dec/14  Resolved: 03/Oct/13

Status: Closed
Project: Core Server
Component/s: Index Maintenance, Performance, Querying
Affects Version/s: 2.4.1
Fix Version/s: None

Type: Question Priority: Major - P3
Reporter: dongweiming Assignee: Unassigned
Resolution: Done Votes: 0
Labels: performance, query
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Ubuntu 12.04.2 LTS 3.5.0-23-generic x86_64


Participants:

 Description   

Our company's production environment uses MongoDB as a web database, but I have found that even a simple query or update (with an index) takes a few hundred ms, and some take even longer. Are we not using it correctly?

Here is part of the log (most entries look like this):

Wed Sep 11 16:47:36.187 [conn46044] update sandbox.status_5167852c208eb5a2f1c2b260_201309 query: { _id: "244593027441387" } update: { $set: { rcount: 10, hot: true, ccount: 2 } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) w:614810 614ms
Wed Sep 11 16:47:36.187 [conn24456] update sandbox.status query: { _id: "333820030946999" } update: { $set: { rsince_timestamp: 1378873552 }, $inc: { sm_flash_factor: 0, repost_count: 0 } } idhack:1 nupdated:1 fastmod:1 keyUpdates:0 locks(micros) w:123 554ms
Wed Sep 11 16:47:36.188 [conn24054] update sandbox.influence query: { date: new Date(1378857600000), id: "ftchinese" } update: { $inc: { nctc: 0 } } nscanned:1 nupdated:1 fastmod:1 keyUpdates:0 locks(micros) w:169 521ms
Wed Sep 11 16:47:36.188 [conn39887] insert sandbox.follow_relations ninserted:1 keyUpdates:0 locks(micros) w:120 488ms
Wed Sep 11 16:47:45.104 [conn40782] update sandbox.status_50054a70b3158aa4709ebfc5_201309 query: { _id: "228751112143775" } update: { $set: { rcount: 0, hot: false, ccount: 0 } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) w:397837 397ms
Wed Sep 11 16:47:45.104 [conn39887] update sandbox.followers query: { _id: "lonetan" } update: { $set: { id: "lonestan", city: "", verified: false, activeness: 0.005800625374694182, followers_count: 167, status_created_at: new Date(1251590400000), location: "unkown", province: "31", description: "", friends_count: 1277, tags: {}, profile_image_url: "http://mat1.gtimg.com/www/mb/images/head_50.jpg", sm_flwr_quality: 0.002194834672492721, screen_name: "longes", supdate_time: new Date(1378857600000), favourites_count: 0, name: "lonetan", url: "http://www.google.com/", gender: "m", created_at: new Date(1251590400000) }, $addToSet: { sm_uids: "htcwildfire" } } nscanned:1 nupdated:1 keyUpdates:0 locks(micros) w:446 342ms

Take the first line as an example: you can see locks(micros) w:614810, and the operation took 614ms!! These are the indexes:
> db.status_5167852c208eb5a2f1c2b260_201309.getIndexes()
[
    {
        "v" : 1,
        "key" : { "_id" : 1 },
        "ns" : "sandbox.status_5167852c208eb5a2f1c2b260_201309",
        "name" : "id"
    },
    {
        "v" : 1,
        "key" : { "cdate" : -1 },
        "ns" : "sandbox.status_5167852c208eb5a2f1c2b260_201309",
        "name" : "cdate_-1"
    },
    {
        "v" : 1,
        "key" : { "idate" : -1 },
        "ns" : "sandbox.status_5167852c208eb5a2f1c2b260_201309",
        "name" : "idate_-1"
    }
]

> db.status_5167852c208eb5a2f1c2b260_201309.find({_id: "244593027441387"}).explain()
{
    "cursor" : "BtreeCursor id",
    "isMultiKey" : false,
    "n" : 1,
    "nscannedObjects" : 1,
    "nscanned" : 1,
    "nscannedObjectsAllPlans" : 1,
    "nscannedAllPlans" : 1,
    "scanAndOrder" : false,
    "indexOnly" : false,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "millis" : 0,
    "indexBounds" : {
        "start" : { "_id" : "244593027441387" },
        "end" : { "_id" : "244593027441387" }
    },
    "server" : "bj-43:27018"
}
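
Since explain() reports 0 ms for the equivalent find, one hypothetical cross-check (not part of the original report) is to time the same update directly from the mongo shell, to see whether it is slow in isolation or only under concurrent load. The sketch below simply re-applies the same values from the first log line:

// Hypothetical timing check: reissue the logged update (same _id, same values)
// and measure wall-clock time in the shell.
var t0 = new Date();
db.status_5167852c208eb5a2f1c2b260_201309.update(
    { _id: "244593027441387" },
    { $set: { rcount: 10, hot: true, ccount: 2 } }
);
print("update took " + (new Date() - t0) + " ms");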

The collection has 17384 entries.

> db.status_5167852c208eb5a2f1c2b260_201309.find().count()
17384

> db.status_5167852c208eb5a2f1c2b260_201309.stats()
{
    "ns" : "sandbox_opinion.status_5167852c208eb5a2f1c2b260_201309",
    "count" : 17384,
    "size" : 29005104,
    "avgObjSize" : 1668.4942475839853,
    "storageSize" : 37797888,
    "numExtents" : 8,
    "nindexes" : 3,
    "lastExtentSize" : 15290368,
    "paddingFactor" : 1.0040000000002294,
    "systemFlags" : 1,
    "userFlags" : 0,
    "totalIndexSize" : 1970416,
    "indexSizes" : { "_id_" : 948416, "cdate_-1" : 506912, "idate_-1" : 515088 },
    "ok" : 1
}
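
One detail worth noting in the log above: the slowest update appears to hold the write lock for essentially its whole duration (w:614810 micros out of 614ms), while other operations report tiny lock times (e.g. w:123) yet still take ~500ms, which suggests they are mostly queued behind other writers. A minimal sketch for confirming this, assuming it is acceptable to enable the database profiler temporarily on the sandbox database:

// Minimal sketch: record every operation on this database slower than 100 ms.
use sandbox
db.setProfilingLevel(1, 100)

// ...let the workload run, then inspect the slowest recent operations.
// In 2.4 the profile documents include lockStats (timeLockedMicros /
// timeAcquiringMicros), which separates "slow while holding the lock"
// from "slow while waiting for it".
db.system.profile.find({ millis: { $gt: 100 } }).sort({ ts: -1 }).limit(5).pretty()

// Turn profiling back off when finished.
db.setProfilingLevel(0)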



 Comments   
Comment by Daniel Pasette (Inactive) [ 03/Oct/13 ]

The explain() output you've posted shows the query takes 0 ms and is using the index properly. There are many things that can cause your query to temporarily perform poorly. This JIRA project is for reporting bugs in MongoDB; the user group is a better venue for general questions: https://groups.google.com/forum/#!forum/mongodb-user

I recommend using MMS for tracking the health of your mongodb cluster over time and for identifying problems. It's a free service. You can sign in with your JIRA username and password here: http://mms.mongodb.com.
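
As a rough way to see what else is competing for the write lock at the moment an operation is slow, a minimal sketch using db.currentOp() from the mongo shell (the one-second threshold is an arbitrary choice):

// Sketch: list in-progress operations running for at least one second,
// to see what might be holding or waiting for the write lock.
db.currentOp().inprog.forEach(function (op) {
    if (op.secs_running >= 1) {
        printjson({
            opid: op.opid,
            op: op.op,
            ns: op.ns,
            secs_running: op.secs_running,
            waitingForLock: op.waitingForLock
        });
    }
});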

Comment by dongweiming [ 12/Sep/13 ]

Sorry, our deployment does not use a replica set, only a single server, and it is not in MMS. Below is a sample of mongostat output:

dwm@social-43:~$ mongostat --port 27018
connected to: 127.0.0.1:27018
insert  query update delete getmore command flushes mapped  vsize    res faults            locked db idx miss %     qr|qw   ar|aw  netIn netOut  conn       time 
    10    104     73     *0       0   265|0       0   125g   251g  11.4g      0 sandbox:6.3%          0       0|0     0|0   247k   297k   532   22:05:15 
     7    206     91     *0       0   333|0       0   125g   251g  11.4g      0 sandbox:6.8%          0       0|0     0|1   290k   118k   532   22:05:16 
     8    134     70     *0       0   247|0       0   125g   251g  11.4g      0  sandbox:5.3%          0       0|0     0|0   237k   264k   532   22:05:18 
    10    109     78     *0       0   280|0       0   125g   251g  11.4g      1  sandbox:6.4%          0       0|0     0|0   267k    80k   532   22:05:19 
    13    153     81     *0       0   313|0       0   125g   251g  11.4g      3  sandbox:5.6%          0       0|0     0|0   286k    90k   532   22:05:20 
     3    137     86     *0       0   300|0       0   125g   251g  11.4g      0  sandbox:6.0%          0       0|0     0|1   254k    87k   532   22:05:21 
    19    114     76     *0       0   268|0       0   125g   251g  11.4g      0  sandbox:5.0%          0       0|0     0|0   246k    79k   532   22:05:22 
    16    132     91     *0       0   313|0       0   125g   251g  11.4g      0  sandbox:5.7%          0       0|0     0|0   284k    86k   532   22:05:23 
    37    120     97     *0       0   326|0       0   125g   251g  11.4g      0  sandbox:4.8%          0       0|0     0|0   310k   107k   532   22:05:24 
insert  query update delete getmore command flushes mapped  vsize    res faults             locked db idx miss %     qr|qw   ar|aw  netIn netOut  conn       time 
    20     72     46     *0       0   244|0       0   125g   251g  11.4g      0  sandbox:6.0%          0       0|0     0|0   265k    63k   532   22:05:25 
     9     63     47     *0       0   265|0       0   125g   251g  11.4g      0  sandbox:5.5%          0       0|0     0|0   284k    81k   532   22:05:26 
    19     78     58     *0       0   288|0       0   125g   251g  11.4g      0  sandbox:4.8%          0       0|1     0|1   307k    76k   532   22:05:28 
    24     71     45     *0       0   258|0       0   125g   251g  11.4g      0  sandbox:4.2%          0       0|0     0|0   289k    69k   532   22:05:29 
    28     55     37     *0       0   262|0       0   125g   251g  11.4g      0  sandbox:2.6%          0       0|0     0|0   293k    71k   532   22:05:30 
    31     29     19     *0       0   227|0       0   125g   251g  11.4g      0  sandbox:4.0%          0       0|0     0|0   276k    57k   532   22:05:31 
    15    649     51     *0       0   271|0       0   125g   251g  11.4g      2  sandbox:6.0%          0       0|0     0|0   331k   459k   532   22:05:32 
    25     88     39     *0       0   253|0       0   125g   251g  11.4g      2  sandbox:6.3%          0       1|0     0|1   288k    80k   532   22:05:33 
    27    145     56     *0       0   301|0       0   125g   251g  11.4g      0  sandbox:6.2%          0       0|0     0|0   301k    84k   532   22:05:34 
    21     79     51     *0       0   271|0       0   125g   251g  11.4g      0  sandbox:4.8%          0       0|0     0|0   295k    69k   532   22:05:35 
insert  query update delete getmore command flushes mapped  vsize    res faults             locked db idx miss %     qr|qw   ar|aw  netIn netOut  conn       time 
    12    149     83     *0       0   314|0       0   125g   251g  11.4g      0                .:9.2%          0       0|0     0|1   298k   106k   532   22:05:36 
    12    171    111     *0       0   364|0       1   125g   251g  11.4g      0  sandbox:5.8%          0       0|0     0|0   337k   121k   532   22:05:37 
    15    236    108     *0       0   346|0       0   125g   251g  11.4g      0  sandbox:7.3%          0       0|0     1|0   312k   173k   532   22:05:38 
     9    131     82     *0       0   311|0       0   125g   251g  11.4g      0  sandbox:6.8%          0       0|0     0|0   308k   103k   532   22:05:39 
     7    123     78     *0       0   285|0       0   125g   251g  11.4g      0  sandbox:6.1%          0       0|0     0|0   284k    91k   532   22:05:40 
     5    125     63     *0       0   273|0       0   125g   251g  11.4g      0  sandbox:5.5%          0       0|0     0|0   275k    88k   532   22:05:41 
     6    137     56     *0       0   243|0       0   125g   251g  11.4g      0  sandbox:4.3%          0      26|2     0|2   257k   123k   532   22:05:42 
     7    150     89     *0       0   327|0       0   125g   251g  11.4g      0               .:33.6%          0       0|0     0|0   316k   104k   532   22:05:43 
     7    179     95     *0       0   317|0       0   125g   251g  11.4g      0  sandbox:5.4%          0       0|0     0|0   290k    99k   532   22:05:44 
     4    263    105     *0       0   339|0       0   125g   251g  11.4g      0  sandbox:7.0%          0       0|0     0|0   313k   189k   532   22:05:45 
insert  query update delete getmore command flushes mapped  vsize    res faults             locked db idx miss %     qr|qw   ar|aw  netIn netOut  conn       time 
    *0    160     83     *0       0   293|0       0   125g   251g  11.4g      0  sandbox:6.1%          0       0|0    0|11   285k   149k   532   22:05:46 
    *0   1162    107     *0       0   350|0       0   125g   251g  11.4g      0  sandbox:9.7%          0       0|0     0|0   392k   839k   532   22:05:47 
    *0    116     79     *0       0   304|0       0   125g   251g  11.4g      0  sandbox:6.0%          0       0|0     0|1   312k   103k   532   22:05:48 
    *0    113     69     *0       0   283|0       0   125g   251g  11.4g      0  sandbox:4.8%          0       0|0     0|0   285k   317k   532   22:05:49 
    *0    917     67     *0       0   285|0       0   125g   251g  11.5g      0  sandbox:6.1%          0       0|0     0|0   375k   754k   532   22:05:50 
    *0    269     69     *0       0   281|0       0   125g   251g  11.4g      0  sandbox:7.0%          0       0|0     0|0   310k   213k   532   22:05:51 
    *0    102     66     *0       0   282|0       0   125g   251g  11.4g      0  sandbox:6.0%          0       0|0     0|0   297k   186k   532   22:05:52 
    *0    158     77     *0       0   300|0       0   125g   251g  11.4g      0  sandbox:5.5%          0       0|0     1|0   315k   149k   532   22:05:53 
     8    137     84     *0       0   336|0       0   125g   251g  11.4g      0  sandbox:6.6%          0       0|0     0|0   334k   335k   530   22:05:55 
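
For reference, a minimal sketch to pull the counters behind the mongostat columns above directly from serverStatus (field names as they appear in MongoDB 2.4):

// Sketch: read the counters corresponding to the mongostat columns shown above.
var s = db.serverStatus();
printjson({
    residentMB: s.mem.resident,                        // resident memory ("res" column)
    mappedMB: s.mem.mapped,                            // memory-mapped data files ("mapped" column)
    pageFaults: s.extra_info.page_faults,              // cumulative page faults ("faults" column)
    avgFlushMs: s.backgroundFlushing.average_ms,       // average time per background data flush
    lastFlushMs: s.backgroundFlushing.last_ms,         // duration of the most recent flush
    queuedWriters: s.globalLock.currentQueue.writers   // writers queued on the lock ("qw")
});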

By the way, can the MMS service help us find and pinpoint the problem?

Comment by James Wahlin [ 11/Sep/13 ]

Is your replica set in MMS? If so can you share a link?

If not, can you run mongostat while experiencing the slowness and attach the output to this ticket?
