[SERVER-6776] Mongos take 100 % cpu intermittently Created: 16/Aug/12  Updated: 08/Mar/13  Resolved: 21/Sep/12

Status: Closed
Project: Core Server
Component/s: Sharding
Affects Version/s: 2.0.3
Fix Version/s: None

Type: Bug Priority: Major - P3
Reporter: jitendra Assignee: Spencer Brody (Inactive)
Resolution: Cannot Reproduce Votes: 0
Labels: mongos
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Linux (OEL)


Attachments: Text File mongos_log_16_8_12.log    
Operating System: Linux
Participants:

 Description   

Hi,

mongos intermittently takes 100% CPU.

Setup: 1 mongos, 10 mongod (shard) instances, 3 config servers.

I have attached the mongos logs.

Please tell me why mongos takes 100% CPU.

Thanks,
Jitendra R Verma



 Comments   
Comment by Spencer Brody (Inactive) [ 27/Aug/12 ]

Hi Jitendra,
Do you have the logs from the mongod processes from when this happened?

It would be very helpful for debugging this to have the output of "mongostat --discover" run against the mongos, and of "iostat -x 2" run on any machines experiencing the high CPU usage, the next time this happens.

You may also want to try upgrading to 2.0.7 to see if that helps.
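For example, something along these lines left running should capture both (the host name and output file names below are placeholders, not values from this ticket):

# Cluster-wide stats collected through the mongos (host/port are placeholders).
mongostat --discover --host mongos-host:27017 > mongostat.out 2>&1 &

# Extended disk statistics every 2 seconds on any machine showing the high CPU usage.
iostat -x 2 > iostat.out 2>&1 &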

Comment by jitendra [ 22/Aug/12 ]

Hi,

Now one of the mongod instances is taking 100% CPU.

Below is the serverStatus output from that mongod:

> db.serverStatus()
{
    "host" : "CSS-FU-2:55000",
    "version" : "2.0.3-rc0",
    "process" : "mongod",
    "uptime" : 1732,
    "uptimeEstimate" : 1703,
    "localTime" : ISODate("2012-08-22T14:50:53.776Z"),
    "globalLock" : {
        "totalTime" : 1732672663,
        "lockTime" : 337955639,
        "ratio" : 0.1950487511096607,
        "currentQueue" : {
            "total" : 0,
            "readers" : 0,
            "writers" : 0
        },
        "activeClients" : {
            "total" : 0,
            "readers" : 0,
            "writers" : 0
        }
    },
    "mem" : {
        "bits" : 64,
        "resident" : 5574,
        "virtual" : 521112,
        "supported" : true,
        "mapped" : 260272,
        "mappedWithJournal" : 520544
    },
    "connections" : {
        "current" : 19,
        "available" : 19981
    },
    "extra_info" : {
        "note" : "fields vary by platform",
        "heap_usage_bytes" : 543856,
        "page_faults" : 4805
    },
    "indexCounters" : {
        "btree" : {
            "accesses" : 67,
            "hits" : 67,
            "misses" : 0,
            "resets" : 0,
            "missRatio" : 0
        }
    },
    "backgroundFlushing" : {
        "flushes" : 28,
        "total_ms" : 3063,
        "average_ms" : 109.39285714285714,
        "last_ms" : 698,
        "last_finished" : ISODate("2012-08-22T14:50:01.908Z")
    },
    "cursors" : {
        "totalOpen" : 0,
        "clientCursors_size" : 0,
        "timedOut" : 0
    },
    "network" : {
        "bytesIn" : 603602034,
        "bytesOut" : 371782,
        "numRequests" : 4041
    },
    "opcounters" : {
        "insert" : 1619,
        "query" : 1,
        "update" : 0,
        "delete" : 1,
        "getmore" : 0,
        "command" : 2422
    },
    "asserts" : {
        "regular" : 0,
        "warning" : 0,
        "msg" : 0,
        "user" : 0,
        "rollovers" : 0
    },
    "writeBacksQueued" : false,
    "dur" : {
        "commits" : 941,
        "journaledMB" : 0.024576,
        "writeToDataFilesMB" : 0.062181,
        "compression" : 0.9999593115514506,
        "commitsInWriteLock" : 0,
        "earlyCommits" : 0,
        "timeMs" : {
            "dt" : 3000,
            "prepLogBuffer" : 0,
            "writeToJournal" : 13,
            "writeToDataFiles" : 0,
            "remapPrivateView" : 0
        },
        "journalCommitIntervalMs" : 2
    },
    "ok" : 1
}
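As a quick reading of the output above, globalLock.ratio is simply lockTime / totalTime (both reported in microseconds), so this mongod has spent roughly 19.5% of its uptime holding the global lock:

# lockTime / totalTime, using the values from the serverStatus output above
awk 'BEGIN { printf "lock ratio: %.4f\n", 337955639 / 1732672663 }'
# prints: lock ratio: 0.1950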

Please tell me why the mongod CPU goes to 100%.

Thanks,
Jitendra R Verma

Comment by Spencer Brody (Inactive) [ 17/Aug/12 ]

Unfortunately there isn't a lot to go on in the mongos log you attached. Without more information there's not much we can do to diagnose this. Is this still happening? Could you run "mongostat --discover" against the problematic mongos (and take note of the CPU breakdown between user and system time) the next time this occurs?

Comment by jitendra [ 17/Aug/12 ]

Hi,
Unfortunately I do not have that output. I have attached the log mongos_log_16_8_12.log; perhaps it can help.

Please tell me what I should do when mongos goes to 100% CPU.

Thanks,
Jitendra R Verma

Comment by Scott Hernandez (Inactive) [ 16/Aug/12 ]

Can you please attach mongostat output from before and during the high CPU usage? There are many things that might cause the process to use high CPU during normal workloads.

Also, what is the CPU breakdown? Is it all user time?
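For example (host name and file names below are placeholders), leaving these running will capture samples from before and during a spike, plus the user/system split:

# mongostat against the mongos every 5 seconds, written to a file so both the
# "before" and "during" samples are kept (host/port are placeholders).
mongostat --host mongos-host:27017 5 > mongostat.out 2>&1 &

# Batch-mode top records the %us (user) vs %sy (system) CPU breakdown over time.
top -b -d 5 > top.out 2>&1 &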

Generated at Thu Feb 08 03:12:40 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.