Type: Bug
Resolution: Duplicate
Priority: Major - P3
Affects Version/s: 2.2.0-rc0
Component/s: Diagnostics
Environment: linux and osx
Operating System: ALL
First noticed this in the http console, but repro'd it in the shell.
Open two different connections to mongod and insert a million simple docs from each:
> for (i = 0; i < 1000000; i++) { db.foo.insert({x:i}); }
From a third shell, filter the currentOp output:
> while ( 1 ) { db.currentOp().inprog.forEach(function(x) { if ( x.secs_running == 1271310 ) { printjson(x); } }) }
You should see lots of ops reported with the bad secs_running value:
{
        "opid" : 783608,
        "active" : true,
        "secs_running" : 1271310,
        "op" : "insert",
        "ns" : "",
        "query" : {
        },
        "client" : "127.0.0.1:54088",
        "desc" : "conn14",
        "threadId" : "0x112460000",
        "connectionId" : 14,
        "locks" : {
                "^" : "w",
                "^foo" : "W"
        },
        "waitingForLock" : false,
        "numYields" : 0,
        "lockStats" : {
                "timeLockedMicros" : {
                },
                "timeAcquiringMicros" : {
                        "r" : NumberLong(0),
                        "w" : NumberLong(3)
                }
        }
}
secs_running is calculated in curop.h by subtracting the _start recorded for the currentOp (one call to curTimeMicros64()) from the current curTimeMicros64() value. Perhaps the current time value is less than the _start value, causing the unsigned subtraction to wrap around?
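For illustration, a minimal sketch of that failure mode (not the actual curop.h code), assuming the timestamps are unsigned 64-bit microsecond counts and the wall clock steps backwards between the two samples: the subtraction wraps to an enormous value instead of going negative.

// Minimal sketch, not the real curop.h implementation: with unsigned 64-bit
// microsecond timestamps, a clock that reads earlier than _start makes the
// elapsed-time subtraction wrap around to a huge positive value.
#include <cstdint>
#include <iostream>

int main() {
    uint64_t start = 5000000;  // hypothetical _start = curTimeMicros64() at op start
    uint64_t now   = 4000000;  // a later sample that happens to read 1 second earlier

    uint64_t elapsedMicros = now - start;        // wraps to 2^64 - 1000000
    uint64_t secsRunning   = elapsedMicros / 1000000;

    std::cout << secsRunning << std::endl;       // ~18446744073708, not a small number
    return 0;
}

A monotonic clock source, as proposed in SERVER-4740, would avoid the backwards step entirely.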
duplicates: SERVER-4740 Use monotonic clock sources for Timer (Closed)
related to: SERVER-2886 Many commands being executed for 1271310319ms, mongo (Closed)