Type: Bug
Resolution: Done
Priority: Major - P3
Affects Version/s: None
Component/s: None
Labels: None
ALL
I am testing QA-175 for the case where the working set size exceeds available RAM, but I can't come up with results that make sense; I'm not sure whether this is a bug or a flaw in my procedure.
On a Linux machine with these memory characteristics (free, values in KB):

                 total       used       free     shared    buffers     cached
    Mem:        435132     355152      79980          0      12440     185944
    -/+ buffers/cache:     156768     278364
    Swap:       262140      19200     242940
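To make the sizes comparable, here is a quick conversion sketch (plain JavaScript, runnable under node; the values are copied from the free output above and the collection stats below):

```javascript
// Convert the `free` figures (reported in KB) to bytes and compare
// against the collection's data size.
var KB = 1024;
var totalRamKB = 435132;         // "Mem: total" from free
var dataSizeBytes = 1527603312;  // collection "size" from the stats

var totalRamBytes = totalRamKB * KB;
console.log(totalRamBytes);                  // 445575168 bytes, about 425 MB
console.log(dataSizeBytes / totalRamBytes);  // ~3.43: the data is over 3x RAM
```

So the collection should comfortably exceed physical RAM on this box.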
I loaded up the enron data set; its stats are:

    "count" : 501513,
    "size" : 1527603312,
    "avgObjSize" : 3045.9894598943597,
    "storageSize" : 1605427200,
I tried to get the db to access the entire data set, about 1.4 GB, which should exceed RAM size and thus make the entire collection the "working set".
To do this I queried for the whole collection repeatedly (using both true and false for snapshot; snapshot mode walks the _id index, which should force it to be paged in):

    function query_data(snapshot) {
        var counter = 0;
        if (snapshot) {
            x = db.messages.find().snapshot();  // snapshot scans via the _id index
        } else {
            x = db.messages.find();
        }
        while (x.hasNext()) {
            counter++;
            var y = x.next();
            if (counter % 10000 == 0) {
                print(counter);
            }
        }
        print(counter, "documents read.");
    }
Afterwards, when I run workingSet, the result is 264134 pagesInMemory, which (at 4 KB per page) works out to 1081892864 bytes, about 1 GB; well under the actual data size (about 1.4 GB).
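The pagesInMemory arithmetic above can be spelled out as follows (a sketch assuming 4 KB pages, the usual Linux page size):

```javascript
// pagesInMemory -> bytes, compared against the collection's storageSize.
var PAGE_SIZE = 4096;           // assumed 4 KB pages
var pagesInMemory = 264134;     // from the workingSet output
var storageSize = 1605427200;   // storageSize from the collection stats

var bytesInMemory = pagesInMemory * PAGE_SIZE;
console.log(bytesInMemory);               // 1081892864, just over 1 GB
console.log(bytesInMemory / storageSize); // ~0.67: only two thirds resident
```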
I can't seem to get pagesInMemory to exceed that value; I tried other queries, as well as .count() and db.runCommand( ), but none seem to have an effect.