Core Server / SERVER-28108

Standalone Mongo Performance Severely Degraded

    • Type: Question
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: MMAPv1
    • Labels: None
    • Environment: Running in Linux AWS

The performance of our standalone MongoDB instance has suddenly become extremely slow. It had been running seamlessly for the past 3 months.

Yesterday I noticed roughly a 70% drop in how quickly we can iterate through a cursor.

I've increased the RAM on the box to 500 GB, even though memory didn't appear to be the issue. We know this collection should be sharded, but it is not. Our primary collection:

      db.maid_location.count();
      759848687
      

This machine is NOT using WiredTiger; it is running the MMAPv1 storage engine.
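
Since MMAPv1 relies on the OS page cache rather than a dedicated storage-engine cache, one thing I can check is how much of the mapped data is actually resident in RAM. A minimal sketch in the mongo shell (the mem figures are reported in megabytes):

      // MMAPv1 memory figures from serverStatus; resident/virtual/mapped are in MB
      var mem = db.serverStatus().mem;
      printjson({ residentMB: mem.resident, virtualMB: mem.virtual, mappedMB: mem.mapped });

If resident stays far below the data-plus-index size while queries are running, the working set is not being kept in the page cache.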

There is also a TTL index on the collection:

      db.maid_location.getIndexes();
      [
      	{
      		"v" : 1,
      		"key" : {
      			"_id" : 1
      		},
      		"name" : "_id_",
      		"ns" : "twine.maid_location"
      	},
      	{
      		"v" : 1,
      		"key" : {
      			"ts" : 1
      		},
      		"name" : "ts_1",
      		"ns" : "twine.maid_location"
      	},
      	{
      		"v" : 1,
      		"key" : {
      			"md" : 1
      		},
      		"name" : "md_1",
      		"ns" : "twine.maid_location"
      	},
      	{
      		"v" : 1,
      		"key" : {
      			"d" : 1
      		},
      		"name" : "d_1",
      		"ns" : "twine.maid_location",
      		"expireAfterSeconds" : 518400
      	}
      ]
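
For reference, the d_1 index above is equivalent to one created like this (a sketch; the original creation command isn't shown here). The 518400 seconds works out to 6 days, and the server's TTL monitor deletes expired documents in the background roughly once a minute:

      // TTL index on "d": documents expire 518400 seconds (6 days) after their "d" timestamp
      db.maid_location.createIndex({ d: 1 }, { expireAfterSeconds: 518400 });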
      

Here are the database stats:

      db.stats();
      {
      	"db" : "twine",
      	"collections" : 12,
      	"objects" : 837614104,
      	"avgObjSize" : 227.24632088573333,
      	"dataSize" : 190344723456,
      	"storageSize" : 278239156320,
      	"numExtents" : 229,
      	"indexes" : 17,
      	"indexSize" : 145874965600,
      	"fileSize" : 485028265984,
      	"nsSizeMB" : 16,
      	"extentFreeList" : {
      		"num" : 0,
      		"totalSize" : 0
      	},
      	"dataFileVersion" : {
      		"major" : 4,
      		"minor" : 22
      	},
      	"ok" : 1
      }
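
Converting those byte counts, dataSize is roughly 177 GiB and indexSize roughly 136 GiB, about 313 GiB combined, which should fit comfortably in 500 GB of RAM. A quick way to print the same figures from the shell (a sketch):

      // Scale the db.stats() byte counts to GiB to compare against the 500 GB of RAM
      var GiB = 1024 * 1024 * 1024;
      var s = db.stats();
      print("data:    " + (s.dataSize / GiB).toFixed(1) + " GiB");   // ~177 GiB
      print("indexes: " + (s.indexSize / GiB).toFixed(1) + " GiB");  // ~136 GiB
      print("files:   " + (s.fileSize / GiB).toFixed(1) + " GiB");   // ~452 GiB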
      

I was going to run a mongodump on the collection so that I can remove probably over half of what is in there. The query for the mongodump filters on the field "d", which is the same field the TTL index is on.
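
Before dumping, it is probably worth confirming that a filter on "d" actually uses the d_1 index rather than scanning the whole collection. A minimal check in the mongo shell (assuming a 3.0+ shell; the cutoff date here is a placeholder, not the real query):

      // Inspect the plan for a range filter on the TTL field "d"
      var cutoff = new Date("2017-02-18T00:00:00Z");  // placeholder cutoff
      db.maid_location.find({ d: { $lt: cutoff } }).explain("queryPlanner");
      // The winningPlan should show an IXSCAN on the d_1 index rather than a COLLSCAN.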

This is the output from the mongodump. Is that number the count of documents dumped so far? If so, the dump is running as slowly as my own process was, so this isn't going to help me. Is there any reason a database would suddenly slow down when iterating over a cursor? There is a ton of memory on this machine.

      2017-02-24T22:13:03.380+0000	twine.maid_location  13128025
      2017-02-24T22:13:06.380+0000	twine.maid_location  13128025
      2017-02-24T22:13:09.380+0000	twine.maid_location  13128025
      2017-02-24T22:13:12.380+0000	twine.maid_location  13128025
      

Assignee: Unassigned
Reporter: Krystal Flores (kflores)