Core Server / SERVER-22087

Mongodump oplog not working on a large database.

    • Type: Bug
    • Resolution: Done
    • Priority: Critical - P2
    • Affects Version/s: 3.2.0
    • Component/s: Admin, Tools
    • Environment:
      Ubuntu 14.04

      Hi All,

      I am not able to take an incremental oplog dump. I get no error messages: the connection succeeds and an operation is recorded (it shows up in db.currentOp()), but nothing happens, and I am not sure why.

      I have executed the same command on a smaller database, where everything works fine with no problems at all, but the same command on a larger database (around 10-11 billion records, increasing day by day) does not work.

      The command is mentioned below:

      mongodump --host $MONGODB_HOST:$MONGODB_PORT --authenticationDatabase admin -u $MONGODB_USERNAME -p $MONGODB_PASSWORD -d local -c oplog.rs -o backup/oplogDump/$currentTime --query '{"ts":{$gt: Timestamp( 1452157469, 37)}}'
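      As a side note, some newer versions of the mongodump tools expect the --query document in strict extended JSON rather than the shell's Timestamp(...) helper. A minimal sketch of building such a query string from a previously saved timestamp (the LAST_TS_* variable names are hypothetical, and the $timestamp extended-JSON form is an assumption about the tool version in use):

```shell
# Sketch: build an incremental oplog query from the last dumped timestamp.
# LAST_TS_SECONDS / LAST_TS_ORDINAL are hypothetical names for values saved
# from the previous dump run.
LAST_TS_SECONDS=1452157469
LAST_TS_ORDINAL=37
QUERY="{\"ts\":{\"\$gt\":{\"\$timestamp\":{\"t\":${LAST_TS_SECONDS},\"i\":${LAST_TS_ORDINAL}}}}}"
echo "$QUERY"
# The string can then be passed as:  mongodump ... --query "$QUERY"
```

      Note also that local.oplog.rs has no secondary indexes, so a ts filter may fall back to a full collection scan; on an oplog holding billions of documents that scan can run for a very long time and look like a hang, which may be related to the behavior described here.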

      After executing this command, the entire secondary mongo machine gets stuck; I literally need to restart the machine to get mongod running again.

      Another change I made recently is increasing the nssize to 1 GB per the developers' requirement. I am not sure when this issue started or what is causing it. Any help will be really appreciated.

            Assignee:
            ramon.fernandez@mongodb.com Ramon Fernandez Marina
            Reporter:
            prashanthsun9 Prashanth
            Votes:
            0
            Watchers:
            5
