[SERVER-22087] Mongodump oplog not working on a large database. Created: 07/Jan/16 Updated: 14/Apr/16 Resolved: 08/Jan/16 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Admin, Tools |
| Affects Version/s: | 3.2.0 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Critical - P2 |
| Reporter: | Prashanth | Assignee: | Ramon Fernandez Marina |
| Resolution: | Done | Votes: | 0 |
| Labels: | mongodump | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: |
Ubuntu 14.04 |
||
| Participants: |
| Description |
|
Hi all, I am not able to take an incremental oplog dump. I do not get any error messages, the connection succeeds, and an operation is recorded (it shows up in db.currentOp()), but nothing happens and I am not sure why. I have executed the same command on a smaller database, where everything works fine with no problems at all, but the same command on a larger database (about 10-11 billion records, and increasing day by day) does not work. The command is:

mongodump --host $MONGODB_HOST:$MONGODB_PORT --authenticationDatabase admin -u $MONGODB_USERNAME -p $MONGODB_PASSWORD -d local -c oplog.rs -o backup/oplogDump/$currentTime --query '{"ts":{$gt: Timestamp( 1452157469, 37)}}'

After executing this command, the entire secondary MongoDB machine gets stuck; I literally need to restart the machine to get mongod running again. Another recent change is that I increased nssize to 1 GB per a developer requirement. I am not sure when this issue started or what is causing it. Any help will be really appreciated. |
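As an aside, the Timestamp(...) shell syntax in the --query above can also be written in strict extended JSON, which some mongodump versions require. A minimal sketch of building that query string in a backup script, assuming hypothetical variable names (LAST_TS_SECS, LAST_TS_INC) that are not from the ticket:

```shell
#!/bin/sh
# Seconds/increment pair of the last backed-up oplog entry
# (values taken from the command in the ticket).
LAST_TS_SECS=1452157469
LAST_TS_INC=37

# Strict extended JSON equivalent of Timestamp(1452157469, 37)
OPLOG_QUERY="{\"ts\":{\"\$gt\":{\"\$timestamp\":{\"t\":$LAST_TS_SECS,\"i\":$LAST_TS_INC}}}}"
echo "$OPLOG_QUERY"
```

The resulting string would be passed as --query "$OPLOG_QUERY". Note that, as the comments on this ticket explain, such a query still scans the whole oplog when there is no index on ts.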
| Comments |
| Comment by Ramon Fernandez Marina [ 08/Jan/16 ] |
|
prashanthsun9, I see Adam's excellent response on StackExchange, so I'm going to resolve this ticket. Please note that the SERVER project is for reporting bugs or feature suggestions for the MongoDB server. For MongoDB-related support discussion, other forums like the mongodb-user group, Stack Overflow with the mongodb tag, or StackExchange are more appropriate. You should not add indexes to internal collections; please see the first answer in this post for more information. Regards, |
| Comment by Prashanth [ 08/Jan/16 ] |
|
Yes, it does work correctly if I leave out the query part, and yes, I guess you are correct. So should I now index that field? It will take a lot of time to create the index on that field considering the amount of data there is. Any suggestions on how to proceed further? |
| Comment by Ramon Fernandez Marina [ 07/Jan/16 ] |
|
prashanthsun9, I think what's happening is that using mongodump --query on the oplog triggers a table scan of the oplog collection, since there are no indexes on the oplog. For a very large oplog this could result in the behavior you describe. If you run the same mongodump command without the --query part, does mongodump report progress immediately? |