Type: Bug
Resolution: Works as Designed
Priority: Major - P3
Labels: None
Affects Version/s: 3.2.11
Component/s: mongorestore
Environment: Ubuntu 14.04 (Vagrant VM with 2048MB of RAM)
v3.2
I'm having trouble restoring a large database (12GB of .bson files) and find that when mongorestore errors out it also causes mongod to stop. I can improve the situation slightly by stopping non-essential services to free up as much memory as possible, which lets more data import before the command errors again.
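Given the 2048MB of RAM, my suspicion is that the kernel OOM killer is taking mongod down mid-restore. A quick way to check this (a sketch assuming Ubuntu's stock log locations and the default mongod log path; adjust to your setup):

# Look for OOM-killer activity in the kernel log around the failure time
dmesg | grep -iE 'killed process|out of memory'
grep -i 'out of memory' /var/log/syslog

# See how mongod itself went down (path assumes the stock Ubuntu package layout)
tail -n 50 /var/log/mongodb/mongod.log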
I've already tried the suggested workarounds listed in TOOLS-939, but have yet to succeed in restoring the database.
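Those workarounds boil down to lowering mongorestore's concurrency and batch size. For reference, a minimal variant of the command along those lines (the values here are illustrative, not tuned):

# Single collection at a time, single insertion worker, small batches
mongorestore --numParallelCollections=1 \
    --numInsertionWorkersPerCollection=1 \
    --batchSize=10 \
    --drop -d qa . \
    --authenticationDatabase admin -u <user> -p <password>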
The full command I'm running is:
mongorestore -vvvvv --numParallelCollections=2 --batchSize=10 --drop -d qa .
The (abbreviated) output is:
mongorestore -vvvvv --numParallelCollections=2 --batchSize=2 --drop -d qa . --authenticationDatabase admin -u <user> -p <password>
2017-05-16T15:39:19.502+0000    checking options
2017-05-16T15:39:19.504+0000    dumping with object check disabled
2017-05-16T15:39:19.684+0000    connected to node type: replset
2017-05-16T15:39:19.684+0000    using write concern: w='majority', j=false, fsync=false, wtimeout=0
2017-05-16T15:39:19.684+0000    mongorestore target is a directory, not a file
2017-05-16T15:39:19.684+0000    building a list of collections to restore from . dir
2017-05-16T15:39:19.684+0000    reading collections for database qa in .
...
2017-05-16T15:39:19.721+0000    finalizing intent manager with multi-database longest task first prioritizer
2017-05-16T15:39:19.721+0000    restoring up to 2 collections in parallel
2017-05-16T15:39:19.721+0000    starting restore routine with id=1
2017-05-16T15:39:19.721+0000    will listen for SIGTERM and SIGINT
2017-05-16T15:39:19.732+0000    starting restore routine with id=0
2017-05-16T15:39:20.246+0000    dropping collection qa.upload.chunks before restoring
2017-05-16T15:39:20.246+0000    dropping collection qa.versions before restoring
2017-05-16T15:39:20.417+0000    reading metadata for qa.upload.chunks from upload.chunks.metadata.json
2017-05-16T15:39:20.422+0000    creating collection qa.upload.chunks using options from metadata
2017-05-16T15:39:20.422+0000    using collection options: bson.D(nil)
2017-05-16T15:39:20.446+0000    reading metadata for qa.versions from versions.metadata.json
2017-05-16T15:39:20.447+0000    creating collection qa.versions using options from metadata
2017-05-16T15:39:20.447+0000    using collection options: bson.D(nil)
2017-05-16T15:39:20.462+0000    restoring qa.upload.chunks from upload.chunks.bson
2017-05-16T15:39:20.478+0000    restoring qa.versions from versions.bson
2017-05-16T15:39:21.146+0000    using 1 insertion workers
2017-05-16T15:39:21.153+0000    using 1 insertion workers
2017-05-16T15:39:22.722+0000    [........................]       qa.versions   507KB/2.55GB   (0.0%)
2017-05-16T15:39:22.722+0000    [........................]  qa.upload.chunks  5.45MB/5.17GB  (0.1%)
2017-05-16T15:39:22.722+0000
2017-05-16T15:39:25.728+0000    [........................]       qa.versions   2.62MB/2.55GB  (0.1%)
2017-05-16T15:39:25.728+0000    [........................]  qa.upload.chunks  30.6MB/5.17GB  (0.6%)
2017-05-16T15:39:25.728+0000
2017-05-16T15:39:28.727+0000    [........................]       qa.versions   6.36MB/2.55GB  (0.2%)
2017-05-16T15:39:28.728+0000    [........................]  qa.upload.chunks  54.3MB/5.17GB  (1.0%)
...
2017-05-16T15:45:37.809+0000    [####....................]       qa.versions   482MB/2.55GB   (18.5%)
2017-05-16T15:45:37.810+0000    Failed: qa.versions: error restoring from versions.bson: insertion error: EOF
In this example one of the collections being restored is a GridFS collection of file upload chunks; however, I've also tried restoring the database without it and get the same failure on a different collection.
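The "insertion error: EOF" reads like the client connection to mongod dropping mid-insert, which would be consistent with mongod being killed for memory. One mitigation that might help on a 2048MB VM is capping the WiredTiger cache so mongod and mongorestore aren't competing for the same memory; a sketch of the relevant /etc/mongod.conf section (assumes the WiredTiger storage engine; the 1GB figure is illustrative):

# Cap WiredTiger's internal cache so mongod leaves headroom for mongorestore
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1

This would need a mongod restart (sudo service mongod restart on Ubuntu 14.04) before retrying the restore.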
is related to: TOOLS-939 Error restoring database "insertion error: EOF" (Closed)