- Type: Task
- Resolution: Unresolved
- Priority: Minor - P4
- Affects Version/s: 3.0.15, 3.2.16, 3.4.10
- Component/s: mongorestore
Instead of allocating a new array for each document to restore (cf. line 268), mongorestore should use a memory pool to reduce pressure on the GC.
I tried implementing this and got a ~25% speedup when restoring 2 collections with 10 million documents (a sketch of the pooling idea follows the timings below):
> du -h -a dump/test
4,0K    dump/test/link.metadata.json
3,0G    dump/test/test.bson
4,0K    dump/test/test.metadata.json
535M    dump/test/link.bson
3,5G    dump/test
old version:
> time mongorestore --dir dump
mongorestore --dir dump/  62,65s user 21,19s system 34% cpu 4:03,84 total
new version:
> time mongorestore --dir dump
mongorestore --dir dump  54,98s user 19,31s system 39% cpu 3:08,43 total
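For illustration, here is a minimal sketch of the kind of pooling this ticket is asking for, based on Go's sync.Pool. The identifiers (docBufPool, getDocBuffer, putDocBuffer) are hypothetical and are not the actual mongorestore code around line 268:

// Hypothetical sketch, not mongorestore's real API.
package restore

import "sync"

// docBufPool reuses per-document buffers instead of allocating a fresh
// slice for every document read from the BSON dump.
var docBufPool = sync.Pool{
	New: func() interface{} {
		b := make([]byte, 0, 16*1024) // typical document size; grown on demand
		return &b
	},
}

// getDocBuffer borrows a buffer sized for an n-byte document.
func getDocBuffer(n int) *[]byte {
	bp := docBufPool.Get().(*[]byte)
	if cap(*bp) < n {
		*bp = make([]byte, n) // one-off growth; the larger buffer stays in the pool
	}
	*bp = (*bp)[:n]
	return bp
}

// putDocBuffer returns a buffer to the pool once the bulk inserter no longer
// needs its contents.
func putDocBuffer(bp *[]byte) {
	docBufPool.Put(bp)
}

Note that since documents are handed off to concurrent insertion workers, a buffer could only be returned to the pool after the batch referencing it has been flushed; otherwise a later read would overwrite a document still pending insertion.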
- causes
  - TOOLS-2783 Mongorestore uses huge amount of RAM (Closed)
- is depended on by
  - TOOLS-2665 Optimize usage of Result type in Mongorestore (Waiting (Blocked))
- is related to
  - TOOLS-2665 Optimize usage of Result type in Mongorestore (Waiting (Blocked))
- related to
  - TOOLS-2875 Limit the BufferedBulkInserter's batch size by bytes (Closed)
  - TOOLS-2642 Investigate and implement performance-related optimizations in mongoimport (Accepted)
- links to