[SERVER-20113] File descriptors can be exhausted by killing map/reduce jobs on WiredTiger Created: 25/Aug/15  Updated: 05/Apr/17  Resolved: 29/Nov/16

Status: Closed
Project: Core Server
Component/s: Storage
Affects Version/s: 3.1.7
Fix Version/s: 3.5.1

Type: Bug Priority: Major - P3
Reporter: Matt Dannenberg Assignee: Tess Avitabile (Inactive)
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Related
related to SERVER-27337 Map/Reduce output inline option is ma... Closed
Backwards Compatibility: Fully Compatible
Operating System: ALL
Sprint: Query 2016-12-12
Participants:

Description

ulimit -n 512
python buildscripts/resmoke.py jstests/core/mr_killop.js --repeat=200

It consistently fails for me around the 25th run.
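For reference, a rough way to confirm the descriptor leak while the repro loops is to poll the open file descriptor count of the mongod process and watch it climb toward the ulimit of 512. This is an illustrative sketch, not part of the original report; it assumes Linux /proc, that pgrep is available, and that a single local mongod is running.

# Illustrative sketch (not from the original report): poll the open file
# descriptor count of a running mongod while the repro loops, to see whether
# descriptors climb toward the ulimit of 512.
import os
import subprocess
import time

def mongod_pid():
    # pgrep -n returns the newest matching pid; assumes mongod is running.
    out = subprocess.check_output(["pgrep", "-n", "mongod"]).decode().strip()
    return int(out)

def open_fd_count(pid):
    # Each entry under /proc/<pid>/fd is one open file descriptor.
    return len(os.listdir("/proc/%d/fd" % pid))

if __name__ == "__main__":
    pid = mongod_pid()
    for _ in range(60):
        print("open fds: %d" % open_fd_count(pid))
        time.sleep(5)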



Comments
Comment by Githook User [ 29/Nov/16 ]

Author: Tess Avitabile (tessavitabile) <tess.avitabile@mongodb.com>

Message: SERVER-20113 Do not allow interrupts during mapReduce temp collection cleanup
Branch: master
https://github.com/mongodb/mongo/commit/84650bdac943af4d30d1b26a1a176038828a9993
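The commit message describes the fix as not allowing interrupts during mapReduce temp collection cleanup. A minimal sketch of that idea, in Python rather than the server's C++, is to defer any incoming interrupt until the cleanup has finished; drop_temp_collections and the collection names are hypothetical placeholders, not the server's API.

# Minimal sketch of the idea in the commit message: once cleanup of temporary
# resources starts, delay any incoming interrupt until cleanup has finished.
import signal
from contextlib import contextmanager

@contextmanager
def uninterruptible():
    received = []
    def defer(signum, frame):
        # Record the interrupt instead of raising; re-deliver it afterwards.
        received.append(signum)
    previous = signal.signal(signal.SIGINT, defer)
    try:
        yield
    finally:
        signal.signal(signal.SIGINT, previous)
        if received:
            # Re-raise the deferred interrupt now that cleanup is complete.
            raise KeyboardInterrupt()

def drop_temp_collections(colls):
    for c in colls:
        print("dropping", c)  # placeholder for the real drop

with uninterruptible():
    drop_temp_collections(["tmp.mr.coll_1", "tmp.mr.coll_2"])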

Comment by Eric Milkie [ 15/Nov/16 ]

This appears to be due to mishandling of temp tables: we do not clean them up if the operation is interrupted. We should verify that all uses of temporary tables are cleaned up on every error path.
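To illustrate the "cleaned up on all error paths" point above, one common pattern is to tie the temporary resource to a scope guard so that no exception, including the job being killed, can skip the cleanup step. The sketch below uses a hypothetical TempWorkspace stand-in for a mapReduce temp collection; it is not the server's implementation.

# Sketch of cleanup-on-all-error-paths: the temporary resource is owned by a
# context manager, so any failure or interruption still releases it.
import tempfile
import shutil

class TempWorkspace:
    """Hypothetical stand-in for a mapReduce temp collection: owns an
    on-disk directory and guarantees it is removed on exit."""
    def __enter__(self):
        self.path = tempfile.mkdtemp(prefix="tmp.mr.")
        return self.path

    def __exit__(self, exc_type, exc, tb):
        shutil.rmtree(self.path, ignore_errors=True)
        return False  # propagate any exception after cleanup

def run_map_reduce_job():
    with TempWorkspace() as workdir:
        # Any failure or interruption raised here still reaches __exit__,
        # so the temporary files (and their descriptors) are released.
        raise KeyboardInterrupt("simulated killOp")

try:
    run_map_reduce_job()
except KeyboardInterrupt:
    pass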
