Type: Bug
Resolution: Fixed
Priority: Major - P3
Affects Version/s: 3.2.15, 3.4.6
Component/s: Storage, Storage Execution
Backwards Compatibility: Fully Compatible
Operating System: ALL
(copied to CRM)
Example:
- dropDatabase with about 70 collections with 2 indexes each for a total of about 200 WT tables
- about 325k open cursors
- dropDatabase takes about 30 seconds and holds the global lock the entire time.
Collecting perf data shows mongod using 100% of a CPU for the duration, all in this stack:
mongo::WiredTigerSession::closeAllCursors(std::__cxx11::basic_string<...> const&)
mongo::WiredTigerSessionCache::closeAllCursors(std::__cxx11::basic_string<...> const&)
mongo::WiredTigerKVEngine::_drop(mongo::StringData)
mongo::WiredTigerKVEngine::dropIdent(mongo::OperationContext*, mongo::StringData)
For each of the ~200 WT tables dropped we call closeAllCursors for that table. The perf stacks show that all the time is spent in closeAllCursors itself, and the FTDC metrics show that we aren't actually closing any cursors, so it appears all the time goes to closeAllCursors scanning the list of 325k open cursors looking for cursors with a matching table URI.
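To make the cost concrete, below is a minimal, self-contained C++ sketch of the pattern the stacks suggest: one full linear scan of the cursor cache per dropped table. The names (CachedCursor, closeAllCursorsFor) and the flat std::list are illustrative assumptions, not the actual WiredTigerSessionCache implementation; the point is only the O(tables x cursors) comparison count when nothing matches.

#include <iostream>
#include <list>
#include <string>

// Illustrative stand-in for a cached WT cursor; only the table URI matters here.
struct CachedCursor {
    std::string uri;  // e.g. "table:collection-12-345"
};

// Hypothetical helper: close every cached cursor whose URI matches.
// This walks the entire cache even when no cursor belongs to the dropped table.
std::size_t closeAllCursorsFor(std::list<CachedCursor>& cursors, const std::string& uri) {
    std::size_t closed = 0;
    for (auto it = cursors.begin(); it != cursors.end();) {
        if (it->uri == uri) {        // one string compare per cached cursor
            it = cursors.erase(it);  // "close" the cursor and drop it from the cache
            ++closed;
        } else {
            ++it;
        }
    }
    return closed;
}

int main() {
    // Roughly the shape of the reported case: ~325k cached cursors on tables
    // that are NOT being dropped, and ~200 tables dropped by dropDatabase.
    std::list<CachedCursor> cursors;
    for (int i = 0; i < 325000; ++i)
        cursors.push_back({"table:other-" + std::to_string(i % 5000)});

    std::size_t comparisons = 0;
    for (int t = 0; t < 200; ++t) {
        const std::string uri = "table:dropped-" + std::to_string(t);
        comparisons += cursors.size();     // every drop rescans the whole cache
        closeAllCursorsFor(cursors, uri);  // closes nothing here, mirroring the FTDC data
    }
    std::cout << "string comparisons: " << comparisons << "\n";  // ~65 million
    return 0;
}

The real code does more work per cached cursor than a single string compare, but the scan-per-table shape is what makes the whole drop CPU-bound while the global lock is held.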
depends on:
- SERVER-33122 add option to limit WiredTiger cursor cache size (Closed)

is related to:
- SERVER-30238 High disk usage and query blocking on database drops (Closed)
- SERVER-31101 WT table not dropped after collection is dropped due to long-running OperationContext (Closed)
- SERVER-27347 Only close idle cached cursors on the WiredTiger ident that is busy (Closed)