[SERVER-5326] Restructure MMAPV1 journaling to not require the global read lock Created: 16/Mar/12 Updated: 31/Aug/16 Resolved: 17/Nov/15 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Concurrency, Storage |
| Affects Version/s: | 2.1.1 |
| Fix Version/s: | None |
| Type: | New Feature | Priority: | Critical - P2 |
| Reporter: | Andy Schwerin | Assignee: | Unassigned |
| Resolution: | Won't Fix | Votes: | 6 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: |
|
| Backwards Compatibility: | Minor Change |
| Participants: | |
| Description |
|
The current journaling implementation in MMAPV1 requires a global read lock on the mongod instance to ensure that it can read consistent versions of all database private views when writing the journal. This makes deadlock avoidance in commitIfNeeded() somewhat hairy, and could become a performance bottleneck. Instead, mongod should only lock one database at a time (plus the "local" database, perhaps) when writing to the journal from threads other than the durability thread (and optionally also from the durability thread). For performance, note that in MMAPV1 only an extremely long-running single-document update can block other operations while the journal thread waits for it to complete. The best mitigation is to avoid updates that modify many thousands of fields in a document in a single operation. |
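The sketch below contrasts the two locking strategies described above. It is a minimal conceptual illustration only, not the actual dur.cpp / commitIfNeeded() code: all names here (Database, appendDirtyRangesToJournal, the mutexes) are hypothetical stand-ins for the real MMAPv1 durability machinery.

```cpp
// Hypothetical sketch of the locking change proposed in this ticket.
// None of these types or functions exist in the real codebase.
#include <mutex>
#include <string>
#include <vector>

struct Database {
    std::string name;
    std::mutex writeLock;  // stand-in for a per-database lock
};

std::mutex globalLock;     // stand-in for the instance-wide (global) lock

// Copy the database's dirty private-view ranges into the journal buffer.
void appendDirtyRangesToJournal(Database& db) { (void)db; /* ... */ }

// Current behaviour: the durability thread takes a global read lock so that
// every database's private view is consistent at the same instant.
void journalCommitGlobal(std::vector<Database*>& dbs) {
    std::lock_guard<std::mutex> lk(globalLock);  // blocks writers to every database
    for (Database* db : dbs)
        appendDirtyRangesToJournal(*db);
}

// Proposed behaviour: lock one database at a time (plus "local"), so a long
// write in one database no longer stalls journaling of all the others.
void journalCommitPerDatabase(std::vector<Database*>& dbs) {
    for (Database* db : dbs) {
        std::lock_guard<std::mutex> lk(db->writeLock);  // only this database's writers wait
        appendDirtyRangesToJournal(*db);
    }
}
```

Under the per-database scheme, the journal no longer captures a single instance-wide snapshot, which is why the description still allows the durability thread itself to optionally keep broader locking.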
| Comments |
| Comment by Asya Kamsky [ 28/Aug/15 ] |
|
This issue does not apply to WiredTiger, where the journaling mechanics are different: reads don't block other operations, and writes work significantly differently as well. Because of that, it's not clear that this mechanism is on the critical path going forward. Adjusted the description accordingly; will change the title as well to make that clear. |
| Comment by Andy Schwerin [ 19/Sep/13 ] |
|
No, it slipped, I'm afraid. |
| Comment by Asya Kamsky [ 19/Sep/13 ] |
|
Is this on the radar for 2.6? |