[SERVER-29275] Two Phase Drops: implement collection drop commit logic Created: 18/May/17 Updated: 30/Oct/23 Resolved: 25/May/17 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Replication |
| Affects Version/s: | None |
| Fix Version/s: | 3.5.8 |
| Type: | Task | Priority: | Major - P3 |
| Reporter: | Benety Goh | Assignee: | Benety Goh |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: |
|
| Backwards Compatibility: | Fully Compatible |
| Sprint: | Repl 2017-05-29 |
| Participants: | |
| Description |
|
Background: collections are no longer dropped immediately. Instead, a dropped collection is renamed to a special drop-pending namespace and added to a task list in the ReplicationCoordinator. When the node sees the commit level (as dictated by writeConcernMajorityJournalDefault) reach the drop op, it drops the drop-pending collection and removes the task from the list. Note that the proposed initial trigger for phase 2 (the drop op becoming known to be present on a majority of nodes) may not remain the trigger in future versions; the physical drop may need to be delayed until some other condition is met. |
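To make the commit logic concrete, here is a minimal C++ sketch of the idea, not the actual server implementation: `DropPendingReaper`, `OpTime`, and `dropCollectionNow` are illustrative names assumed for this example. Drop-pending namespaces are tracked sorted by the optime of their drop op; whenever the commit level advances, every entry at or below the new commit point is physically dropped and its task removed from the list.

```cpp
// Minimal sketch of the two-phase drop commit logic described above.
// DropPendingReaper, OpTime, and dropCollectionNow are hypothetical names
// for illustration; they are not the real MongoDB server classes.
#include <map>
#include <mutex>
#include <string>

struct OpTime {
    long long term = 0;
    long long ts = 0;  // timestamp component
};

bool operator<(const OpTime& a, const OpTime& b) {
    return a.term != b.term ? a.term < b.term : a.ts < b.ts;
}

class DropPendingReaper {
public:
    // Phase 1: the collection has been renamed to a drop-pending namespace;
    // remember it, keyed by the optime of its drop op.
    void addDropPendingNamespace(const OpTime& dropOpTime, std::string nss) {
        std::lock_guard<std::mutex> lk(_mutex);
        _dropPending.emplace(dropOpTime, std::move(nss));
    }

    // Phase 2: called whenever the commit level advances. Physically drops
    // every collection whose drop op is now at or below the commit point,
    // then removes its task from the list.
    void notifyCommitLevel(const OpTime& commitPoint) {
        std::lock_guard<std::mutex> lk(_mutex);
        auto it = _dropPending.begin();
        while (it != _dropPending.end() && !(commitPoint < it->first)) {
            dropCollectionNow(it->second);
            it = _dropPending.erase(it);
        }
    }

private:
    // Stand-in for the storage-layer drop of the renamed collection.
    void dropCollectionNow(const std::string& nss) { (void)nss; }

    std::mutex _mutex;
    std::multimap<OpTime, std::string> _dropPending;  // sorted by drop optime
};
```

Because the map is ordered by drop optime, a single commit-point notification can retire several pending drops in one pass, which matches the task-list behavior the description outlines.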
| Comments |
| Comment by Guillaume Guerra [X] [ 30/Aug/18 ] |
|
Hi guys,
Is there an option to disable this behavior in 3.6? It has a deadly side effect on our 3-member replica set (1 arbiter and 2 data-bearing nodes): if we lose our secondary, our file system keeps growing, because we regularly drop and recreate new versions of collections. Even though the total amount of data to store remains fairly stable, at some point we will run out of disk space.
Worse, restarting our secondary soon stops being an option: sync will not complete, as the node gets stalled pretty quickly. So we simply cannot run with a dead secondary: eventually the primary will die too, whereas in 3.4 it could have lived happily ever after ...
Thanks a lot |
| Comment by Githook User [ 24/May/17 ] |
|
Author: Benety Goh (benety) <benety@mongodb.com> Message: |
| Comment by Githook User [ 24/May/17 ] |
|
Author: Benety Goh (benety) <benety@mongodb.com> Message: |
| Comment by Githook User [ 19/May/17 ] |
|
Author: Benety Goh (benety) <benety@mongodb.com> Message: |