[SERVER-1940] implement { replSetFreeze : <bool> } please Created: 13/Oct/10 Updated: 12/Jul/16 Resolved: 26/Oct/10 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | 1.7.2 |
| Type: | New Feature | Priority: | Major - P3 |
| Reporter: | Kenny Gorman | Assignee: | Kristina Chodorow (Inactive) |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Participants: |
| Description |
|
We needed the ability today to freeze a set and have no decisions made on elections for some period of time. The exact issue was that our code started blowing up the number of connections, and we wanted to restart the master without a slave taking over. We could bring down each slave first, but it would be nice to just tell the 'cluster' not to make any decisions; when we tried that we got:

Wed Oct 13 14:40:28 git version: aef371ecf5d2a824f16ccdc3b745f3702165602f
Wed Oct 13 14:40:28 shutdown: going to close listening sockets...
Wed Oct 13 14:40:28 dbexit: really exiting now |
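For reference, the feature as implemented (see the commits below, fix version 1.7.2) is the replSetFreeze command, which takes a number of seconds rather than a bool and is run on each member that should stay out of elections; the shell also wraps it as rs.freeze(secs). A minimal mongo shell sketch of the restart-without-failover scenario described above, where the 120-second window is purely illustrative:

```js
// On each secondary that should not try to become primary
// during the maintenance window (120 seconds is illustrative):
db.adminCommand({ replSetFreeze: 120 })

// ... restart the master; a frozen member will not stand for election ...

// Once the master is back, unfreeze early rather than waiting out the window:
db.adminCommand({ replSetFreeze: 0 })
```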
| Comments |
| Comment by Kenny Gorman [ 26/Oct/10 ] |
|
Yeah, this mode of operation is fine, and probably better so we don't forget to unfreeze. If it can happen, it will. |
| Comment by auto [ 26/Oct/10 ] |
|
Author: Kristina Chodorow (kchodorow) <kristina@10gen.com>
Message: replsetfreeze test |
| Comment by Eliot Horowitz (Inactive) [ 26/Oct/10 ] |
|
Kristina - can you add a test? |
| Comment by Dwight Merriman [ 15/Oct/10 ] |
|
See below. The behavior may be different than what you expect - but this is easy and safe? Each member should be sent the command. Thoughts?

virtual void help( stringstream &help ) const { "; to unfreeze sooner.\n"; |
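The quoted snippet above survives only as a fragment. A rough sketch of the shape of the help() override being discussed, where everything beyond the surviving "to unfreeze sooner" string is an assumption rather than the actual source:

```cpp
// Sketch only: class name, base class, and exact help strings (other than the
// surviving "to unfreeze sooner" fragment) are assumptions, not the real code.
class CmdReplSetFreeze : public ReplSetCommand {
public:
    virtual void help( stringstream &help ) const {
        help << "{ replSetFreeze : <seconds> }";
        help << " 'freeze' this member so it will not seek election for that many seconds.\n";
        help << " Run { replSetFreeze : 0 } to unfreeze sooner.\n";
    }
};
```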
| Comment by auto [ 15/Oct/10 ] |
|
Author: Dwight (dwight) <dwight@10gen.com>
Message: rs replSetFreeze needs testing still |
| Comment by Kenny Gorman [ 13/Oct/10 ] |
|
Just to be a bit more clear: we tried bringing down the slaves first, then finally the master. The error message above was generated on the master when we finally went to bring it down. |