[SERVER-11489] Fail gracefully when more than 2 distinct sslMode settings are interacting Created: 30/Oct/13 Updated: 06/Dec/22
| Status: | Backlog |
| Project: | Core Server |
| Component/s: | Replication, Security |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Minor - P4 |
| Reporter: | Kyle Erf | Assignee: | Backlog - Security Team |
| Resolution: | Unresolved | Votes: | 1 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Assigned Teams: | Server Security |
| Participants: | |
| Description |
The new sslMode feature allows many more combinations of connection types between members of a MongoDB cluster, which also means there are many more ways to set up a cluster incorrectly. As an example: if each node of a replica set is started with a different sslMode, some very strange behavior occurs. A set using require, preferSSL, and allowSSL can end up in a state where the "require" primary thinks the "allowSSL" secondary is up and replicating properly, while the "allowSSL" secondary thinks the primary is down. The log files make it clear that something is going badly wrong, but from a mongo client's perspective everything looks okay, and will stay okay until the primary goes down temporarily, at which point things could go haywire. It would be nice if we could recognize asymmetric cluster setups like these and alert the user / fail accordingly.
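For illustration, a mismatch like this can be surfaced from the client side by comparing the sslMode each member was actually started with. The sketch below is a minimal, hypothetical check using pymongo and the standard getCmdLineOpts admin command; the host names are placeholders, and the exact option path (net.ssl.mode on 2.6-era servers, net.tls.mode on newer ones) depends on the server version.

```python
# Minimal sketch: flag a replica set whose members were started with
# different sslMode values (e.g. the require / preferSSL / allowSSL mix
# described above). Host names below are placeholders.
from pymongo import MongoClient

MEMBERS = [
    "node1.example.net:27017",
    "node2.example.net:27017",
    "node3.example.net:27017",
]

def ssl_mode(host):
    # Connect to the member directly (no replica-set discovery, requires a
    # reasonably recent pymongo for directConnection) and read the options
    # it was started with via the getCmdLineOpts admin command.
    client = MongoClient("mongodb://" + host, directConnection=True)
    parsed = client.admin.command("getCmdLineOpts").get("parsed", {})
    net = parsed.get("net", {})
    # 2.6-era servers report net.ssl.mode; newer servers use net.tls.mode.
    return (net.get("ssl", {}).get("mode")
            or net.get("tls", {}).get("mode")
            or "disabled")

modes = {host: ssl_mode(host) for host in MEMBERS}
if len(set(modes.values())) > 1:
    print("Asymmetric SSL configuration:", modes)
else:
    print("All members agree on sslMode:", modes)
```

A client-side check like this is only a workaround; the request here is for the server itself to detect the asymmetry and warn or fail.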
| Comments |
| Comment by Andreas Nilsson [ 30/Oct/13 ] |
milkie, will you put it in an appropriate planning bucket?
| Comment by Kyle Erf [ 30/Oct/13 ] |
"I think you can file a ticket and assign to Eric for triage. It wont go in for 2.6 but this behavior might be nice to "wrinkle" out at some point. " |