[DOCS-10947] Update Ops Manager example topologies: Currently include an unreliable PSA setup for production Created: 25/Oct/17 Updated: 29/Oct/23 Resolved: 22/Apr/19 |
|
| Status: | Closed |
| Project: | Documentation |
| Component/s: | Ops Manager |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Mariano Escribano | Assignee: | Anthony Sansone (Inactive) |
| Resolution: | Fixed | Votes: | 13 |
| Labels: | deployment, diagram |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Environment: | https://docs.opsmanager.mongodb.com/current/core/deployments/#redundant-metadata-and-snapshots |
| Attachments: | |
| Issue Links: | |
| Participants: | |
| Days since reply: | 4 years, 41 weeks, 2 days ago |
| Epic Link: | DOCSP-1743 |
| Story Points: | 0.4 |
| Description |
|
Our Ops Manager documentation includes an example production topology that consists of a Primary, Secondary, Arbiter (PSA) setup. It can be found here (the URL in the Environment field above). It is the very first example listed, despite the major caveats in the note box directly above it: the Ops Manager application database uses a write concern of 2, yet the diagram advocates arbiters, which do not count towards that requirement. As the note itself says, losing one node means losing access to Ops Manager. To make things worse, the arbiters are on the same servers as the primaries, so if that single server goes down, even the blockstore would lose its majority. This setup is extremely unreliable and should be removed in favor of a simple but redundant three-server approach. At the very least, it should not be visible to the public and should only be provided by TSEs when specifically asked for due to disk space concerns. In general, I think we should try to avoid any prominent PSA examples in any part of the documentation. Thanks! |
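For illustration, here is a minimal pymongo sketch of the failure mode described above. The hostnames, database, and collection names are hypothetical placeholders, not taken from the docs page; the point is only that a write issued with the same w:2 write concern the Ops Manager application database uses can never be acknowledged by an arbiter, so losing either data-bearing member of a PSA set blocks it.

from pymongo import MongoClient
from pymongo.errors import WTimeoutError
from pymongo.write_concern import WriteConcern

# Hypothetical PSA replica set: host1 = primary, host2 = secondary,
# host3 = arbiter (holds no data, cannot acknowledge writes).
client = MongoClient(
    "mongodb://host1:27017,host2:27017,host3:27017/?replicaSet=rs0"
)

# Mirror the Ops Manager application database's write concern of 2.
coll = client["appdb"].get_collection(
    "example",
    write_concern=WriteConcern(w=2, wtimeout=5000),
)

try:
    # Succeeds only while both data-bearing members are up. If the
    # secondary (or the primary's server, which the diagram co-locates
    # with an arbiter) is down, the arbiter cannot supply the second
    # acknowledgement and the write times out.
    coll.insert_one({"probe": "w2-availability-check"})
except WTimeoutError:
    print("w:2 cannot be satisfied with only one data-bearing member up")

This is exactly why the note above the diagram warns that losing one node means losing access to Ops Manager.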
| Comments |
| Comment by Githook User [ 29/Apr/19 ] |
|
Author: Anthony Sansone (atsansone, tony.sansone@mongodb.com) Message: ( |
| Comment by Githook User [ 26/Apr/19 ] |
|
Author: Anthony Sansone (atsansone, tony.sansone@mongodb.com) Message: ( |
| Comment by Githook User [ 23/Apr/19 ] |
|
Author: Anthony Sansone (atsansone, tony.sansone@mongodb.com) Message: ( |
| Comment by Githook User [ 22/Apr/19 ] |
|
Author: Anthony Sansone (atsansone, tony.sansone@mongodb.com) Message: ( |
| Comment by Anthony Sansone (Inactive) [ 18/Apr/19 ] |
|
mariano.escribano: This is now a PR: https://github.com/10gen/mms-docs/pull/2264 Please review it. |
| Comment by Mariano Escribano [ 16/Apr/19 ] |
|
tony.sansone Very nice, yes, these look much better and also get rid of the Arbiter reference as hoped. Of course, once the simple deployment diagram is in, we will have to adjust the NOTE right above it to say that high availability is lost once two nodes are lost, due to w:2. Emilio's concern above is valid, but that should be the exception, not the norm. The docs should always reflect best practices, and if a customer has budget/space constraints, they should seek our guidance for a slimmer deployment. |
| Comment by Anthony Sansone (Inactive) [ 16/Apr/19 ] |
|
mariano.escribano: Please have a look at these updates. If these look good, I will adjust the page text accordingly. |