[DOCS-543] Tag-Aware Sharding image recommends risky deployment Created: 21/Sep/12 Updated: 07/Feb/13 Resolved: 02/Feb/13 |
|
| Status: | Closed |
| Project: | Documentation |
| Component/s: | dochub |
| Affects Version/s: | mongodb-2.2 |
| Fix Version/s: | mongodb-2.2 |
| Type: | Task | Priority: | Critical - P2 |
| Reporter: | A. Jesse Jiryu Davis | Assignee: | Ed Costello |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Participants: | |
| Description |
|
The image here: http://www.mongodb.org/display/DOCS/Tag+Aware+Sharding seems to show a single machine in each region. For each shard tagged 'JP', for example, it would be best to deploy several replica-set members in Japan with high replica-set priority and to de-prioritize the members outside Japan. Then if the primary goes down, another machine in Japan takes over the write load from app servers in Japan; the same applies to the other regions. As currently drawn, the cluster is deployed so that if a machine in Japan goes down, a secondary in NY or London will be elected primary and applications will send writes across the globe. |
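A minimal sketch of the deployment the reporter describes, as a mongo-shell replica-set configuration for the 'JP' shard. The hostnames and priority values here are hypothetical, not from the ticket; the point is only that the Japan members carry higher `priority` so a failover stays in-region, while the remote members remain electable only as a last resort:

```javascript
// Sketch of a 'JP'-tagged shard's replica set (hostnames hypothetical).
// Three members in Japan with high priority keep the primary, and
// therefore the write load, in Japan even after a single failure;
// the NY and London members are de-prioritized so they take over
// only if all Japan members are unavailable.
rs.initiate({
  _id: "shardJP",
  members: [
    { _id: 0, host: "jp1.example.net:27017", priority: 2 },   // preferred primary (Japan)
    { _id: 1, host: "jp2.example.net:27017", priority: 2 },   // in-region failover (Japan)
    { _id: 2, host: "jp3.example.net:27017", priority: 2 },   // in-region failover (Japan)
    { _id: 3, host: "ny1.example.net:27017", priority: 0.5 }, // de-prioritized (NY)
    { _id: 4, host: "ldn1.example.net:27017", priority: 0.5 } // de-prioritized (London)
  ]
});
```

With five members, a majority (3) survives the loss of any one Japan node, so an in-region secondary can be elected without involving the remote members.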
| Comments |
| Comment by Sam Kleinman (Inactive) [ 02/Feb/13 ] |
|
The migrated version of this page removes this graphic. We could still use better documentation of more ideal deployment patterns, but I think it makes the most sense to track those issues in separate tickets. |
| Comment by A. Jesse Jiryu Davis [ 26/Sep/12 ] |
|
Check w/ me if you want to hear my ideal deployment scenario; I feel like |