[DOCS-8416] Comment on: "manual/tutorial/upgrade-config-servers-to-replica-set.txt" Created: 22/Jul/16  Updated: 07/Apr/23  Resolved: 09/Sep/16

Status: Closed
Project: Documentation
Component/s: None
Affects Version/s: None
Fix Version/s: 01112017-cleanup

Type: Bug Priority: Major - P3
Reporter: Fory Horio Assignee: Ravind Kumar (Inactive)
Resolution: Done Votes: 0
Labels: collector-298ba4e7
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Linux, CentOS 7.

Location: https://docs.mongodb.com/manual/tutorial/upgrade-config-servers-to-replica-set/
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36
Referrer: https://www.google.com/
Screen Resolution: 1920 x 1200


Participants:
Days since reply: 7 years, 28 weeks, 2 days

 Description   

On this page, https://docs.mongodb.com/manual/core/sharded-cluster-config-servers/, it says: "Each sharded cluster must have its own config servers. Do not use the same config servers for different sharded clusters." Our current config servers manage two different sharded clusters. How do I convert and divide this setup so that each set of config servers serves only one sharded cluster?



 Comments   
Comment by Ravind Kumar (Inactive) [ 01/Aug/16 ]

I am glad to hear that the changeover was successful.

Updating a tag range does require removing the existing range and adding the new range.

Use the sh.removeTagRange() method to remove the old tag range, and the sh.addTagRange() method to add the updated tag range.

You might consider the optional step of disabling the balancer temporarily to ensure that no chunk migrations occur while you are reconfiguring the tags. If you choose to disable the balancer, please make sure to re-enable it once you confirm the tags are reconfigured.
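
For reference, here is a minimal sketch of that sequence, run from a mongo shell connected to a mongos. The namespace mydb.containers is hypothetical - substitute your sharded collection - and the string-typed bounds mirror the ranges quoted earlier in this ticket:

 // Namespace "mydb.containers" is illustrative; substitute your own.
 sh.stopBalancer()                       // optional: pause chunk migrations

 // Remove the old USWest range, then add the corrected one.
 sh.removeTagRange("mydb.containers",
                   { "containerKey" : "1000000" },
                   { "containerKey" : "1999999" },
                   "USWest")
 sh.addTagRange("mydb.containers",
                { "containerKey" : "1000000" },
                { "containerKey" : "2000000" },  // new exclusive upper bound closes the gap
                "USWest")

 sh.startBalancer()                      // re-enable once the tags are confirmed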

Comment by Fory Horio [ 30/Jul/16 ]

Thanks for the help. I think I successfully converted the config servers to a replica set. As for the tags, to fix the current configuration, do I need to remove and re-add the tag ranges? I didn't see an "update" command.

Comment by Ravind Kumar (Inactive) [ 27/Jul/16 ]

The config servers, whether mirrored or deployed as a replica set, will support as many tags as needed. The number of config servers does not constrain the number of tags or shards that the cluster can support. A 3-member config server replica set is simply the smallest suggested size for CSRS deployments.

If you would like, you can reach out to MongoDB support to confirm any of these details with our dedicated support personnel.

Comment by Fory Horio [ 27/Jul/16 ]

Thank you for your help. As I mentioned earlier, we will be adding the NorthWest DBs. Can the newly converted config server replica set (three config servers) manage three regions (West, East, and NorthWest) that are all controlled by tags for data distribution?

Comment by Ravind Kumar (Inactive) [ 27/Jul/16 ]

A 3-member replica set is a good standard to aim for, whether for a shard or for the config servers.

As far as how to set up your specific group of config servers, I would recommend going to the Google Group, as that level of support is beyond the scope of what I can provide via documentation. The tutorial provides a good framework for converting a mirrored config server deployment to a 3-member config server replica set. We also have an operational checklist in our documentation. Regarding config servers, it specifies:

Place your config servers on dedicated hardware for optimal performance in large clusters. Ensure that the hardware has enough RAM to hold the data files entirely in memory and that it has dedicated storage.

There are some additional considerations on that page that I hope may be of some use as well.

As far as your tag configuration,

For USWest, the lower bound 1000000 is inclusive and the upper bound 1999999 is exclusive, so the effective range is 1000000 through 1999998.

For USEast, the lower bound 2000000 is inclusive and the upper bound 2999999 is exclusive, so the effective range is 2000000 through 2999998.

In both cases, the specified upper bound is excluded from the range. A document with containerKey : 1999999 would fall between the two ranges and could route to any shard in the cluster. Similarly, a document with containerKey : 2999999 would fall outside the second range and could route to any shard in the cluster.
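
If it helps to verify, tag ranges are stored in the tags collection of the config database, so you can inspect the configured bounds directly from a mongo shell connected to a mongos (the namespace below is hypothetical):

 // From a mongo shell connected to a mongos:
 db.getSiblingDB("config").tags.find( { "ns" : "mydb.containers" } )
 // Each document shows ns, min, max, and tag; min is inclusive, max is exclusive.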

Let me know if that makes sense. You can find the Google Group here. Questions about optimal deployment practices, or specific questions based on your current deployment architecture, are best answered on those forums. If you post to the Google Group, please include a link to this JIRA ticket so that MongoDB support staff can work from the information provided here in addition to whatever you provide within that forum.

Comment by Fory Horio [ 26/Jul/16 ]

Why "One note: tags are inclusive on the lower bound, and exclusive on the upper bound. The tag for USWest actually excludes the value set for the upper bound. It may be worth reconfiguring the tag ranges such that the gap is covered." is important?

We have:

 tag: USWest { "containerKey" : "1000000" } -->> { "containerKey" : "1999999" }
 tag: USEast { "containerKey" : "2000000" } -->> { "containerKey" : "2999999" }

What is wrong with this configuration?

Comment by Fory Horio [ 26/Jul/16 ]

Yes, we are controlling shards with tags. So how many config servers do I need when I convert to a config server replica set? Can I just set up 3 config servers, reusing one of the current SCCC config servers?

Comment by Ravind Kumar (Inactive) [ 26/Jul/16 ]

Hello,

The important information is here:

 
shards:
        {  "_id" : "USEast",  "host" : "USEast/host1:27018,host2:27018",  "tags" : [ "USEast" ] }
        {  "_id" : "USWest",  "host" : "ShardA/host3:27018,host4:27018",  "tags" : [ "USWest" ] }

Your current deployment consists of a single sharded cluster. There are two configured tags - USEast and USWest - and each tag is associated with one shard. Tags do not divide or segment a sharded cluster; they allow you to apply controls to data routing within the sharded cluster. See Tag Aware Sharding.

The tag ranges associate ranges of the shard key with either USEast or USWest. All of the data lives in a single sharded cluster, but your configured tags allow data to be routed to the tagged shards.
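
For context, these associations are made with the sh.addShardTag() and sh.addTagRange() helpers. A sketch using the shard and tag names from your sh.status() output, with a hypothetical namespace:

 // Associate each shard with its tag (names from the output above).
 sh.addShardTag("USEast", "USEast")
 sh.addShardTag("USWest", "USWest")

 // Associate a range of the shard key with a tag; the namespace is illustrative.
 sh.addTagRange("mydb.containers",
                { "containerKey" : "1000000" },
                { "containerKey" : "1999999" },
                "USWest")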

Under this deployment you can follow the tutorial without concern. You have a single config server deployment for your single sharded cluster.

One note: tags are inclusive on the lower bound, and exclusive on the upper bound. The tag for USWest actually excludes the value set for the upper bound. It may be worth reconfiguring the tag ranges such that the gap is covered.

For example, the following has no gap between USWest and USEast

 
 tag: USWest { "key1" : 1} -->> {"key1" : 100}
 tag: USEast { "key1" : 100} -->> {"key1" : 200}

Please see Manage Shard Tags for information on managing shard tags. In general, if you want to go through with that process, disable the balancer before removing and re-adding the corrected tags to ensure that no migrations occur.
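
A quick sketch of that balancer check, from a mongo shell connected to a mongos:

 sh.setBalancerState(false)   // disable the balancer
 sh.getBalancerState()        // verify; should return false
 // ... remove and re-add the corrected tag ranges here ...
 sh.setBalancerState(true)    // re-enable when finished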

I hope this answers your question.

Comment by Ravind Kumar (Inactive) [ 26/Jul/16 ]

Hello,

I've redacted portions of the host values, as they specified information about your shards. As the documentation project is public-facing, please do a pass and redact any additional information you would be uncomfortable having public.

I have taken a copy as well, so feel free to simply remove the comment.

Comment by Ravind Kumar (Inactive) [ 26/Jul/16 ]

Can you provide the output of sh.status()? You can connect a mongo shell to one of the mongos instances for each sharded cluster and provide the output. Please remove or redact internal information as appropriate.

Comment by Fory Horio [ 26/Jul/16 ]

We are using a single set of mirrored config servers to serve two sharded clusters.

Comment by Ravind Kumar (Inactive) [ 26/Jul/16 ]

Thanks for the information.

To confirm - you have two sharded clusters using the same single mirrored config server deployment?

It may be that you are running two separate mirrored config server deployments on the same hardware - please confirm that this is not the case and that both sharded clusters are definitely pointing to the same mirrored config servers. You can examine the value passed to the sharding.configDB option. If it is identical for both sharded clusters, please let us know.
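
One way to compare, assuming you can connect a mongo shell to a mongos for each cluster and that the option surfaces under parsed.sharding.configDB, is to read the parsed startup options on each mongos:

 // Run against the admin database of each cluster's mongos:
 db.adminCommand( { getCmdLineOpts : 1 } ).parsed.sharding.configDB
 // If both clusters' mongos instances report the same host list,
 // they are sharing the same mirrored config servers.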

Comment by Fory Horio [ 26/Jul/16 ]

We are using 3.2.8. We are using mirrored config servers, and I was reading this documentation to convert them to a config server replica set. I need to know how to convert. Currently we have 2 shards per cluster, and we have 2 clusters served by the mirrored config servers. We have West-primary and West-secondary in the USWest shard, and East-primary and East-secondary in the USEast shard replica set.

Also, we will soon be adding NorthWest-primary and NorthWest-secondary with a USNW shard (this is not in the current config servers).

We can tolerate quite a long downtime, since I am working on a new environment.

Comment by Ravind Kumar (Inactive) [ 26/Jul/16 ]

Hello,

Can you provide more information on your deployment?

  • What version of MongoDB are you using?
  • If 3.2+, are you using mirrored config servers or replica set config servers?
  • How many shards per sharded cluster are there?
  • How much downtime can this system tolerate?

Please note that support tickets of this kind are out of scope for the documentation project. We can provide limited guidance, but strongly suggest creating a post on the MongoDB Google Group.

If your organization has a support contract, please reach out to your contact at MongoDB for assistance.

Comment by Fory Horio [ 26/Jul/16 ]

Any updates?
