MongoDB Database Tools / TOOLS-3450

Investigate changes in PM-2290: Make dedicated config servers optional for sharded clusters

    • Type: Investigation
    • Resolution: Done
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: None
    • Labels: None
    • Tools and Replicator

      Original Downstream Change Summary

      The design document has more details about these changes in WRITING-8015.

      == User-Facing Syntax Changes ==
      Config servers will now support serving as a user data shard. They will not be a shard by default in 7.0, but this may change in a later project.

      To opt into a config server acting as a shard (called a "catalog shard"), a sharded cluster must be in FCV 7.0, and a user must run the new transitionToCatalogShard admin command. This should take effect quickly, like the addShard command, and will make the config server visible as a shard to the rest of the cluster, e.g. the balancer will consider it eligible to receive chunks from sharded collections. In this state, there will be an entry in "config.shards" with _id: "config" which represents the config server shard. A cluster with a catalog shard cannot downgrade FCV below 7.0 without transitioning out of catalog shard mode, described below.
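For illustration, the catalog shard's entry in "config.shards" might look like the following sketch. Only the _id value ("config") is stated in this ticket; the host string below is a made-up placeholder in the usual "<replSetName>/<host:port>" shard format:

```python
# Hypothetical sketch of the config.shards document for a catalog shard.
# Only _id: "config" is stated in the ticket; the host is a placeholder.
catalog_shard_entry = {
    "_id": "config",  # the catalog shard's shard _id, per the ticket
    "host": "configRS/cfg1.example.net:27019",  # hypothetical host string
}


def is_catalog_shard(shard_doc):
    """Return True if a config.shards document represents the config server shard."""
    return shard_doc.get("_id") == "config"


print(is_catalog_shard(catalog_shard_entry))
```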

To make a catalog shard a dedicated config server again (i.e. a config server that owns no user data), a user must run the new transitionToDedicatedConfigServer admin command. This command follows the same flow as the removeShard command. The first transitionToDedicatedConfigServer call begins the draining procedure; users must then use the movePrimary command to move all databases off the config server and wait for the balancer to move all chunks off it. Running transitionToDedicatedConfigServer during this process returns the status of draining. Running it after draining has completed finishes the transition, and the config server is no longer visible as a shard to the rest of the cluster.
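The removeShard-style flow can be sketched as a small simulation. The state names ("started", "ongoing", "completed") mirror removeShard's documented draining responses; everything else here is a hypothetical stand-in for server behavior, not real driver code:

```python
class CatalogShardDrainSim:
    """Hypothetical simulation of the transitionToDedicatedConfigServer flow.

    Mirrors the removeShard-style draining described above: the first call
    starts draining, repeat calls report status, and a final call after the
    config server owns no databases or chunks completes the transition.
    """

    def __init__(self, databases, chunks):
        self.databases = databases  # databases whose primary is the config server
        self.chunks = chunks        # chunk count owned by the config server
        self.draining = False

    def move_primary(self, db):
        """Stand-in for movePrimary moving a database off the config server."""
        self.databases.remove(db)

    def balancer_moves_chunk(self):
        """Stand-in for the balancer migrating one chunk off the config server."""
        self.chunks -= 1

    def transition_to_dedicated_config_server(self):
        if not self.draining:
            self.draining = True
            return {"state": "started"}
        if self.databases or self.chunks:
            return {"state": "ongoing",
                    "remaining": {"dbs": len(self.databases),
                                  "chunks": self.chunks}}
        return {"state": "completed"}


sim = CatalogShardDrainSim(databases=["app"], chunks=2)
print(sim.transition_to_dedicated_config_server())  # {'state': 'started'}
sim.move_primary("app")
sim.balancer_moves_chunk()
print(sim.transition_to_dedicated_config_server())  # 'ongoing': one chunk left
sim.balancer_moves_chunk()
print(sim.transition_to_dedicated_config_server())  # {'state': 'completed'}
```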

Neither transitionToCatalogShard nor transitionToDedicatedConfigServer takes any arguments (e.g. { transitionToCatalogShard: 1 } and { transitionToDedicatedConfigServer: 1 }), and both must be run on the admin database on a mongos.

      == Auth changes ==
Both new commands, transitionToCatalogShard and transitionToDedicatedConfigServer, have corresponding new auth privileges of the same names, and both privileges are included in the clusterManager built-in role.

Shard-local users are not possible on a catalog shard. Any user made on a catalog shard will be considered the same as a user made on the config server, and will be a cluster-wide user.
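As a sketch, a custom role granting only the two new privilege actions could be defined with a createRole command document like the one below. The role name is hypothetical, and the {"cluster": True} resource shape is assumed from MongoDB's usual privilege format rather than stated in this ticket:

```python
# Hypothetical createRole command document granting just the two new
# privilege actions. The role name is made up; {"cluster": True} is the
# standard resource shape for cluster-wide actions (assumed here).
create_role_cmd = {
    "createRole": "catalogShardTransitionAdmin",  # hypothetical role name
    "privileges": [
        {
            "resource": {"cluster": True},
            "actions": [
                "transitionToCatalogShard",
                "transitionToDedicatedConfigServer",
            ],
        }
    ],
    "roles": [],  # no inherited roles
}

actions = create_role_cmd["privileges"][0]["actions"]
print(actions)
```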

      == Other changes ==
The config server will now always consider itself a shard server (even if transitionToCatalogShard has not been run), so previously shard-only fields and collections will exist on a config server. This includes:
1. The config server will have a shard identity document, with _id: "shardIdentity", in its "admin.system.version" collection.

      2. The config server will have cached copies of sharding routing metadata, in the config.cache.chunks.<cached namespace> collections used by shards.
3. The previously shard-only hello response fields, isImplicitDefaultMajorityWC and cwc, will now be included in the hello response from a config server.
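For illustration, the shard identity document from item 1 might look like the sketch below. The _id value "shardIdentity" is the standard value for this document; every other field value here is a hypothetical placeholder:

```python
# Hypothetical sketch of a config server's shard identity document in
# admin.system.version. Field values other than _id are placeholders.
shard_identity = {
    "_id": "shardIdentity",                  # standard _id for this document
    "shardName": "config",                   # matches the config.shards _id
    "clusterId": "000000000000000000000000", # hypothetical; an ObjectId in practice
    "configsvrConnectionString": "configRS/cfg1.example.net:27019",  # hypothetical
}

print(sorted(shard_identity))
```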

Description of Linked Ticket

      Epic Summary

      We will make dedicated config servers optional for sharded clusters. Customers will have the option to designate a special shard that will hold both user data and config data. For one-shard clusters on Atlas, the shard will automatically be a special shard with both the shard role and config role.

      Motivation

Eliminating dedicated config servers will reduce both the cost and the architectural complexity involved in single-shard sharded clusters. On Atlas, a single-shard M30 sharded cluster costs twice as much as an M30 replica set. This project would bring cost parity to single-shard clusters on Atlas and make it easier for customers to start out with a sharded cluster or switch to one.

      It also supports other use-cases:

      • Serverless v2 - Serverless would like to remove the cost and complexity of dealing with config servers.
      • Kubernetes - Some customers prefer to use a sharded cluster with a single shard because the mongos can act as a proxy. No config server would mean better resource utilization and less operational overhead.

      Cast of Characters

      Documentation

      Product Description
      Scope Document
      Technical Design Document
Docs Update

            Assignee: Unassigned
            Reporter: backlog-server-pm (Backlog - Core Eng Program Management Team)
            Votes: 0
            Watchers: 2