Core Server / SERVER-8870

mongos unaware of database move after movePrimary

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Critical - P2
    • Fix Version/s: None
    • Affects Version/s: 2.2.1, 2.4.0-rc1
    • Component/s: Sharding
    • Labels: None
    • Operating System: ALL
    • Steps to Reproduce:
      1. Start a 3-shard cluster with 2 mongos processes (for my test this was single-replica shards, 1 config server, and 2 mongos processes).
      2. Attach a mongo shell to each mongos process.
      3. Insert a small number of records into a new database.
      4. Run find() in each shell to display the records.
      5. Run the movePrimary command to move the new database to a different shard.
      6. Run find() in each mongo shell: only the shell attached to the mongos against which movePrimary was run displays the records.

      Note that the mongos containing the stale shard location can be refreshed with either a restart or by running the flushRouterConfig command.
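
      For reference, a minimal mongo shell sketch of the steps above follows. The ports, database name (movetest), collection name (docs), and target shard name (shard0001) are illustrative assumptions, not values from the original test.

      // --- shell attached to mongos A (e.g. localhost:27017) ---
      use movetest
      for (var i = 0; i < 10; i++) { db.docs.insert({ _id: i }); }
      db.docs.find()    // records visible through mongos A

      // move the (unsharded) database's primary shard, still via mongos A
      db.adminCommand({ movePrimary: "movetest", to: "shard0001" })

      // --- shell attached to mongos B (e.g. localhost:27018) ---
      use movetest
      db.docs.find()    // appears empty: mongos B still routes to the old primary shard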

    • Description:

      We moved an (unsharded) database from one shard to another using the movePrimary command, following the instructions here:

      http://docs.mongodb.org/manual/tutorial/remove-shards-from-cluster/#remove-shard-move-unsharded-databases

      Having done that, users started complaining of unauthorized access. Sure enough, connecting to their local mongos showed that the database that had been moved, and the system.users collection within it, appeared empty. In other words, the mongos didn't pick up the fact that the database had moved.

      This is somewhat worrying, and it essentially required us to restart every mongos across the cluster. It also makes us worry that, if a process were authenticated (to admin, say), it would be writing to the wrong shard for that database and we'd experience data loss. It's also concerning that mongos processes don't appear to pick up changes like this automatically.
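
      Until a fix lands, the flushRouterConfig refresh mentioned in the steps above can be applied to every router from a single shell. A sketch follows, with hypothetical mongos hostnames and no auth handling:

      // connect to each mongos in turn and flush its cached routing metadata
      // so it reloads from the config servers (hostnames are assumptions)
      var routers = ["mongos1.example.net:27017", "mongos2.example.net:27017"];
      routers.forEach(function (host) {
          var conn = new Mongo(host);
          printjson(conn.getDB("admin").runCommand({ flushRouterConfig: 1 }));
      });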

            Assignee:
            james.wahlin@mongodb.com James Wahlin
            Reporter:
            jblackburn James Blackburn
            Votes:
            0
            Watchers:
            10
