[DOCS-3742] Document Resync Backup for a Sharded Cluster Created: 10/Jul/14  Updated: 16/Mar/15  Resolved: 19/Nov/14

Status: Closed
Project: Documentation
Component/s: Cloud Manager
Affects Version/s: None
Fix Version/s: v1.3.14

Type: Task Priority: Critical - P2
Reporter: Bob Grabar Assignee: Bob Grabar
Resolution: Done Votes: 0
Labels: None
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Attachments: PNG File shards.png    
Participants:

 Description   

Select Backup > Sharded Cluster Status

In the SHARD STATUS column click "See Shards/Configs."

The column expands to display a table.

For each shard and config server:

  • Click "resync".
  • Accept the default selection in the "Sync source" drop-down list.
  • Click the "Resync" button.
  • If prompted, enter your two-factor verification code and select "Verify." Then click the "Resync" button again.
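The per-shard clicks above could in principle be scripted against the Cloud Manager public API. The sketch below only builds the request it would send; the endpoint path, the `syncSource` field, and the `statusName` value are assumptions for illustration, so verify them against the Cloud Manager API reference before relying on this.

```python
# Hypothetical sketch of triggering a resync for one shard's backup config
# via the Cloud Manager public API. Endpoint path and payload fields are
# ASSUMED, not confirmed by this ticket -- check the API docs first.

BASE_URL = "https://cloud.mongodb.com/api/public/v1.0"  # assumed base URL


def build_resync_request(group_id: str, cluster_id: str,
                         sync_source: str = "default"):
    """Return (method, url, body) for a hypothetical per-cluster resync call.

    group_id / cluster_id identify the shard or config server's backup
    config; sync_source mirrors the "Sync source" drop-down in the UI.
    """
    url = f"{BASE_URL}/groups/{group_id}/backupConfigs/{cluster_id}"
    body = {"statusName": "STARTED", "syncSource": sync_source}  # assumed fields
    return "PATCH", url, body
```

In practice you would loop over every shard and config server returned by the cluster status endpoint and issue one such request per member, matching the "for each shard and config server" instruction above.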


 Comments   
Comment by Githook User [ 05/Dec/14 ]

Author:

Bob Grabar (bgrabar) <bob.grabar@10gen.com>

Message: DOCS-3742 resync backup for a cluster or replica set
Branch: master
https://github.com/10gen/mms-docs/commit/c357aed9b1250a66a2533bd911e60c73d1d08026

Comment by Githook User [ 04/Nov/14 ]

Author:

Bob Grabar (bgrabar) <bob.grabar@10gen.com>

Message: DOCS-3742 resync backup for a sharded cluster
Branch: master
https://github.com/10gen/mms-docs/commit/ff18f1da8dcc8a7a1a43ee464203e63d4bcbf36b

Comment by Steve Briskin (Inactive) [ 31/Oct/14 ]

Additional cases where Dan's point #1 (agent losing the oplog tail) can happen:

  • The agent is running on an under-provisioned or over-utilized machine and cannot keep up with oplogs being generated.
  • Another variation: the agent loses the tail of the oplog if it is down for longer than the time span of the oplog. A classic case is an agent brought down during maintenance and not brought back up in a timely manner.
  • All replica set data is deleted and a new replica set is brought up with the same name. This is most common with dev/test deployments that are regularly torn down and rebuilt.
  • There is a rollback and the agent is unable to find a common point. This is rare in practice.
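The downtime case above reduces to a simple check: the agent needs a resync once its last applied oplog entry predates the oldest entry still present in the oplog (i.e. its position has rolled off the window). A minimal illustrative sketch; the function name and inputs are hypothetical, not actual Backup agent code:

```python
def needs_resync(agent_last_applied_ts: int, oplog_first_ts: int) -> bool:
    """True when the agent's tailing position has rolled off the oplog.

    Both timestamps are seconds since the epoch. If the agent's last
    applied entry is older than the oldest entry the oplog still holds,
    the intervening operations are unrecoverable and a resync is required.
    """
    return agent_last_applied_ts < oplog_first_ts
```

For example, if the oplog currently covers 09:00 through 10:00 and the agent last applied an entry at 08:30, the 08:30-09:00 operations are gone and only a resync can restore a consistent backup.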
Generated at Thu Feb 08 07:46:25 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.