[SERVER-39566] Questions related to Filtered Replication. Created: 13/Feb/19 Updated: 14/Feb/19 Resolved: 14/Feb/19 |
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Replication |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | Question | Priority: | Major - P3 |
| Reporter: | Karthick [X] | Assignee: | Kelsey Schubert |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: | ||
| Participants: | Karthick [X], Kelsey Schubert |
| Comments |
| Comment by Kelsey Schubert [ 14/Feb/19 ] |
Hi Karthick, Thanks for your report. Please note that the SERVER project is for reporting bugs or feature suggestions for the MongoDB server. The feature you reference is tracked in SERVER-9780, but there are a number of other possible workarounds that may be just as performant in your case, such as initial syncing off of a secondary or using a backup and restore solution. Unfortunately, choosing the best data migration technique for this operation requires more in-depth discussion than is possible here. For MongoDB-related support discussion, please post on the mongodb-user group or on Stack Overflow with the mongodb tag. Kind regards,
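The backup and restore workaround mentioned above could look roughly like the following with the standard tools. This is a minimal sketch only: the replica set name (`rs0`), host names (`dc1-node1`, `dc2-node1`), and dump path are hypothetical placeholders, not details from this ticket.

```bash
# Minimal sketch of the backup-and-restore workaround. The set name
# (rs0), host names, and dump path are hypothetical placeholders.
# Read from a secondary to keep dump load off the primary; --oplog
# captures writes that happen during the dump so the restore is
# consistent.
mongodump --host "rs0/dc1-node1:27017" --readPreference secondary \
          --oplog --gzip --out /backups/dc1-dump

# Replay the dump (including the captured oplog entries) into the
# deployment in the second data center.
mongorestore --host dc2-node1:27017 --oplogReplay --gzip /backups/dc1-dump
```

Reading from a secondary keeps the dump load off the primary, and `--oplog`/`--oplogReplay` make the restore consistent with writes that arrive while the dump is running.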
| Comment by Karthick [X] [ 13/Feb/19 ] |
Hi Team, we have an urgent requirement.
1. We need to move around 200 GB of data from one data center to another. We are considering a filtered replication approach: add a new secondary member in the new data center and replicate databases (or collections) one after another, so that writes on the primary are not impacted by the replication process or by any unforeseen latency. We would then repeat the process for 2 more nodes and finally decommission the nodes in data center 1. Is this possible, and is the feature (filtered replication) already available? While export and import are already available, the data is large (around 200 GB), so we are looking for a simpler approach (see the sketch after this list). Please advise.
2. Can we replicate data directly from a cluster on a 3.x version to another member on 4.0.3?
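For reference, the unfiltered version of the plan in question 1 can be done today with ordinary replica set reconfiguration; only the per-database/per-collection filtering is the missing feature tracked in SERVER-9780. A minimal sketch, assuming a hypothetical replica set `rs0` with an existing member `dc1-node1` and a new member `dc2-node1`:

```bash
# Minimal sketch of a rolling data-center migration via replica set
# reconfiguration. All host names and the set name (rs0) are
# hypothetical placeholders.

# 1. Add the new DC2 member as a non-voting, priority-0 member so it
#    cannot affect elections while its initial sync runs.
mongo --host rs0/dc1-node1:27017 --eval '
  rs.add({ host: "dc2-node1:27017", priority: 0, votes: 0 })'

# 2. Poll until the new member reports SECONDARY (initial sync done).
mongo --host rs0/dc1-node1:27017 --eval '
  rs.status().members.forEach(function (m) {
    print(m.name + " : " + m.stateStr);
  })'

# 3. Once synced, give the member back a vote and normal priority.
mongo --host rs0/dc1-node1:27017 --eval '
  var cfg = rs.conf();
  cfg.members.forEach(function (m) {
    if (m.host === "dc2-node1:27017") { m.priority = 1; m.votes = 1; }
  });
  rs.reconfig(cfg);'

# 4. Repeat for the remaining DC2 members, then remove the DC1 members.
mongo --host rs0/dc2-node1:27017 --eval 'rs.remove("dc1-node1:27017")'
```

Adding each new member with `priority: 0, votes: 0` keeps the initial sync from affecting elections or majority write concern; the priority and vote are restored only after the member reaches SECONDARY. Note this replicates the entire data set rather than a filtered subset.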