[SERVER-14669] Updates/deletes on sharded collections shouldn't affect orphan documents Created: 24/Jul/14 Updated: 06/Dec/22 Resolved: 03/Feb/22
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding, Write Ops |
| Affects Version/s: | 2.2.7, 2.4.10, 2.6.3 |
| Fix Version/s: | 5.3.0 |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Jason Rassi | Assignee: | [DO NOT USE] Backlog - Sharding EMEA |
| Resolution: | Done | Votes: | 2 |
| Labels: | stop-orphaning-fallout |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Issue Links: | |
| Assigned Teams: | Sharding EMEA |
| Backwards Compatibility: | Fully Compatible |
| Operating System: | ALL |
| Sprint: | Sharding EMEA 2022-01-24, Sharding EMEA 2022-02-07 |
| Participants: | |
| Case: | (copied to CRM) |
| Description |
Multi-updates and multi-deletes on sharded collections affect orphan documents, and the affected orphans are counted in the WriteResult / getLastError stats. Writes to sharded collections should not affect orphan documents. Affects all currently released versions.
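A minimal sketch of the reported behavior, assuming a two-shard cluster with a collection test.coll sharded on { _id: 1 } and an orphaned copy of its single document left on the donor shard by an interrupted migration (all names here are hypothetical, not from the report):

{code:javascript}
// Run through mongos. Routed reads filter out orphans, so only the
// owned copy of the document is visible:
db.coll.find().itcount()  // 1

// A multi-update, however, also modifies the orphaned copy, and the
// orphan is counted in the reported write stats:
db.coll.update({}, { $set: { touched: true } }, { multi: true })
// WriteResult({ "nMatched" : 2, "nModified" : 2 })
{code}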
| Comments |
| Comment by Sergi Mateo Bellido [ 03/Feb/22 ] |

This issue has been fixed in the context of PM-2423; the fix will be available in 5.3 and newer versions.
| Comment by Andy Schwerin [ 13/Apr/18 ] |

Changing this behavior would require, at least for updates, substantial changes to either the chunk migration or routing table protocols.
| Comment by Asya Kamsky [ 06/Sep/17 ] |

This happens on non-multi updates as well, when the update is not targeted by the shard key to a single shard.
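A hypothetical sketch of that case, again assuming test.coll with an orphaned copy of the matching document on a non-owning shard:

{code:javascript}
// The predicate does not contain the shard key, so mongos cannot
// target the single owning shard:
db.coll.update({ name: "alice" }, { $set: { visits: 1 } })  // multi: false

// The write may be applied to the orphaned copy instead of (or in
// addition to) the owned document, and counted accordingly.
{code}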
| Comment by Asya Kamsky [ 06/Sep/17 ] |

My testing shows that this also applies when the shard key is in the query predicate, if the predicate matches the orphan documents.
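A sketch of this variant, assuming (hypothetically) that the shard key sk appears in the predicate as a range, and that a non-owning shard holding matching orphans is still targeted, for example because it owns other chunks overlapping the range:

{code:javascript}
db.coll.update(
    { sk: { $gte: 0, $lt: 100 } },   // shard key present, but as a range
    { $set: { reviewed: true } },
    { multi: true }
)
// Orphaned documents with sk in [0, 100) on a targeted shard are
// modified and counted as well.
{code}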
| Comment by Henrik Ingo (Inactive) [ 07/Oct/16 ] |

Use-case-related comment: when tailing an oplog on a sharded cluster, it is unfortunate that updates and deletes to documents that "aren't really there" get logged. It seems impossible for the application tailing the oplog to know about or filter out such "orphan updates".
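To illustrate why filtering is impossible, a minimal sketch using the legacy mongo shell connected directly to a shard's primary (the namespace is hypothetical):

{code:javascript}
// Tail this shard's oplog for update entries on one namespace:
var cursor = db.getSiblingDB("local").oplog.rs
    .find({ ns: "test.coll", op: "u" })
    .addOption(DBQuery.Option.tailable)
    .addOption(DBQuery.Option.awaitData);

while (cursor.hasNext()) {
    var entry = cursor.next();
    // The entry records the namespace, the target document's _id, and
    // the modification, but nothing indicating whether the document was
    // owned by this shard or an orphan, so orphan writes cannot be
    // distinguished or filtered out.
    printjson(entry);
}
{code}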