[SERVER-34878] if setShardVersion for dropCollection not sent and collection re-created, migrations for new collection can fail indefinitely Created: 07/May/18  Updated: 27/Oct/23  Resolved: 26/Jul/21

Status: Closed
Project: Core Server
Component/s: Sharding
Affects Version/s: 3.7.9
Fix Version/s: 4.1 Desired, 5.0.0

Type: Bug Priority: Major - P3
Reporter: Esha Maharishi (Inactive) Assignee: [DO NOT USE] Backlog - Sharding EMEA
Resolution: Gone away Votes: 0
Labels: sharding-causes-bfs-hard
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Depends
Related
related to SERVER-34904 recipient shard may be unable to acce... Closed
Assigned Teams:
Sharding EMEA
Operating System: ALL
Steps To Reproduce:

Apply the following patch to remove the setShardVersion from dropCollection (to deterministically reproduce this issue) and run jstests/sharding/upsert_sharded.js.

diff --git a/src/mongo/db/s/config/sharding_catalog_manager_collection_operations.cpp b/src/mongo/db/s/config/sharding_catalog_manager_collection_operations.cpp
index 16dd7c4..8c4f39f 100644
--- a/src/mongo/db/s/config/sharding_catalog_manager_collection_operations.cpp
+++ b/src/mongo/db/s/config/sharding_catalog_manager_collection_operations.cpp
@@ -420,55 +420,6 @@ Status ShardingCatalogManager::dropCollection(OperationContext* opCtx, const Nam
     LOG(1) << "dropCollection " << nss.ns() << " collection marked as dropped";
-    for (const auto& shardEntry : allShards) {
-        auto swShard = shardRegistry->getShard(opCtx, shardEntry.getName());
-        if (!swShard.isOK()) {
-            return swShard.getStatus();
-        }
-
-        const auto& shard = swShard.getValue();
-
-        SetShardVersionRequest ssv = SetShardVersionRequest::makeForVersioningNoPersist(
-            shardRegistry->getConfigServerConnectionString(),
-            shardEntry.getName(),
-            fassertStatusOK(28781, ConnectionString::parse(shardEntry.getHost())),
-            nss,
-            ChunkVersion::DROPPED(),
-            true);
-
-        auto ssvResult = shard->runCommandWithFixedRetryAttempts(
-            opCtx,
-            ReadPreferenceSetting{ReadPreference::PrimaryOnly},
-            "admin",
-            ssv.toBSON(),
-            Shard::RetryPolicy::kIdempotent);
-
-        if (!ssvResult.isOK()) {
-            return ssvResult.getStatus();
-        }
-
-        auto ssvStatus = std::move(ssvResult.getValue().commandStatus);
-        if (!ssvStatus.isOK()) {
-            return ssvStatus;
-        }
-
-        auto unsetShardingStatus = shard->runCommandWithFixedRetryAttempts(
-            opCtx,
-            ReadPreferenceSetting{ReadPreference::PrimaryOnly},
-            "admin",
-            BSON("unsetSharding" << 1),
-            Shard::RetryPolicy::kIdempotent);
-
-        if (!unsetShardingStatus.isOK()) {
-            return unsetShardingStatus.getStatus();
-        }
-
-        auto unsetShardingResult = std::move(unsetShardingStatus.getValue().commandStatus);
-        if (!unsetShardingResult.isOK()) {
-            return unsetShardingResult;
-        }
-    }
-
     LOG(1) << "dropCollection " << nss.ns() << " completed";
     catalogClient

Participants:
Linked BF Score: 27

 Description   

Let's say the setShardVersion for dropCollection is not sent, and the collection is re-created (and re-sharded). This means the old chunks remain as garbage in the shard's config.cache.chunks.<ns>. (This is because the shard never refreshes after the drop. If it had, it would have seen that the collection had been dropped and scheduled a task to drop its local config.cache.chunks.<ns> collection.)
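
For concreteness, here is a hypothetical sketch (not taken from the server source; all values are made up) of what one of those leftover documents in the shard's config.cache.chunks.<ns> might look like, built with the server's BSON macros. The persisted format keys each cached chunk by its min bound:

#include "mongo/bson/bsonmisc.h"
#include "mongo/bson/bsonobjbuilder.h"
#include "mongo/bson/timestamp.h"

using namespace mongo;

// Hypothetical leftover document in the shard's config.cache.chunks.<ns> after
// the missed refresh: a chunk of the *old* collection (shard key {x: 1}) that
// was never cleaned up. "lastmod" encodes major|minor version 5|1, left over
// from a migration that happened before the drop. Values are invented.
const BSONObj staleCachedChunk =
    BSON("_id" << BSON("x" << MINKEY)         // the cached chunk's min bound
               << "max" << BSON("x" << 0)     // its max bound
               << "shard" << "shard0000"      // shard that owned it at drop time
               << "lastmod" << Timestamp(5, 1));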

The setShardVersion sent for shardCollection on the new collection will cause the shard to refresh and find the new collection entry. Since its epoch differs from the epoch of the entry in the shard's config.cache.collections, the shard will query for chunks with version $gte 0|0|<new epoch>.
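
Roughly the shape of that diff query against the config server's config.chunks (an approximation, not the exact server code; "test.foo" is a made-up namespace). The major|minor part of 0|0|<new epoch> packs into Timestamp(0, 0), so every chunk of the new collection qualifies:

#include "mongo/bson/bsonmisc.h"
#include "mongo/bson/bsonobjbuilder.h"
#include "mongo/bson/timestamp.h"

using namespace mongo;

// Approximate shape of the remote query for the "changed chunks" of the newly
// sharded collection. The epoch itself is carried in the ChunkVersion that the
// $gte bound was built from; it is not an explicit predicate on config.chunks.
const BSONObj remoteDiffQuery =
    BSON("ns" << "test.foo"  // pre-5.0 config.chunks documents are keyed by ns
              << "lastmod" << BSON("$gte" << Timestamp(0, 0)));  // 0|0: everything matches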

The shard will find the chunk for the new collection and persist it in its config.cache.chunks.<ns>, alongside the garbage chunks. As part of this, the shard will attempt to delete chunks in config.cache.chunks.<ns> with ranges that "overlap" the new chunk.

However, the old chunks may not get deleted if their shard key sorts as "less than" the new shard key (see SERVER-34856).
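
A hypothetical illustration of how the previous two steps can miss the garbage, assuming the overlap cleanup issues a range delete keyed on the cached chunks' min (_id) values, roughly {_id: {$gte: <new min>, $lt: <new max>}} (one way the behaviour referenced in SERVER-34856 can arise). Because default BSON ordering takes field names into account, a min key from the old shard key can sort below the new chunk's min and fall outside the delete range:

#include "mongo/bson/bsonmisc.h"
#include "mongo/bson/bsonobjbuilder.h"
#include "mongo/bson/simple_bsonobj_comparator.h"
#include "mongo/util/assert_util.h"

using namespace mongo;

// Say the old collection was sharded on {x: 1} and the re-created one on {y: 1}.
const BSONObj staleMin = BSON("x" << MINKEY);  // _id of a leftover cached chunk
const BSONObj newMin = BSON("y" << MINKEY);    // min of the new collection's chunk
const BSONObj newMax = BSON("y" << MAXKEY);    // max of the new collection's chunk

// Field names participate in the comparison, so {x: MinKey} sorts below
// {y: MinKey}. A delete of the form {_id: {$gte: newMin, $lt: newMax}}
// therefore skips the stale document entirely.
invariant(SimpleBSONObjComparator::kInstance.evaluate(staleMin < newMin));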

. . .

Now, at the beginning of a migration, the donor shard forces a refresh.

The refresh first finds the highest version of any chunk persisted on the shard in config.cache.chunks.<ns>, then queries config.chunks for chunks with version $gte that version (notice that the new collection's epoch is not part of the local query).
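
A sketch (not the exact loader code) of the local lookup that produces that $gte bound:

#include "mongo/bson/bsonmisc.h"
#include "mongo/bson/bsonobjbuilder.h"

using namespace mongo;

// Take the cached chunk with the greatest lastmod in config.cache.chunks.<ns>.
// No epoch appears anywhere in this lookup, so a garbage chunk left over from
// the old collection can win.
const BSONObj matchAllCachedChunks;                        // empty filter: every cached chunk
const BSONObj newestVersionFirst = BSON("lastmod" << -1);  // sort by version, descending
// findOne(config.cache.chunks.<ns>, matchAllCachedChunks, newestVersionFirst)
// -> the winning document's lastmod seeds the remote query's $gte bound.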

If there is a garbage chunk with a major version greater than 1 (that is, at least one migration occurred in the old collection before it was dropped), it will be returned by the local query and its version will be used as the $gte bound for the remote query. So the remote query will find no chunks for the new collection.
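
To make the version arithmetic concrete, here is a sketch with made-up numbers: the stale cached chunk carries 5|1 from the old collection, while the re-created collection's only chunk is at 1|0 under the new epoch, so the bound derived from the local maximum excludes every real chunk:

#include "mongo/bson/oid.h"
#include "mongo/s/chunk_version.h"
#include "mongo/util/assert_util.h"

using namespace mongo;

// Made-up versions for illustration. The garbage chunk outranks the new
// collection's chunks on major|minor alone, and the local "max version" scan
// never looks at the epoch, so the mismatch goes unnoticed.
const OID oldEpoch = OID::gen();
const OID newEpoch = OID::gen();

const ChunkVersion garbageVersion(5, 1, oldEpoch);      // highest lastmod in config.cache.chunks.<ns>
const ChunkVersion newestRealVersion(1, 0, newEpoch);   // only chunk in config.chunks

// The remote query becomes roughly {ns: <ns>, lastmod: {$gte: Timestamp(5, 1)}};
// the new collection's chunk at 1|0 does not satisfy it, so the diff is empty.
invariant(garbageVersion.toLong() > newestRealVersion.toLong());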

The ShardServerCatalogCacheLoader will return ConflictingOperationInProgress, which will cause the CatalogCache to retry the refresh 3 times and get the same result.
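
A minimal sketch of the retry behaviour described here. The retry count (3) and the ConflictingOperationInProgress error come from the description above; the refreshRoutingInfo() helper and its error message are invented stand-ins for the real CatalogCache refresh path:

#include "mongo/base/status.h"

using namespace mongo;

// Hypothetical stand-in for the CatalogCache's refresh path; in this scenario
// it keeps failing because the persisted garbage chunk never goes away.
Status refreshRoutingInfo() {
    return {ErrorCodes::ConflictingOperationInProgress,
            "refresh found no chunks newer than the persisted metadata (illustrative message)"};
}

Status refreshWithRetries() {
    constexpr int kMaxRetries = 3;  // the retry budget per the description above
    Status status = Status::OK();
    for (int attempt = 0; attempt < kMaxRetries; ++attempt) {
        status = refreshRoutingInfo();
        if (status.code() != ErrorCodes::ConflictingOperationInProgress) {
            return status;  // success, or some unrelated error
        }
        // Same stale metadata -> same ConflictingOperationInProgress; retry.
    }
    return status;  // retries exhausted; moveChunk fails with this error
}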

Once the retries are exhausted, the donor will fail the moveChunk command with ConflictingOperationInProgress.



 Comments   
Comment by Kaloian Manassiev [ 26/Jul/21 ]

As of FCV 5.0, the new DDL code paths no longer use setShardVersion so closing this as 'Gone Away'.

Comment by Esha Maharishi (Inactive) [ 11/May/18 ]

I think it was introduced in 3.6 because that's when we added the loader.

Yes, similar to SERVER-34904; in this one, the setShardVersion to the donor was not sent; in that one, the setShardVersion to the recipient was not sent.

Comment by Kaloian Manassiev [ 11/May/18 ]

This has existed since 3.2, right? Also, is it related to SERVER-34904?
