Description
With a hashed shard key, it seems very difficult to split a chunk that is too big to move.
sh.splitFind claimed to split the chunk, but didn't:
config> db.chunks.find({shard:"test-rs4"}).count()
12
config> sh.splitFind("test.hashy", {'user_id' : {$gt : NumberLong("6136905156946055959"), $lt : NumberLong("6376878206583911474")}})
{ "ok" : 1 }
config> db.chunks.find({shard:"test-rs4"}).count()
12
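For reference, the actual (post-hash) chunk boundaries can be read straight out of the config database; any manual split point has to fall strictly between one chunk's min and max. A sketch against a live cluster, reusing the namespace and shard name from the transcript above:

```
// List the hashed boundary values of the chunks currently on the
// overloaded shard. The min/max fields are hashed shard key values,
// not raw user_id values.
use config
db.chunks.find(
    { ns: "test.hashy", shard: "test-rs4" },
    { min: 1, max: 1, _id: 0 }
).forEach(printjson)
```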
Running split with middle gave this fantabulous error message:
> db.adminCommand({split:"test.hashy", middle:{user_id:NumberLong("6236905156946055959")}})
{
	"cause" : {
		"errmsg" : "exception: can split { user_id: 4525968722311181770 } -> { user_id: 4914820391477438346 } on { user_id: 6236905156946055959 }",
		"code" : 14040,
		"ok" : 0
	},
	"ok" : 0,
	"errmsg" : "split failed"
}
(NumberLong("6236905156946055959") was chosen randomly, just in case you could pass in a hashed value and have it work).
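Assuming the 2.4 bounds parameter described in the linked DOCS-1043/DOCS-990 tickets, a workaround would be to address the chunk by its exact hashed boundary values instead of guessing a middle point. A hedged sketch, with both boundary values taken from the error message above:

```
// Split the chunk in half by naming its exact (hashed) boundaries
// rather than a middle value. Assumes the `bounds` field parameter
// planned for the split command in 2.4 (see DOCS-1043).
db.adminCommand({
    split: "test.hashy",
    bounds: [
        { user_id: NumberLong("4525968722311181770") },
        { user_id: NumberLong("4914820391477438346") }
    ]
})
```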
It also seems easy to end up in this situation: I randomly populated a collection with 1 million docs and a 1 MB chunk size, and no chunk I've tried so far has been small enough to move.
Issue Links
- depends on: DOCS-1043 moveChunk and split commands have/will have new field parameters for 2.4 (Closed)
- is related to: SERVER-4172 moveChunk need to specify find error unclear (Closed)
- related to: DOCS-990 Document bounds parameter of moveChunk (Closed)
- related to: SERVER-8335 remember _dataWritten for unaffected chunks after partial chunk manager reload (Closed)