[SERVER-2231] Existing sharding rules seem to be immutable even if I drop the target collection and rebuild it. Created: 16/Dec/10 Updated: 30/Mar/12 Resolved: 02/Sep/11 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | 1.6.4 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Guti.Deng | Assignee: | Unassigned |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | CentOS release 5.4 (Final), mongodb-linux-x86_64-1.6.4.tgz |
| Operating System: | Linux |
| Participants: |
| Description |
|
I made a mistake by sharding a GridFS chunks collection on its '_id'. I tried to runCommand({shardcollection: "xx.xx.chunks", key: {files_id: 1, n: 1}}), and mongo told me something like 'the collection has already been sharded'. Then, in the 'admin' database, I ran 'db.printShardingStatus()'. The result showed that the sharding rules on "xx.xx.chunks" had been modified as I wished. But when I ran my script to load data into this GridFS, the same error message was raised by pymongo. I've already found a solution, quite ugly, by renaming the GridFS... |
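For reference, a minimal pymongo sketch of the sequence described above. This is not the reporter's actual script: the mongos host is invented, the "weibo"/"gfs_msg_rt" names are taken from the comments below, and the use of pymongo instead of the mongo shell is an assumption for illustration only.

```python
# Hedged reproduction sketch of the steps described in the report
# (not the reporter's actual script). Host and namespace are assumptions.
from pymongo import MongoClient

client = MongoClient("mongos-host", 27017)  # connect through a mongos router
admin = client.admin

# 1. The chunks collection was first sharded on _id by mistake.
admin.command("enablesharding", "weibo")
admin.command("shardcollection", "weibo.gfs_msg_rt.chunks", key={"_id": 1})

# 2. Drop the collection and try to re-shard it on {files_id: 1, n: 1}.
client.weibo["gfs_msg_rt.chunks"].drop()
admin.command("shardcollection", "weibo.gfs_msg_rt.chunks",
              key={"files_id": 1, "n": 1})
# On 1.6.4 this second shardcollection is reported to fail with
# "already sharded", because the config metadata for the dropped
# collection is still in place.
```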
| Comments |
| Comment by Eliot Horowitz (Inactive) [ 16/Dec/10 ] |
|
Yes - Jira is not the best source of current information. It should be sharded by files_id only. |
| Comment by Guti.Deng [ 16/Dec/10 ] |
|
The latter problem is solved by replacing the chunk key {files_id: 1, n: 1} with {files_id: 1}. In this issue (http://jira.mongodb.org/browse/SERVER-889), {files_id: 1, n: 1} is recommended. Has some mechanism changed in the past year? Wouldn't it be nice if the drivers took care of sharding GridFS collections? |
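A minimal sketch of the key recommended above, sharding the chunks collection on files_id only. The driver does not do this for you, so the shardcollection command still has to be run by hand; the host and namespace names are placeholders.

```python
# Sketch: shard a GridFS chunks collection on files_id only, as recommended
# above. Host and namespace names are placeholders, not from this ticket.
import gridfs
from pymongo import MongoClient

client = MongoClient("mongos-host", 27017)
client.admin.command("enablesharding", "weibo")
client.admin.command("shardcollection", "weibo.gfs_msg_rt.chunks",
                     key={"files_id": 1})

# GridFS itself is unchanged: reads and writes go through the driver as usual
# once the underlying chunks collection is sharded.
fs = gridfs.GridFS(client.weibo, collection="gfs_msg_rt")
file_id = fs.put(b"example payload", filename="example.txt")
```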
| Comment by Guti.Deng [ 16/Dec/10 ] |
|
Another strange thing happens. From replica-set s002/primary:
Thu Dec 16 12:21:32 [conn32] building new index on { _id: 1 } for weibo.gfs_msg_rt.files
... for weibo.gfs_msg_rt.chunks
... for weibo.gfs_msg_rt.chunks
Operations and results:
> use admin
> db.runCommand({shardcollection: "weibo.gfs_msg_rt.chunks", key: {files_id: 1, n: 1}})
{ "collectionsharded" : "weibo.gfs_msg_rt.chunks", "ok" : 1 }
Process TreeLoader-1:5:
> use weibo
> db.gfs_msg_rt.chunks.find()
{ "_id" : ObjectId("4d09970e5d4bb07a3f000001"), "n" : 0, "data" : BinData(2,"MgAAAHsiMjAxMTAwNzExMzEyMzE1NjUiOiB7fSwgIjIwMTEwMDcxMTMxMDYzNTUwIjoge319"), "files_id" : "20110071131055329" }
{ "_id" : ObjectId("4d09970e5d4bb07a3f000003"), "n" : 0, "data" : BinData(2,"SQAAAHsiMjAxMTAwNzExMzE2NTc0ODIiOiB7fSwgIjIwMTEwMDcxMTMxMjM2NzUxIjogeyIyMDExMDA3MTEzMTU2MzgwOSI6IHt9fX0="), "files_id" : "20110071131055328" }
weibo.gfs_msg_rt.chunks chunks:
{ "files_id" : { $minKey : 1 }, "n" : { $minKey : 1 } } -->> { "files_id" : { $maxKey : 1 }, "n" : { $maxKey : 1 } } on : s002 { "t" : 1000, "i" : 0 }
weibo.gfs_msg_rt.files chunks:
{ "_id" : { $minKey : 1 } } -->> { "_id" : { $maxKey : 1 } } on : s002 { "t" : 1000, "i" : 0 } |
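One way to see exactly what the cluster recorded, independent of the db.printShardingStatus() formatting above, is to read the config database directly. A hedged sketch, assuming the same namespace and a reachable mongos:

```python
# Sketch: inspect the shard key and chunk ranges that the config servers
# actually store, rather than the printShardingStatus() summary.
from pymongo import MongoClient

client = MongoClient("mongos-host", 27017)  # mongos host is an assumption
config = client.config

meta = config.collections.find_one({"_id": "weibo.gfs_msg_rt.chunks"})
print("recorded shard key:", meta["key"] if meta else "not sharded")

for chunk in config.chunks.find({"ns": "weibo.gfs_msg_rt.chunks"}):
    print(chunk["min"], "-->>", chunk["max"], "on", chunk["shard"])
```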
| Comment by Guti.Deng [ 16/Dec/10 ] |
|
{ files_id: 1.0, n: 1.0 }
looks strange. I'm sure what I typed was {files_id: 1, n: 1}. |
| Comment by Guti.Deng [ 16/Dec/10 ] |
|
I grepped 'weibo.gfs_msg_rtree' from the logs of replica set s002/primary, where printShardingStatus indicates that it is the only location of 'weibo.gfs_msg_rtree.files' and 'weibo.gfs_msg_rtree.chunks':
Wed Dec 15 22:22:37 [conn31] building new index on { _id: 1 } for weibo.gfs_msg_rtree.files
... for weibo.gfs_msg_rtree.chunks
... for weibo.gfs_msg_rtree.chunks
... for weibo.gfs_msg_rtree.chunks
... for weibo.gfs_msg_rtree.chunks
... for weibo.gfs_msg_rtree.files
... for weibo.gfs_msg_rtree.chunks
... for weibo.gfs_msg_rtree.chunks
... for weibo.gfs_msg_rtree.files |
| Comment by Eliot Horowitz (Inactive) [ 16/Dec/10 ] |
|
Do you still have the logs? |