[SERVER-3393] Dropping a sharded collection and re-sharding it leads to inconsistent inserts Created: 07/Jul/11 Updated: 02/Sep/11 Resolved: 02/Sep/11
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | 1.8.2 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Mike K | Assignee: | Unassigned |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None |
| Remaining Estimate: | Not Specified |
| Time Spent: | Not Specified |
| Original Estimate: | Not Specified |
| Environment: | Ubuntu Natty Narwhal, EC2 |
| Issue Links: | |
| Operating System: | ALL |
| Participants: | |
| Description |
Steps to repro (has happened 3 times to us). Given a cluster of 2 shards (let's call them S0 and S1):

1. At T0, issue enablesharding and shardcollection, then pre-split the collection.
2. Start inserting through the mongos instances on our two app servers (A0 and A1).

At this point, we noticed the pre-split wasn't quite correct and was only writing to S1 (our mistake), so we did the following:

1. Stopped throwing writes at the cluster.
2. Dropped the collection.
3. Re-issued shardcollection with a corrected pre-split.
4. Resumed writes.

Expected behavior would be for the mongos on A0 and A1 to pick up the new chunk information; instead, all the inserts went into S1, as if the mongos on A0 and A1 never picked up the changes from the second pre-split (even though printShardingStatus() on A0 and A1's mongos looked okay).

The only solution we found was to restart mongos after issuing our pre-split commands. After restarting mongos, all the writes went to the correct places, split between S0 and S1.

This seems like either a bug in mongos failing to pick up these changes, or the docs should be updated to specify that mongos must be restarted after a collection has been dropped and re-sharded.
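For context, here is a minimal sketch of the kind of command sequence described above, run from the mongo shell against a mongos. The database name, collection name, shard key, split points, and shard name are invented for illustration; the ticket does not give the actual values:

```javascript
// Run against a mongos. "testdb", "events", the userId shard key, the
// split points, and the shard name "shard0001" are hypothetical stand-ins
// for the reporter's actual setup.

// Enable sharding on the database and shard the collection.
db.adminCommand({ enablesharding: "testdb" });
db.adminCommand({ shardcollection: "testdb.events", key: { userId: 1 } });

// Pre-split into chunks at chosen shard-key boundaries...
db.adminCommand({ split: "testdb.events", middle: { userId: 1000 } });
db.adminCommand({ split: "testdb.events", middle: { userId: 2000 } });

// ...and move one of the resulting chunks so that writes land on both
// S0 and S1 rather than all going to a single shard.
db.adminCommand({ moveChunk: "testdb.events", find: { userId: 1500 }, to: "shard0001" });
```

Restarting mongos works around the problem because it forces a fresh load of the sharding metadata from the config servers. On versions that support it, running `db.adminCommand({ flushRouterConfig: 1 })` against each mongos forces the same refresh of the cached routing table without a restart.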
| Comments |
| Comment by Eliot Horowitz (Inactive) [ 02/Sep/11 ] |
See the linked duplicate issue.