[SERVER-10155] restoring data into a sharded collection but it is not balanced Created: 10/Jul/13 Updated: 10/Dec/14 Resolved: 13/Aug/13 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | 2.2.3 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major - P3 |
| Reporter: | Gary Conway | Assignee: | Andre de Frere |
| Resolution: | Cannot Reproduce | Votes: | 1 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | Linux 2.6.18-308.el5 #1 SMP Fri Jan 27 17:17:51 EST 2012 x86_64 x86_64 x86_64 GNU/Linux |
| Operating System: | Linux |
| Steps To Reproduce: | Took a mongodump of an unsharded collection, collA. |
| Participants: |
| Description |
|
mongorestore is not sharding the data correctly and there are errors in the mongos log:

Wed Jul 10 09:08:57 [conn167] warning: splitChunk failed - cmd:

I stopped the mongorestore after a while and have waited since to see if the balancer would move chunks around, but nothing has happened. The balancer is running. Any ideas why it has not sharded the data? Should it have done so during the restore?

The shard key will contain strings. When creating the 3 chunks it has created them as:

This explains why all data is in chunk 2, but after 10GB of data I would have expected A-Z to split by now. We have a large distribution of values and our chunk size is 256MB. |
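The commands below are a minimal diagnostic sketch (mongo shell) for the situation described above, assuming a database named mydb, a collection named collA and a shard key field named name; these identifiers are illustrative placeholders, not taken from the ticket.

```javascript
// Placeholder names: "mydb", "collA" and the shard key field "name" are assumptions.

// Confirm the balancer is enabled and whether a balancing round is currently active.
sh.getBalancerState();    // true if the balancer is enabled
sh.isBalancerRunning();   // true only while a balancing round is in progress

// Count chunks per shard for the collection to see whether splits/migrations happened.
db.getSiblingDB("config").chunks.aggregate([
    { $match: { ns: "mydb.collA" } },
    { $group: { _id: "$shard", chunks: { $sum: 1 } } }
]);

// Per-shard data size and document counts for the collection.
db.getSiblingDB("mydb").collA.getShardDistribution();

// Recent split/migration activity recorded by the cluster.
db.getSiblingDB("config").changelog.find({ what: /split|moveChunk/ })
    .sort({ time: -1 }).limit(10);
```

If all chunks sit on a single shard and the changelog shows little or no recent split/migration activity, that would line up with the splitChunk warnings quoted from the mongos log above.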
| Comments |
| Comment by Kevin J. Rice [ 13/Aug/13 ] |
|
The mongorestore into an empty database is something I did several times, with no balancing happening. I believe this is the same problem that hits balancing in general: too much activity results in (a) splitChunk failing on the metadata lock (which I've seen), and (b) migrations failing too many times so that it gave up, or something similar (inferred). I then did a pre-split to one chunk per shard, and it worked better, in that it spread the load to a couple of (not all) shards while loading about 1/8th of my mongorestore. Then I decided to balance it before continuing. To get it to start balancing (it wasn't), I had to restart my mongos processes and toggle the balancer off and on with setBalancerState(false/true); then it started working and balanced. Finally, I loaded the remaining mongorestore data. |
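For reference, a rough sketch (mongo shell) of the pre-split-then-toggle-the-balancer workaround described in the comment above, assuming the namespace mydb.collA, a string shard key field name, three shards named shard0000 through shard0002, and split points "H" and "Q"; all of these names and values are illustrative, not from the ticket.

```javascript
// Illustrative names only: "mydb.collA", shard key field "name",
// shards "shard0000".."shard0002", and split points "H"/"Q".

sh.enableSharding("mydb");
sh.shardCollection("mydb.collA", { name: 1 });

// Pre-split the key range into roughly one chunk per shard before the restore.
sh.splitAt("mydb.collA", { name: "H" });
sh.splitAt("mydb.collA", { name: "Q" });

// Place one chunk on each shard so the mongorestore writes are spread out.
sh.moveChunk("mydb.collA", { name: "A" }, "shard0000");
sh.moveChunk("mydb.collA", { name: "H" }, "shard0001");
sh.moveChunk("mydb.collA", { name: "Q" }, "shard0002");

// The balancer toggle mentioned above.
sh.setBalancerState(false);
sh.setBalancerState(true);
sh.getBalancerState();   // should now report true
```

Pre-splitting before a bulk load avoids relying on the balancer to catch up while the insert load is already concentrated on a single chunk.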
| Comment by Stennie Steneker (Inactive) [ 13/Aug/13 ] |
|
Hi Gary, Thank you for confirming; closing the issue now. Regards, |
| Comment by Gary Conway [ 13/Aug/13 ] |
|
Hi. We have since done a number of mongorestores and sharding has either happened straight away or not long after the process finished. So this does look like it was a one-off due to some unknown conditions. Happy to close this. Thanks |
| Comment by Andre de Frere [ 13/Aug/13 ] |
|
I've tried to reproduce this issue but cannot. Are you still seeing it? Did it persist after the mongorestore finished? If so, are you able to open a new ticket for it, since the version differs between the two reports and the cause may be different? |
| Comment by Kevin J. Rice [ 29/Jul/13 ] |
|
This has occurred with our mongorestore as well. I'm hoping that it will resolve after the mongorestore is done. Note that we're running MongoDB 2.4.2, sharded and replicated, but our replicas are currently offline. |