[SERVER-28066] Sharding on GridFS files, all the files end up on the same shard Created: 21/Feb/17 Updated: 31/May/17 Resolved: 23/Mar/17 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | 3.2.10 |
| Fix Version/s: | None |
| Type: | Question | Priority: | Major - P3 |
| Reporter: | Stephane Marquis | Assignee: | Kelsey Schubert |
| Resolution: | Done | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: |
|
||||
| Participants: | |||||
| Description |
|
Hi! We have a collection containing a large number of files, with sharding enabled on the following key: fs.chunks : {file_id : 1, n: 1}. The files in it are ~15.27 MB each, and when I run fs.chunks.getShardDistribution() I'm getting: Shard shard0000 at server1:27018, Shard shard0001 at server2:27018, Shard shard0002 at server3:27018, Totals. We're starting to run out of space on the server hosting shard0002 and can't figure out why the shards aren't balancing out :S In the log we're seeing errors like: and googling them doesn't give us much information. Is there anything we're missing here? |
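| For reference, sharding a GridFS chunks collection with a key like the one above would look roughly like the sketch below, run from a mongos shell. The database name "mydb" is a placeholder, and note that the standard GridFS chunk field is "files_id" (the description says "file_id", which may be worth double-checking):

```javascript
// Illustrative setup against a mongos; "mydb" is a placeholder name.
// The GridFS specification names the chunk's parent-file field "files_id".
sh.enableSharding("mydb");
sh.shardCollection("mydb.fs.chunks", { files_id: 1, n: 1 });

// Inspect how documents and data size are spread across the shards:
db.getSiblingDB("mydb").fs.chunks.getShardDistribution();
```

If the collection was sharded on a misspelled field, every document would share the same key value and could never be split across shards. |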
| Comments |
| Comment by Kelsey Schubert [ 08/Mar/17 ] |
|
Hi smarquis, The log error you've shared indicates that the maximum number of split points in a chunk, 8192, has been reached. This isn't a limit on the number of chunks you can end up with, just on the number of pieces one chunk can be split into at a time. To work around this, you can increase the chunk size from the default 64 MB to something higher, say 128 MB or 256 MB. This reduces the number of pieces that a particular chunk needs to be split into. At a later stage you can lower the chunk size back to the default 64 MB, so that you don't end up with very large chunks. Would you please follow these steps, and let us know if it resolves the issue? Thank you, |
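| As a rough sketch of the steps above: in the 3.2 series the max chunk size is a cluster-wide setting stored in config.settings (values in MB), changed from a mongos shell. The exact values (256, then back to 64) follow the suggestion in the comment:

```javascript
// Run against a mongos. Chunk size lives in config.settings (in MB).
var configDB = db.getSiblingDB("config");

// Temporarily raise the max chunk size so oversized chunks can be split
// with fewer split points.
configDB.settings.save({ _id: "chunksize", value: 256 });

// ...once the balancer has split and migrated the backlog, restore the
// default so future chunks stay small:
configDB.settings.save({ _id: "chunksize", value: 64 });
```

Changing this setting does not immediately resize existing chunks; it only changes the threshold at which the balancer splits and migrates them going forward. |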