[SERVER-21411] allow limits on the amount of data stored on a shard Created: 11/Nov/15 Updated: 06/Dec/22 Resolved: 12/Jun/17 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | Sharding |
| Affects Version/s: | None |
| Fix Version/s: | None |
| Type: | New Feature | Priority: | Minor - P4 |
| Reporter: | marian badinka | Assignee: | [DO NOT USE] Backlog - Sharding Team |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: | |
| Assigned Teams: | Sharding |
| Participants: | |
| Description |
| Comments |
| Comment by Kelsey Schubert [ 12/Jun/17 ] | ||
|
On second look, this appears to be a duplicate of another ticket. Kind regards, | ||
| Comment by marian badinka [ 02/Dec/15 ] | ||
|
Hi Thomas, we have 20+ deployments of MongoDB in our data center. Many servers have spare CPU and disk capacity. The idea is to let other projects use the spare capacity of existing mongod instances (mainly disk space; a CPU limit would also be great). We could create a sharded cluster with shards built only from the spare capacity of existing servers, but to avoid any impact of this new cluster on existing projects, the limit parameter is required. For example, a sharded cluster for BI tools could use only 10 GB from Server1, only 20 GB and 3 CPUs from Server2, and only 4 GB and 8 CPUs from Server3. The limit parameter would ensure that the growth of another database has no impact on existing mongod deployments. Regards, Marian | ||
| Comment by Kelsey Schubert [ 30/Nov/15 ] | ||
|
That's right, the maxSize setting only affects which shards the balancer selects as destinations for new chunks. For others following this ticket, the documentation can be found here. At this time, there is no way to limit the amount of data stored on a shard beyond this functionality. Since I am not aware of an existing feature request, I am repurposing this ticket as an improvement request. If you could share your use case, it would help us understand the specific functionality you are looking for. Thank you, | ||
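As a sketch of how the maxSize setting described above can be applied to an already-added shard (based on the MongoDB documentation of that era; the shard name and the 500 MB value are illustrative placeholders), the setting lives in the config database's shards collection:

```javascript
// Run in the mongo shell, connected to a mongos.
// NOTE: the shard name ("shard0000") and the 500 MB limit are
// placeholders. maxSize is expressed in megabytes and, as discussed
// in this ticket, only influences the balancer's choice of destination
// shards -- it is not a hard cap on storage.
use config
db.shards.update(
    { _id: "shard0000" },        // the shard to limit
    { $set: { maxSize: 500 } }   // maximum size in MB
)
```

If the balancer is disabled, this value has no effect, which matches the behavior reported in this ticket.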
| Comment by marian badinka [ 20/Nov/15 ] | ||
|
Hi Thomas, yes, we use WiredTiger. What we observed is that mongos doesn't honor the limit parameter and keeps saving data to any shard regardless of the limit set. I now understand that "The maxSize value only affects the balancer's selection of destination shards," so if the balancer is off, no limit is applied. But is there any way to tell mongos to use only, e.g., 500 MB for a shard and not go beyond that? Thanks, Marian | ||
| Comment by Kelsey Schubert [ 18/Nov/15 ] | ||
|
In my testing, I noticed that the limits are not strictly observed; however, the limit was eventually enforced and the balancer did stop sending new chunks to the shard. What level of discrepancy between the maxSize limit and the storage size are you seeing? What is the impact of this discrepancy on your deployment? Are you using WiredTiger or MMAPv1? Thank you, | ||
| Comment by marian badinka [ 11/Nov/15 ] | ||
|
Hi Ramon, yes,
the sh.addShard() helper doesn't support maxSize. Unfortunately the result is the same: no limit is accepted. Thanks, Marian | ||
| Comment by Ramon Fernandez Marina [ 11/Nov/15 ] | ||
|
marian.badinka@dhl.com, the sh.addShard() helper takes only one argument; quoting from the sh.addShard() documentation:
Have you tried using the addShard command to see whether the maxSize parameter takes effect? Thanks, |
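Ramon's suggestion of running the addShard command directly (rather than the sh.addShard() helper) might look like the following sketch, based on the addShard documentation of that era; the replica-set/host string and the 500 MB limit are illustrative placeholders:

```javascript
// Run against a mongos in the mongo shell.
// The host string and the limit below are placeholders, not values
// from this ticket. maxSize is in megabytes; 0 means no limit.
db.adminCommand({
    addShard: "rs1/mongodb0.example.net:27017",
    maxSize: 500
})
```

Per the rest of this thread, even when set this way the limit only guides the balancer's chunk placement; it does not hard-cap writes routed by mongos.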