[SERVER-10395] Delete is too heavy for disk! Created: 01/Aug/13 Updated: 10/Dec/14 Resolved: 14/Aug/13 |
|
| Status: | Closed |
| Project: | Core Server |
| Component/s: | None |
| Affects Version/s: | 2.4.2, 2.4.5 |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Chimeng Wong | Assignee: | Unassigned |
| Resolution: | Done | Votes: | 0 |
| Labels: | crash, performance, replication, update | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Environment: | Red Hat 5.7, RAID6 disk, 4x 2.5 GHz CPU, 8 GB RAM |
||
| Participants: |
| Description |
|
My MongoDB deployment inserts new data continuously, and at the same time I have to remove the old data from the last minute. From my observation, the inserts and the removes contend for the same write lock. How can I distribute the lock? I want to improve my write operations.

The architecture is 9 mongos, 1 config server, and a replica set of 1 primary, 1 secondary, and 1 arbiter. At the EMC storage level I see a disk queue depth of 8~10, which is critical for me. While I am only inserting (plus replication getmore traffic), iostat -xm 1 shows the device like this:

VxVM5000 0.00 0.00 17.00 24.00 0.10 0.14 11.85 0.15 3.76 3.17 13.00

With sharding and replication running, disk utilization stays under 30%. But when I try to remove the last minute's data, utilization goes out of control: I observe %util staying above 90% and reaching 100%.

It seems deleting data is too heavy for MongoDB. How can I solve this problem? May I manually distribute the write lock? I would expect these operations to run concurrently.
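For reference, the per-minute cleanup is roughly the following, run from the mongo shell. This is only a sketch: the collection name "events" and the timestamp field "createTime" are illustrative, not my real schema.

    // Illustrative per-minute cleanup: remove documents whose timestamp
    // is older than one minute. "events" and "createTime" are example names.
    var cutoff = new Date(Date.now() - 60 * 1000);
    db.events.remove({ createTime: { $lt: cutoff } });

The remove matches a timestamp range, so a single call touches many documents while new inserts keep arriving on the same collection. |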
| Comments |
| Comment by Stennie Steneker (Inactive) [ 14/Aug/13 ] |
|
Hi Chimeng,

The Core Server project is intended for filing feature requests and bug reports. For support questions, please create a new discussion in the mongodb-user group: https://groups.google.com/forum/?fromgroups#!forum/mongodb-user

Thanks,
Stennie |