[SERVER-13775] maxTimeMS on an instance/database level Created: 28/Apr/14 Updated: 30/Aug/23 |
| Status: | Backlog |
| Project: | Core Server |
| Component/s: | Admin, Stability |
| Affects Version/s: | 2.6.0 |
| Fix Version/s: | None |
| Type: | Improvement | Priority: | Major - P3 |
| Reporter: | Jason Ford | Assignee: | Backlog - Query Execution |
| Resolution: | Unresolved | Votes: | 25 |
| Labels: | maxTimeMs, performance | ||
| Remaining Estimate: | Not Specified | ||
| Time Spent: | Not Specified | ||
| Original Estimate: | Not Specified | ||
| Issue Links: | |
| Assigned Teams: | Query Execution |
| Participants: | |
| Case: | (copied to CRM) |
| Description |
|
MongoDB 2.6 introduced the .maxTimeMS() cursor method, which allows you to specify a maximum running time for each query. This is awesome for ad-hoc queries, but I wondered if there was a way to set this value at a per-instance or per-database (or even per-collection) level, to help prevent long-running operations from holding locks. And if so, could that value then be overridden on a per-query basis? I would love to set an instance-level timeout of 3000 ms or thereabouts (since that would be a pretty extreme running time for queries issued by my application), but then be able to ignore it if I had a report to run. |
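For reference, a minimal mongo shell sketch of the per-query mechanism the description refers to (the collection name "orders" is a made-up example):

```javascript
// Cap this one query at 3000 ms; if it runs longer, the server
// aborts it and the shell reports a MaxTimeMSExpired error.
db.orders.find({ status: "pending" }).maxTimeMS(3000)

// The same limit can also be passed as a command option, e.g.:
db.runCommand({ count: "orders", query: { status: "pending" }, maxTimeMS: 3000 })
```

As the request notes, this must currently be repeated on every query; there is no server- or database-wide default to inherit from.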
| Comments |
| Comment by Asya Kamsky [ 13/Jan/15 ] |
|
Currently this can be simulated by a script running on the server (or just connecting to it) which periodically queries for all current operations on a particular namespace running longer than a certain amount of time, is careful to exclude operations that don't come from a client (i.e. mongod-internal, replication-related, or sharding-related operations), and then kills them. The docs have examples starting here: You want something like db.currentOp({secs_running: {$gt: 1}, ns: "theNameSpace"}) (with other conditions added) to identify ops to kill. Starting with the upcoming 2.8 release there is a "microsecs_running" field in addition to "secs_running". |
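The filtering step Asya describes can be sketched as a plain JavaScript helper. This is a hedged example, not official tooling: the function name opsToKill is made up, and the field names (opid, secs_running, ns, op, client) follow the shape of db.currentOp().inprog output; checking for a client field is one heuristic for excluding internal/replication operations.

```javascript
// Given the array from db.currentOp().inprog, return the opids of
// client-originated queries on `namespace` running longer than `maxSecs`.
function opsToKill(inprog, namespace, maxSecs) {
  return inprog
    .filter(op =>
      op.ns === namespace &&
      op.secs_running > maxSecs &&
      op.client !== undefined &&                    // exclude internal ops
      (op.op === "query" || op.op === "getmore"))   // only read operations
    .map(op => op.opid);
}

// In the mongo shell, the kill loop would then look something like:
//   opsToKill(db.currentOp().inprog, "mydb.mycoll", 3)
//     .forEach(function (id) { db.killOp(id); });
```

Run from a cron job or a background shell session, this approximates an instance-level timeout until a real server-side setting exists.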