SERVER-64584: Don't hard-code the connection thread stack size

    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Major - P3
    • Affects Version/s: None
    • Component/s: Service Arch

      We are currently setting the connection thread stack size to a hard-coded 1MB here:

      https://github.com/mongodb/mongo/blob/0e1fc9235a8aea220f10bacb76ef44fecd98bdfd/src/mongo/transport/service_executor_utils.cpp#L81

      Technically, we set the size to the current stack size ulimit or 1MB, whichever is less. This was done at the request of a user a long, long time ago in SERVER-2707 to reduce memory pressure on highly-threaded systems:

      https://groups.google.com/g/mongodb-user/c/GOAOwYH483c
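
      The current behavior amounts to something like the following minimal sketch (names are illustrative, not the actual server code; assumes POSIX threads on Linux):

        #include <pthread.h>
        #include <sys/resource.h>

        #include <algorithm>
        #include <cstddef>

        // Illustrative: clamp the thread stack size to the lesser of the
        // RLIMIT_STACK soft limit and a hard-coded 1MB, as the linked code does.
        static size_t connectionThreadStackSize() {
            constexpr size_t kDefault = 1024 * 1024;  // 1MB
            rlimit limits{};
            if (getrlimit(RLIMIT_STACK, &limits) == 0 && limits.rlim_cur != RLIM_INFINITY) {
                return std::min<size_t>(limits.rlim_cur, kDefault);
            }
            return kDefault;
        }

        static void launchConnectionThread(void* (*fn)(void*), void* arg) {
            pthread_attr_t attrs;
            pthread_attr_init(&attrs);
            pthread_attr_setstacksize(&attrs, connectionThreadStackSize());
            pthread_t thread;
            pthread_create(&thread, &attrs, fn, arg);
            pthread_attr_destroy(&attrs);
            pthread_detach(thread);
        }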

      That premise does not actually hold. The initial stack size is not an allocation but a reservation of address space; keeping the stack size at the default 8MB limit introduces no additional memory pressure. The ulimit only prevents a thread from actually using more than a certain amount of stack, and it does so by inducing an artificial stack exhaustion. In any case, none of this is possible on Windows, so the clamp also represents a self-imposed platform difference.
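
      The distinction is easy to observe on Linux. A hypothetical demonstration: compare VmSize against VmRSS in /proc/self/status before and after spawning threads with default-sized stacks:

        #include <pthread.h>
        #include <unistd.h>

        #include <cstdio>
        #include <cstring>

        // Print resident (VmRSS) and virtual (VmSize) memory for this process.
        static void printMemUsage(const char* label) {
            std::FILE* f = std::fopen("/proc/self/status", "r");
            if (!f)
                return;
            char line[256];
            std::printf("%s:\n", label);
            while (std::fgets(line, sizeof(line), f)) {
                if (std::strncmp(line, "VmRSS", 5) == 0 || std::strncmp(line, "VmSize", 6) == 0)
                    std::printf("  %s", line);
            }
            std::fclose(f);
        }

        static void* idle(void*) {
            pause();  // park the thread so its stack stays mapped
            return nullptr;
        }

        int main() {
            printMemUsage("before");
            pthread_attr_t attrs;
            pthread_attr_init(&attrs);
            pthread_attr_setstacksize(&attrs, 8 * 1024 * 1024);  // default-sized 8MB stacks
            for (int i = 0; i < 100; ++i) {
                pthread_t t;
                pthread_create(&t, &attrs, idle, nullptr);
                pthread_detach(t);
            }
            // VmSize grows by roughly 800MB of reserved address space, but VmRSS
            // barely moves: the kernel commits stack pages only when touched.
            printMemUsage("after 100 threads");
            return 0;
        }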

      Since stack consumption is generally a function of platform (compiler, kernel, etc.) and usage, the optimal stack size is not predictable in advance. We should instead offer an option that lets users on supported platforms set an initial stack size lower than the default 8MB and (potentially) an upper limit using setrlimit, as sketched below. This should be documented as a feature for resource-constrained platforms, not as a general means of limiting resource consumption on production systems.
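
      A sketch of what the option might look like; the parameter name and plumbing here are hypothetical, not the actual design:

        #include <pthread.h>
        #include <sys/resource.h>

        #include <algorithm>
        #include <climits>  // PTHREAD_STACK_MIN
        #include <cstddef>

        // Hypothetical option: 0 means "keep the platform default reservation"
        // (typically 8MB on Linux).
        static size_t gConnectionThreadStackSizeKB = 0;

        static void applyStackSizeOption(pthread_attr_t* attrs) {
            if (gConnectionThreadStackSizeKB == 0)
                return;  // no override requested
            size_t bytes = gConnectionThreadStackSizeKB * 1024;
            bytes = std::max<size_t>(bytes, PTHREAD_STACK_MIN);  // respect the floor
            pthread_attr_setstacksize(attrs, bytes);
        }

        // The "(potentially) an upper limit using setrlimit" half: lower the soft
        // RLIMIT_STACK so exceeding it induces the artificial stack exhaustion
        // described above. POSIX-only; there is no Windows equivalent.
        static bool capStackUsage(rlim_t maxBytes) {
            rlimit limits{};
            if (getrlimit(RLIMIT_STACK, &limits) != 0)
                return false;
            limits.rlim_cur = std::min(limits.rlim_max, maxBytes);
            return setrlimit(RLIMIT_STACK, &limits) == 0;
        }

      Per the description above, the rlimit caps actual usage while the thread attribute controls the initial reservation, which is why the two knobs are separate.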

            Assignee: backlog-server-servicearch (Backlog - Service Architecture)
            Reporter: Ryan Egesdahl (Inactive)
            Votes: 1
            Watchers: 7
