[GODRIVER-211] Revise connection pooling options on Client/connstring Created: 30/Jan/18  Updated: 28/Oct/23  Resolved: 27/Nov/18

Status: Closed
Project: Go Driver
Component/s: Options & Configuration
Affects Version/s: None
Fix Version/s: 0.1.0

Type: Improvement Priority: Major - P3
Reporter: David Golden Assignee: Isabella Siu (Inactive)
Resolution: Fixed Votes: 0
Labels: Stitch, beta
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified

Issue Links:
Depends
Epic Link: Finalize mongo API

 Description   

Revised:
There are many connection options supported at the private level, but the connection string and Client object should support only two, for consistency with other drivers (a usage sketch follows the list):

  • maxPoolSize - maximum number of (non-monitor) simultaneous connections per host
  • maxIdleTimeMS - time after which idle connections are closed; if set, then after an activity spike up to maxPoolSize, idle connections will drain away.
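For illustration, here is a minimal sketch of setting these two options programmatically; the import paths and setter names come from the driver's later stable 1.x API rather than the 0.1.0 surface under discussion, and the host/values are made up:

{code:go}
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	// Both options can also be supplied in the URI as maxPoolSize / maxIdleTimeMS.
	opts := options.Client().
		ApplyURI("mongodb://localhost:27017").
		SetMaxPoolSize(100).                  // maxPoolSize: at most 100 non-monitor connections per host
		SetMaxConnIdleTime(30 * time.Second) // maxIdleTimeMS: close connections idle longer than 30s

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	client, err := mongo.Connect(ctx, opts)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(context.Background())
}
{code}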

Original:

I don't see how connection pool options in the connection string (which appear to be non-standard anyway) are wired up to affect the actual connection pooling.

We should investigate to confirm if this is working or broken and, if broken, develop a plan to fix it.

We don't necessarily need to fix problems for the alpha release, but we do need to be able to describe known issues to address in the beta.



 Comments   
Comment by Githook User [ 27/Nov/18 ]

Author: Isabella Siu <isabella.siu@10gen.com>

Message: GODRIVER-211 revise connection pooling options on client/connstring

Change-Id: Ic9b1f10ef3646e535a19a20a53decc0444fe1fe1
Branch: master
https://github.com/mongodb/mongo-go-driver/commit/802b0ba4dad61e1d1b523ad309bc3738b197ee26

Comment by David Golden [ 05/Feb/18 ]

Jesse suggested skipping "minPoolSize" and seeing if anyone actually requests it.

Comment by David Golden [ 31/Jan/18 ]

Thank you for explaining it further, Craig. To confirm my understanding, this is what I think you're saying:

  • MaxConnsPerHost is tracked via the CappedProvider (at the server level) rather than via the pool, so that the maximum connection limit can be enforced even when connections live in different pools.

Some of my thoughts about where to go from here:

  • I don't see a lot of value in having both "maxIdleTimeMS" and "maxLifeTimeMS", and I'm curious what the use case for the latter is and why it's such a common option elsewhere. As the manual only lists the former, I'm inclined to expose only that until we hear otherwise.
  • At the client/connstring level, I don't think "maxConnsPerHost" should be exposed. I'd prefer to stick with the standard URI options that are already documented. The private server can still support maxConnsPerHost to meet the BICs requirement.
  • "maxIdleConnsPerHost" is another interesting knob, but I think it would be unique to the Go driver. I'm inclined to stick with maxIdleTimeMS until we get user feedback that it is insufficient.
  • I think we should consider implementing a "minPoolSize" parameter with semantics like PyMongo's, where it's the minimum number of connections (idle or in use).

In summary, I think this is what the client should offer as connection string options (an example URI follows the list):

  • maxPoolSize - maximum number of (non-monitor) simultaneous connections per host
  • minPoolSize - minimum number of (non-monitor) simultaneous connections per host (i.e. open new connections in the background if ever below this number)
  • maxIdleTimeMS - time after which idle connections are closed; if set, then after an activity spike up to maxPoolSize, idle connections will drain away. If they drop below minPoolSize, they will be refreshed and replaced.
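To make the proposal concrete, a connection string carrying all three options might look like the following; the host and values are purely illustrative, and the option names follow the standard MongoDB URI options:

{code}
mongodb://db0.example.com:27017/?maxPoolSize=100&minPoolSize=10&maxIdleTimeMS=30000
{code}

With these values, the per-host pool would grow to at most 100 connections under load, drain connections idle for more than 30 seconds, and replenish in the background whenever it drops below 10.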

I don't think we need "waitQueueMultiple" and "waitQueueTimeoutMS", since clients can control timeouts/cancellation via context objects to avoid blocking indefinitely.
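As a sketch of that last point, an operation (including the wait for a pooled connection) can be bounded with a context deadline; the database/collection names and timeout below are made up, and the API shown is the driver's later 1.x surface:

{code:go}
package main

import (
	"context"
	"log"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	client, err := mongo.Connect(context.Background(),
		options.Client().ApplyURI("mongodb://localhost:27017").SetMaxPoolSize(10))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Disconnect(context.Background())

	// Instead of waitQueueTimeoutMS, bound the whole operation with a context
	// deadline; if the pool stays saturated for more than 2 seconds, the call
	// returns a context deadline error rather than blocking indefinitely.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	var doc bson.M
	err = client.Database("test").Collection("items").FindOne(ctx, bson.D{}).Decode(&doc)
	if err != nil {
		log.Println("find failed or timed out:", err)
	}
}
{code}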
