[SERVER-6726] Add configuration to "mongos" to allow finer-grained control of the connection pool Created: 07/Aug/12  Updated: 23/Mar/15  Resolved: 26/Jun/14

Status: Closed
Project: Core Server
Component/s: Internal Client, Networking, Stability
Affects Version/s: None
Fix Version/s: None

Type: New Feature Priority: Major - P3
Reporter: Hiroaki Assignee: Unassigned
Resolution: Done Votes: 2
Labels: connection, mongos, pull-request
Remaining Estimate: Not Specified
Time Spent: Not Specified
Original Estimate: Not Specified
Environment:

Various platforms.
Especially Linux distributions: CentOS, Debian (x86_64, i686), and so on.


Issue Links:
Depends
Duplicate
Participants:

 Description   

Please review and accept this feature, which enables us to control the connection pools.

We are trying to use it with the parameters "maxSpareConnPools=50" and "connPoolTimeout=300",
because it is difficult for us to keep long-idling TCP connections open in our data center.
They can cause the session table on the switch to overflow, and eventually the connections are dropped.

==== Pull-Request ====
https://github.com/mongodb/mongo/pull/278

I'm Hiroaki Kubota from rakuten.co.jp. (hiroaki.kubota@mail.rakuten.com)

We are using MongoDB in a sharded configuration,
connecting to the sharded cluster from approximately 100 web servers
(each running 50 httpd processes).
We also run a "mongos" on each web server.

Normally the number of connections between each "mongos" and "mongod" is approximately 55.
That is fine.
But sometimes the number of connections rises to 300-350 on every web server,
and "mongod" refuses further connections from the "mongos" instances once it reaches its 20000-connection limit.

So we want to control the connection pool in the "mongos".

You already provide the "maxConns" configuration.
However, "maxConns" limits the client-facing side of "mongos",
so it does not help in our case.

So we added some configuration options to control "mongos" more precisely:

maxSpareConnPools
Default: 50.
Used in PoolForHost::getStaleConnections().

connPoolTimeout
Default: 1800 (seconds). Currently hard-coded.
Used in PoolForHost::StoredConnection::ok().

Please review it and accept this pull-request.

Regards,



 Comments   
Comment by Matt Kangas [ 26/Jun/14 ]

From Greg:

It's not 100% clear what is meant by "release connections positively", but if you are looking for a way to flush the connection pool, the "connPoolSync" command should work, particularly in 2.6, where every connection is returned to the pool between requests.

There's an internal parameter in v2.6 as well to control connection pool size:

> db.runCommand({getParameter:'*'})
{
...
	"connPoolMaxConnsPerHost" : 200,
	"connPoolMaxShardedConnsPerHost" : 200,
}
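For completeness, a hypothetical session combining the two (whether these internal parameters are settable at startup only or also at runtime should be verified against your server version; treat the exact invocation as an assumption, not documented behavior):

# assumed: set the internal parameter at mongos startup
mongos --setParameter connPoolMaxShardedConnsPerHost=100 ...

> // flush idle pooled connections from the shell
> db.adminCommand({connPoolSync: 1})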

We believe this issue is now resolved. Feel free to reopen this ticket or open a new ticket if you feel otherwise.

Comment by Hiroaki [ 09/May/13 ]

Thanks!!
I confirmed the behavior of the releaseConnectionsAfterResponse option.
It worked well, so I will always set this option to true.

But it does not resolve this issue,
because mongos still will not proactively release connections, even when the releaseConnectionsAfterResponse option is specified.

So I still need a way to release connections from mongos.

Comment by Greg Studer [ 07/May/13 ]

> I understood that the releaseConnectionsAfterResponse option totally invalidates the connection pool in mongos. Am I correct?

No, the option allows better use of the connection pool, it doesn't invalidate the connections in the pool. Normally, mongos->mongod connections for insert/update/delete/query are cached individually for each incoming connection, and can't be re-used until the incoming connection is closed, even if they are idle and there are other active incoming connections.

What the releaseConnectionsAfterResponse option does is allow the mongos->mongod connection to be re-used (returned to the pool) after any read op (including getLastError(), so after safe writes as well). It shouldn't have a significant performance impact - the connection isn't destroyed, it's just returned from the incoming connection cache to the shared pool early.
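The distinction between the per-incoming-connection cache and the shared pool can be modeled roughly like this. This is a Python sketch of the semantics described above, not the actual mongos implementation; all names are illustrative.

```python
class MongosSketch:
    """Models mongos->mongod connection handling for incoming clients.

    Without releaseConnectionsAfterResponse, each incoming connection
    holds its backend connection in a private cache until the client
    disconnects. With the option on, the backend connection returns to
    the shared pool after each response, so other clients can re-use it.
    """

    def __init__(self, release_after_response=False):
        self.release_after_response = release_after_response
        self.shared_pool = []       # idle backend (mongod) connections
        self.per_client_cache = {}  # client_id -> backend connection
        self._opened = 0

    def _checkout_backend(self):
        if self.shared_pool:
            return self.shared_pool.pop()
        self._opened += 1           # open a new mongod connection
        return f"backend-{self._opened}"

    def handle_request(self, client_id):
        # Re-use this client's cached backend connection if present,
        # otherwise take one from the shared pool (or open a new one).
        conn = self.per_client_cache.pop(client_id, None) \
               or self._checkout_backend()
        # ... perform the op on `conn` ...
        if self.release_after_response:
            self.shared_pool.append(conn)            # re-usable at once
        else:
            self.per_client_cache[client_id] = conn  # held until disconnect
        return conn

    def backend_connections_opened(self):
        return self._opened
```

In this model, three clients issuing sequential requests open three backend connections without the option, but share a single backend connection with it on, which matches the "returned to the pool early" behavior described above.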

> Yes, we have to design and control our system structure minutely for our service stability and performance. So we strongly need the way to control mongos behaviour around connection pool as much as possible.

Sure, I just wanted to emphasize that this particular parameter, maxSpareConnPools, won't prevent the connection overloading since the extra connections aren't actually stored in the pool, they're stored in a per-incoming-connection cache.

Comment by Hiroaki [ 03/May/13 ]

> for example, from holding 300 mongos->mongod connections in each connection's (ShardConnection) cache.
Yes, we have to design and control our system structure minutely for our service stability and performance.
So we strongly need the way to control mongos behaviour around connection pool as much as possible.

By the way, the ideal connection-pool behavior for us would be for mongod to determine each connection's fate dynamically: whether it should be pooled or destroyed.

Comment by Hiroaki [ 03/May/13 ]

Thanks, I will try the releaseConnectionsAfterResponse option next week. (This week is a national holiday in Japan.)

I took a glance at server.cpp.
I understood that the releaseConnectionsAfterResponse option totally invalidates the connection pool in mongos. Am I correct?

I am thinking about its performance impact for our service, which makes about 10 requests per page view.

The ideal behavior for us: mongos keeps pooled connections up to a specified maximum for performance, and connections exceeding that maximum are destroyed.

Comment by Jon Hoffman [ 03/May/13 ]

I brought this ticket up with Eliot, and he said this wouldn't do what we wanted. That's when he created the releaseConnectionsAfterResponse patch, which has been working great for us. Massive reduction in primary connections.

Comment by Greg Studer [ 03/May/13 ]

crumbjp - Thanks for submitting this pull request - have you tested this fix under load on your mongoses? If I'm reading this correctly, this will allow you to specify that your connection pools are periodically trimmed to some configurable size X, but it won't actually prevent the 300 active web server connections, for example, from holding 300 mongos->mongod connections in each connection's (ShardConnection) cache.

For this, you'd need something similar to SERVER-9022, which allows mongos to share mongos->mongod connections between active incoming connections - this looks like it's the core issue. 2.2.4 has the releaseConnectionsAfterResponse option - would trying that option be possible?

hoffrocket - have a particular use case in mind? Ideally we'd like to keep the knobs here pretty minimal, and we're working on multiplexing ops over the same connections, which would eliminate the problem entirely, but if the defaults here are causing you pain we'd definitely like to know.

Comment by Jon Hoffman [ 25/Feb/13 ]

These limits might be helpful for us at Foursquare as well.

Comment by Hiroaki [ 18/Jan/13 ]

I posted the pull-request again.
https://github.com/mongodb/mongo/pull/359

Comment by Hiroaki [ 10/Aug/12 ]

I confirmed it.

We have begun discussing this Agreement within our company.
Unfortunately, next week is Bon, a summer holiday in Japan,
so we probably cannot respond about the Agreement until around 8/27.

Sorry for the inconvenience.

Regards,

Comment by Ian Whalen (Inactive) [ 07/Aug/12 ]

Hiroaki, thanks for submitting your pull request. Before we can begin to discuss this with you we'll need you to sign our Contributor Agreement available at http://www.10gen.com/contributor.

Generated at Thu Feb 08 03:12:31 UTC 2024 using Jira 9.7.1#970001-sha1:2222b88b221c4928ef0de3161136cc90c8356a66.